Safety settings

Configure content safety thresholds across four harm categories: harassment, hate speech, sexually explicit content, and dangerous content. Adjust the thresholds based on your application type and audience.
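Each setting pairs one harm category with one block threshold. As a minimal sketch, a hypothetical helper (`buildSafetySettings`, not part of the SDK) that applies a single threshold to all four categories could look like this; the string values mirror the SDK's `HarmCategory` and `HarmBlockThreshold` enums:

```javascript
// Hypothetical helper, not part of @google/generative-ai.
// String values mirror the SDK's HarmCategory / HarmBlockThreshold enums.
const ALL_HARM_CATEGORIES = [
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
];

// Build a safetySettings array applying one threshold to every category.
function buildSafetySettings(threshold) {
  return ALL_HARM_CATEGORIES.map((category) => ({ category, threshold }));
}
```

In practice you would pass the SDK enum members rather than raw strings, but the resulting array has the same shape either way.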

Syntax

gemini-api
safetySettings: [{ category: HarmCategory.X, threshold: HarmBlockThreshold.Y }]

Example

gemini-api
import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  safetySettings: [
    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH },
    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE },
    { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE },
    { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH },
  ],
});

// Check whether the prompt or the response was blocked by a safety filter
const result = await model.generateContent(prompt);
if (result.response.promptFeedback?.blockReason) {
  console.warn("Prompt blocked:", result.response.promptFeedback.blockReason);
} else if (result.response.candidates?.[0]?.finishReason === "SAFETY") {
  console.warn("Response blocked:", result.response.candidates[0].safetyRatings);
}