# Grounding with Google Search
Connect Gemini to Google Search for real-time, factual responses grounded in current web information.
## What is Grounding?
Grounding connects Gemini's responses to real-world information sources, reducing hallucinations and providing up-to-date answers. The primary grounding source is Google Search.
## Why Grounding Matters
Without grounding, LLMs can confidently state outdated or incorrect information. With Google Search grounding:
- Responses include citations to source URLs
- Information reflects current events (beyond training cutoff)
- Factual claims are verified against live web content
- Users can verify answers by checking the cited sources
### Grounding vs RAG
| Approach | Best For | Data Source |
|---|---|---|
| Google Search Grounding | Public, current web information | Live Google Search |
| RAG (Vector Store) | Private, internal documents | Your own data |
| Long Context | Fixed corpus that fits in context | Files you provide |
For applications requiring fresh public information (news, market prices, current events), Google Search grounding is the right tool. For proprietary data (your documentation, databases), use RAG.
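As a rough illustration of that routing decision, a query classifier might look like the sketch below. The keyword heuristics, the `chooseStrategy` function, and the strategy names are all invented for this example; they are not part of any Gemini API.

```typescript
// Hypothetical router: pick a retrieval strategy for a query.
// The keyword heuristics below are illustrative only.
type Strategy = "google-search-grounding" | "rag" | "long-context";

function chooseStrategy(query: string, hasPrivateCorpus: boolean): Strategy {
  const freshnessHints = ["latest", "today", "current", "news", "price"];
  const needsFreshInfo = freshnessHints.some((w) =>
    query.toLowerCase().includes(w)
  );
  if (needsFreshInfo) return "google-search-grounding"; // public, current info
  if (hasPrivateCorpus) return "rag"; // proprietary documents
  return "long-context"; // fixed corpus supplied directly in the prompt
}

console.log(chooseStrategy("What is the latest Gemini pricing?", false));
// → "google-search-grounding"
console.log(chooseStrategy("Summarize our internal design doc", true));
// → "rag"
```

In a real application the classification would more likely be done by the model itself (which is exactly what dynamic retrieval, described below, automates for the search case).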
## Dynamic Retrieval
Dynamic retrieval lets Gemini decide whether to use Google Search based on the confidence threshold you set. This avoids unnecessary search calls for questions the model can answer from its training data.
The `dynamicRetrievalConfig.dynamicThreshold` parameter (0.0–1.0) sets the minimum retrieval-relevance score at which the model performs a search. Lower thresholds mean the model searches more often; a threshold of 0 grounds every request.
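The decision can be pictured as a simple comparison: the model assigns the query a relevance score in [0, 1], and search runs when that score meets or exceeds the threshold. This is a sketch of the rule only, not the actual implementation:

```typescript
// Sketch of the dynamic-retrieval rule: the model scores how much a
// query would benefit from search, and search fires when the score
// reaches the configured threshold.
function shouldSearch(predictedScore: number, dynamicThreshold: number): boolean {
  return predictedScore >= dynamicThreshold;
}

// With a low threshold of 0.3, mildly time-sensitive queries trigger search:
console.log(shouldSearch(0.45, 0.3)); // → true
// A threshold of 1.0 effectively disables grounding:
console.log(shouldSearch(0.45, 1.0)); // → false
```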
## Grounding Metadata
Grounded responses include:
- `groundingChunks`: the source documents retrieved
- `groundingSupports`: which parts of the response are supported by which sources
- `searchEntryPoint`: the rendered Google Search widget for display
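To show how these fields fit together, here is a small helper that appends numbered source markers to a response using `groundingSupports` and `groundingChunks`. The `annotate` function and the interfaces are written for this sketch (the SDK ships its own types), and the metadata object is a hand-written mock in the documented field shape rather than a real API response:

```typescript
// Minimal mock shapes for the metadata fields described above.
interface GroundingChunk { web?: { uri: string; title: string } }
interface GroundingSupport {
  segment: { startIndex?: number; endIndex?: number; text?: string };
  groundingChunkIndices: number[];
}
interface GroundingMetadata {
  groundingChunks: GroundingChunk[];
  groundingSupports: GroundingSupport[];
}

// Append "[n]" markers after each supported segment of the answer text.
function annotate(text: string, meta: GroundingMetadata): string {
  let out = text;
  // Process supports from the end of the string so earlier indices stay valid.
  const supports = [...meta.groundingSupports].sort(
    (a, b) => (b.segment.endIndex ?? 0) - (a.segment.endIndex ?? 0)
  );
  for (const s of supports) {
    const end = s.segment.endIndex ?? 0;
    const marks = s.groundingChunkIndices.map((i) => `[${i + 1}]`).join("");
    out = out.slice(0, end) + marks + out.slice(end);
  }
  return out;
}

// Mock metadata in the documented shape:
const meta: GroundingMetadata = {
  groundingChunks: [{ web: { uri: "https://example.com", title: "Example" } }],
  groundingSupports: [
    { segment: { startIndex: 0, endIndex: 12 }, groundingChunkIndices: [0] },
  ],
};
console.log(annotate("Gemini is an LLM family.", meta));
// → "Gemini is an[1] LLM family."
```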
## Example
```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Model with Google Search grounding enabled (for Gemini 1.5 models,
// the grounding tool is `googleSearchRetrieval`)
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  tools: [{ googleSearchRetrieval: {} }],
});

// Query that benefits from up-to-date information
const result = await model.generateContent(
  "What are the latest Gemini API pricing updates and new model releases?"
);
const response = result.response;
console.log("Answer:", response.text());

// Access grounding metadata
const groundingMetadata = response.candidates?.[0]?.groundingMetadata;
if (groundingMetadata?.groundingChunks) {
  console.log("\nSources used:");
  groundingMetadata.groundingChunks.forEach((chunk, i) => {
    if (chunk.web) {
      console.log(`  ${i + 1}. ${chunk.web.title}`);
      console.log(`     ${chunk.web.uri}`);
    }
  });
}

// Dynamic retrieval: let Gemini decide when to search
const modelDynamic = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  tools: [{
    googleSearchRetrieval: {
      dynamicRetrievalConfig: {
        mode: "MODE_DYNAMIC",
        dynamicThreshold: 0.3, // search when the relevance score >= 0.3
      },
    },
  }],
});

// This will likely trigger a search because it asks about recent events
const recentResult = await modelDynamic.generateContent(
  "What AI models were released in the last month?"
);
console.log(recentResult.response.text());
```
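When displaying grounded answers in a web UI, the `searchEntryPoint` field carries pre-rendered HTML for the Google Search suggestions widget. A minimal sketch of extracting it, assuming the `renderedContent` field name from the grounding metadata (the `searchWidgetHtml` helper and the mock object are written for this example):

```typescript
// Extract the pre-rendered Search-suggestions HTML, if present.
// `meta` stands in for the groundingMetadata object from the response.
interface SearchEntryPoint { renderedContent?: string }

function searchWidgetHtml(meta: { searchEntryPoint?: SearchEntryPoint }): string | null {
  return meta.searchEntryPoint?.renderedContent ?? null;
}

// In a browser app you might inject it directly:
//   container.innerHTML = searchWidgetHtml(groundingMetadata) ?? "";
console.log(searchWidgetHtml({ searchEntryPoint: { renderedContent: "<div>widget</div>" } }));
// → "<div>widget</div>"
```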