Getting Started

Text Generation with Gemini

Master text generation, chat sessions, system instructions, and streaming responses with the Gemini API.

Generating Text

generateContent() is the core method for one-shot text generation; for multi-turn conversations, use startChat().

System Instructions

System instructions set the model's persona, guidelines, and constraints. They are passed during model initialization:

```typescript
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  systemInstruction: "You are a senior TypeScript developer. Always use strict types and add JSDoc comments.",
});
```

Generation Config

Control the output with generation configuration:

| Parameter | Range | Purpose |
| --- | --- | --- |
| `temperature` | 0.0–2.0 | Randomness (0 = deterministic) |
| `maxOutputTokens` | 1–8192 | Cap the response length |
| `topP` | 0.0–1.0 | Nucleus sampling threshold |
| `topK` | 1–40 | Top-K sampling |
| `stopSequences` | `string[]` | Stop generation on these strings |
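As a sketch, these parameters are passed together in a generationConfig object when creating the model; the values below are illustrative, not recommendations:

```typescript
// Illustrative config only — tune each value for your use case.
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  generationConfig: {
    temperature: 0.4,        // lower = more deterministic
    maxOutputTokens: 1024,   // hard cap on response length
    topP: 0.95,              // nucleus sampling threshold
    topK: 40,                // sample only from the 40 most likely tokens
    stopSequences: ["END"],  // halt generation when this string appears
  },
});
```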

Multi-Turn Chat

startChat() maintains conversation history automatically — no need to manually track messages:

```typescript
const chat = model.startChat({
  history: [
    { role: "user", parts: [{ text: "My name is Alice." }] },
    { role: "model", parts: [{ text: "Hello Alice! How can I help?" }] },
  ],
});

const response = await chat.sendMessage("What is my name?");
console.log(response.response.text()); // "Your name is Alice."
```

Streaming Responses

For long outputs or real-time interfaces, stream tokens as they are generated:
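The consumption pattern can be sketched without an API call. The mock below is a hypothetical stand-in for the SDK's stream — an async iterable of chunks, each exposing a text() method — so the accumulation loop is the same shape you would write against the real stream:

```typescript
// Hypothetical chunk shape mirroring the SDK's streaming chunks.
type Chunk = { text: () => string };

// Mock stream: yields pre-split parts as chunks (stands in for the API).
async function* mockStream(parts: string[]): AsyncGenerator<Chunk> {
  for (const p of parts) {
    yield { text: () => p };
  }
}

// Accumulate chunks exactly as you would in `for await (... of stream.stream)`.
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    full += chunk.text(); // append each token batch as it arrives
  }
  return full;
}

collect(mockStream(["Hello", ", ", "world"])).then(console.log); // prints "Hello, world"
```

In a UI, you would render each chunk.text() as it arrives instead of concatenating, which is what makes streaming feel responsive for long outputs.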

Example

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Basic generation with config
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-pro",
  systemInstruction: "You are a concise, accurate coding assistant.",
  generationConfig: {
    temperature: 0.2,
    maxOutputTokens: 2048,
  },
});

// One-shot generation
const result = await model.generateContent("Write a TypeScript utility to debounce a function.");
console.log(result.response.text());

// Multi-turn chat
const chat = model.startChat();
const msg1 = await chat.sendMessage("I'm building a REST API in Node.js.");
console.log(msg1.response.text());
const msg2 = await chat.sendMessage("What authentication strategy do you recommend?");
console.log(msg2.response.text()); // Has context from msg1

// Streaming
const stream = await model.generateContentStream("Explain async/await with examples.");
for await (const chunk of stream.stream) {
  process.stdout.write(chunk.text());
}
```