AI Integration
Couchbase for AI Applications
Leverage Couchbase's vector search, full-text search, and in-memory performance for building AI-powered applications.
Couchbase Capella: AI-Ready Database as a Service
Couchbase Capella is the fully managed cloud offering of Couchbase. It includes:
- Automated provisioning, scaling, and backups
- Vector Search — built on a distributed vector index for semantic search
- Full-Text Search (FTS) — built-in text search (based on the Bleve engine) integrated with N1QL
- Capella iQ — AI assistant for generating N1QL queries from natural language
Vector Search in Couchbase
Couchbase Capella now supports vector search, enabling semantic similarity queries on embeddings stored alongside your documents. This makes it a strong choice for RAG (Retrieval-Augmented Generation) pipelines where your documents already live in Couchbase.
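Under the hood, vector search ranks documents by how close their embeddings are to the query embedding, most commonly by cosine similarity. A minimal sketch of that metric in plain JavaScript (independent of any Couchbase API) shows what "semantic similarity" actually computes:

```javascript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same direction score 1; orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [2, 0])); // → 1
console.log(cosineSimilarity([1, 0], [0, 3])); // → 0
```

In practice the vector index approximates this ranking over millions of documents rather than scanning every embedding.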
```javascript
import couchbase from 'couchbase';

const cluster = await couchbase.connect('couchbases://your-capella.cloud.couchbase.com', {
  username: 'admin',
  password: 'password',
});

// Store a document with its embedding
const collection = cluster.bucket('knowledge').defaultCollection();
await collection.upsert('doc:001', {
  type: 'article',
  title: 'Understanding Transformers',
  body: 'The transformer architecture...',
  embedding: [0.12, -0.34, 0.56, /* ... 1536 dimensions */],
});

// Vector similarity search via the Search API
const searchRequest = couchbase.SearchRequest.create(
  couchbase.VectorSearch.fromVectorQuery(
    couchbase.VectorQuery.create('embedding', queryEmbedding)
      .numCandidates(100)
      .boost(1.5)
  )
);
const result = await cluster.search('knowledge-vector-index', searchRequest, {
  limit: 10,
  fields: ['title', 'body'],
});

for (const row of result.rows) {
  console.log(row.id, row.score, row.fields);
}
```

Full-Text Search + N1QL (Hybrid Search)
Couchbase uniquely allows you to combine full-text search with N1QL queries in a single request — a powerful pattern for AI applications that need both semantic relevance and structured filters:
```sql
-- SEARCH() function integrates FTS into N1QL
SELECT META().id, title, author, publishedAt,
       SEARCH_SCORE() AS relevanceScore
FROM knowledge
WHERE SEARCH(knowledge, {
  "query": {
    "match": "transformer attention mechanism",
    "field": "body",
    "fuzziness": 1
  }
})
AND publishedAt > '2023-01-01'
AND category = 'machine-learning'
ORDER BY SEARCH_SCORE() DESC
LIMIT 20;
```

RAG Pipeline with Couchbase
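The pipeline below assumes documents were embedded at index time. Long documents are typically split into overlapping chunks first, so each piece fits the embedding model's context window and retrieval returns focused passages. A minimal word-based chunker (the chunk size and overlap are illustrative defaults to tune):

```javascript
// Split text into word-based chunks with overlap, so each chunk
// stays small enough to embed and adjacent chunks share context.
function chunkText(text, chunkSize = 200, overlap = 40) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  for (let start = 0; start < words.length; start += chunkSize - overlap) {
    chunks.push(words.slice(start, start + chunkSize).join(' '));
    if (start + chunkSize >= words.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and upserted as its own document (with a reference back to the parent), so the vector search in step 2 below retrieves passages rather than whole articles.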
```javascript
import couchbase from 'couchbase';
import Anthropic from '@anthropic-ai/sdk';

const cluster = await couchbase.connect(process.env.CB_CONN_STRING, {
  username: process.env.CB_USERNAME,
  password: process.env.CB_PASSWORD,
});

async function ragQuery(userQuestion) {
  // 1. Generate an embedding for the question
  const embeddingResp = await fetch('https://api.voyageai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.VOYAGE_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ input: userQuestion, model: 'voyage-3' }),
  });
  const { data } = await embeddingResp.json();
  const queryVector = data[0].embedding;

  // 2. Vector search in Couchbase
  const searchRequest = couchbase.SearchRequest.create(
    couchbase.VectorSearch.fromVectorQuery(
      couchbase.VectorQuery.create('embedding', queryVector).numCandidates(20)
    )
  );
  const results = await cluster.search('docs-index', searchRequest, {
    limit: 5,
    fields: ['title', 'body'],
  });
  const context = results.rows
    .map(r => r.fields.body)
    .join('\n\n');

  // 3. Generate an answer with Claude
  const anthropic = new Anthropic();
  const response = await anthropic.messages.create({
    model: 'claude-opus-4-5',
    max_tokens: 1024,
    messages: [{
      role: 'user',
      content: `Answer this question using the context provided.

Context:
${context}

Question: ${userQuestion}`,
    }],
  });
  return response.content[0].text;
}
```

When to Choose Couchbase
Couchbase is an excellent choice when you need:
- SQL familiarity with NoSQL scale — N1QL dramatically reduces the learning curve
- In-memory performance for a subset of hot data
- Mobile sync — Couchbase Lite + Sync Gateway for iOS/Android offline-first apps
- Unified platform — Search, analytics, caching, and documents in one system
- Enterprise support — SLAs, security certifications, professional services
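When the unified platform is used for both keyword and semantic retrieval, application code often blends the two relevance signals into one ranking. A hedged sketch of that pattern — the weighting and score normalization are assumptions to tune, not a Couchbase API:

```javascript
// Blend a keyword-relevance score and a vector-similarity score.
// Both inputs are assumed normalized to [0, 1]; alpha weights the
// semantic score (alpha = 1 means pure vector search).
function hybridScore(vectorScore, textScore, alpha = 0.7) {
  return alpha * vectorScore + (1 - alpha) * textScore;
}

// Re-rank candidate rows by the blended score, highest first.
function rerank(rows, alpha = 0.7) {
  return [...rows].sort(
    (a, b) =>
      hybridScore(b.vectorScore, b.textScore, alpha) -
      hybridScore(a.vectorScore, a.textScore, alpha)
  );
}
```

With `alpha = 0.7`, a document that is semantically close but lexically distant can still outrank an exact keyword match, which is usually the desired behavior in RAG retrieval.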
Example
```javascript
// Couchbase Capella: store docs with embeddings for RAG
import couchbase from 'couchbase';

const cluster = await couchbase.connect(process.env.CB_CONN_STRING, {
  username: process.env.CB_USERNAME,
  password: process.env.CB_PASSWORD,
});
const collection = cluster.bucket('knowledge').defaultCollection();

async function indexDocument(doc, embedding) {
  await collection.upsert(`doc:${doc.id}`, {
    type: 'document',
    title: doc.title,
    body: doc.body,
    category: doc.category,
    embedding,
    indexedAt: new Date().toISOString(),
  });
}
```