Pinecone

Use Pinecone's managed vector database for production-scale similarity search.

Pinecone is a fully managed vector database: there is no infrastructure to run, you just insert vectors and query them.

Key Features

  • Serverless: Automatically scales, pay per use
  • Namespaces: Logical partitions within an index
  • Metadata filtering: Filter by any attribute
  • Sparse-dense hybrid search: Combine keyword and semantic search
  • Freshness: Low-latency updates

Concepts

  • Index: The main container. Stores vectors of a specific dimension.
  • Namespace: Logical partition for multi-tenancy
  • Vector: An ID + values + optional metadata
  • Pod: Reserved compute capacity (for dedicated deployments)
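The `metric="cosine"` setting in the example below tells Pinecone how to score matches. Pinecone computes this server-side, but a plain-Python sketch of the formula helps build intuition for what the returned scores mean:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two vectors: 1.0 means same direction,
    # 0.0 means orthogonal (unrelated), -1.0 means opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # -> 0.0
```

Because the metric ignores vector length and only compares direction, two documents about the same topic score high even if one is much longer than the other.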

Example

python
# pip install pinecone openai  ("pinecone-client" is the deprecated older package name)
from pinecone import Pinecone, ServerlessSpec
from openai import OpenAI
import os

pc = Pinecone(api_key=os.environ.get("PINECONE_API_KEY"))
openai_client = OpenAI()

# Create index (one-time)
if "my-index" not in pc.list_indexes().names():
    pc.create_index(
        name="my-index",
        dimension=1536,  # text-embedding-3-small output size
        metric="cosine",
        spec=ServerlessSpec(cloud="aws", region="us-east-1")
    )

index = pc.Index("my-index")

# Generate embeddings
def embed(texts: list[str]) -> list[list[float]]:
    response = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=texts
    )
    return [item.embedding for item in response.data]

# Upsert vectors
def upsert_documents(documents: list[dict]):
    texts = [doc["content"] for doc in documents]
    embeddings = embed(texts)

    vectors = [
        {
            "id": doc["id"],
            "values": emb,
            "metadata": {
                "content": doc["content"],
                "source": doc.get("source", "unknown"),
            }
        }
        for doc, emb in zip(documents, embeddings)
    ]

    index.upsert(vectors=vectors, namespace="docs")
    print(f"Upserted {len(vectors)} vectors")
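
# Pinecone caps the size of a single upsert request, so for large corpora
# it is common practice to send batches (e.g. ~100 vectors per call).
# Hypothetical helper, reusing the `index` handle created above:
def upsert_in_batches(vectors: list[dict], batch_size: int = 100):
    for i in range(0, len(vectors), batch_size):
        index.upsert(vectors=vectors[i:i + batch_size], namespace="docs")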

# Query
def search(query: str, top_k: int = 5, filter: dict | None = None):
    query_embedding = embed([query])[0]

    results = index.query(
        vector=query_embedding,
        top_k=top_k,
        namespace="docs",
        filter=filter,
        include_metadata=True
    )

    return [
        {
            "id": match["id"],
            "score": match["score"],
            "content": match["metadata"]["content"],
        }
        for match in results["matches"]
    ]

# Test
docs = [
    {"id": "doc_1", "content": "Pinecone is a managed vector database", "source": "docs"},
    {"id": "doc_2", "content": "pgvector extends PostgreSQL with vector support", "source": "docs"},
]
upsert_documents(docs)

# Note: upserts are eventually consistent, so a just-written vector can
# take a few seconds to become queryable
results = search("managed cloud vector database")
for r in results:
    print(f"{r['score']:.3f}: {r['content']}")
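The `filter` argument of `search()` above takes Pinecone's MongoDB-style metadata filter syntax. A few common shapes as plain dicts, using the `source` field from the example (the `year` field here is hypothetical):

```python
# Exact match on one metadata field
by_source = {"source": {"$eq": "docs"}}

# Match any of several values
by_sources = {"source": {"$in": ["docs", "blog"]}}

# Multiple keys combine with an implicit AND
combined = {"source": {"$eq": "docs"}, "year": {"$gte": 2023}}
```

Pass any of these as the `filter` argument, e.g. `search("vector database", filter=by_source)`, to restrict the similarity search to vectors whose metadata matches.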