embedding

Converting text chunks into dense vector representations. Similar content produces similar vectors, enabling semantic search across the corpus.
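"Similar vectors" is usually measured with cosine similarity. A minimal sketch with toy 4-dimensional vectors standing in for real embeddings (the values are hypothetical, chosen only to illustrate the scoring):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 = identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for embeddings of a query and two documents:
query = np.array([0.9, 0.1, 0.0, 0.2])
doc_similar = np.array([0.8, 0.2, 0.1, 0.3])    # semantically close content
doc_unrelated = np.array([0.0, 0.9, 0.8, 0.1])  # semantically distant content

print(cosine_similarity(query, doc_similar))    # high (near 1.0)
print(cosine_similarity(query, doc_unrelated))  # low (near 0.0)
```

Real embedding models produce the same effect in hundreds of dimensions: texts about the same topic end up with high cosine similarity, which is what makes semantic search work.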

Syntax

vector = embed_model.encode(text)  # float32 array; 384-1536 dims depending on the model

Example

# Embedding documents for RAG:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384 dims

# Embed all chunks:
chunk_texts = [c.page_content for c in chunks]  # chunks: LangChain Document objects
embeddings = model.encode(chunk_texts, batch_size=32)

print(f"Embedding shape: {embeddings.shape}")
# (num_chunks, 384)

# Now store in vector DB