embeddings

Dense vector representations of data (text, images, etc.) that capture semantic meaning. Similar items cluster together in embedding space.

Syntax

embedding = model.encode(text)  # float vector [0.1, -0.3, ...]

Example

# Text embeddings with sentence-transformers:
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["I love coding", "Programming is fun", "I enjoy chess"]
embeddings = model.encode(sentences)

# Cosine similarity — compare the full pairwise matrix:
from sklearn.metrics.pairwise import cosine_similarity
sim = cosine_similarity(embeddings)  # 3x3 matrix of pairwise similarities
print(f"Similar:    {sim[0][1]:.3f}")  # high for "coding" vs "programming"
print(f"Dissimilar: {sim[0][2]:.3f}")  # lower for "coding" vs "chess"
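The "similar items cluster together" property is what makes embeddings useful for semantic search: encode a query, then return the corpus item whose vector is closest. A minimal sketch with hand-made toy 3-D vectors standing in for real model outputs (which typically have hundreds of dimensions), using only NumPy:

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine of the angle between two vectors: 1.0 = same direction
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings — hypothetical values chosen so related sentences point
# in similar directions; a real model would produce these via .encode()
corpus = {
    "I love coding":      np.array([0.9, 0.1, 0.0]),
    "Programming is fun": np.array([0.8, 0.2, 0.1]),
    "I enjoy chess":      np.array([0.1, 0.9, 0.2]),
}

# Pretend this vector encodes the query "writing software"
query = np.array([0.85, 0.15, 0.05])

# Nearest neighbor by cosine similarity = the semantic search result
best = max(corpus, key=lambda s: cosine_sim(query, corpus[s]))
print(best)  # → I love coding
```

Vector databases (FAISS, pgvector, etc.) do exactly this lookup at scale, with approximate nearest-neighbor indexes instead of a linear scan.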