Getting Started

LangChain Introduction

Get started with LangChain, an open-source framework for building LLM-powered applications.

What is LangChain?

LangChain is an open-source framework that simplifies building applications with large language models. It provides:

  • Abstractions for LLMs, prompts, memory, and chains
  • Integrations with 100+ LLMs, vector stores, and tools
  • Components for common patterns like RAG, agents, and chatbots
  • LangSmith: Observability and debugging platform

Core Components

  • Models: Unified interface to any LLM (OpenAI, Anthropic, local models)
  • Prompts: Templates and management
  • Chains: Sequences of operations (LCEL)
  • Memory: Conversation history and persistence
  • Agents: LLM + tools in a loop
  • Vector Stores: Document storage and retrieval
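To see how these components relate, here is a conceptual sketch in plain Python (deliberately not the LangChain API): a prompt fills a template, a model turns the prompt into a response, an output parser normalizes the result, and a chain runs the stages in sequence. The `fake_llm` stub stands in for a real model call.

python
# Conceptual sketch of the component roles (plain Python, not LangChain).

def prompt(variables):
    # Prompts: fill a template with input variables
    return f"You are a helpful {variables['role']}. {variables['question']}"

def fake_llm(text):
    # Models: take a prompt string, return a response (stubbed here)
    return f"Answer to: {text}"

def output_parser(response):
    # Output parsers: normalize the raw model output
    return response.strip()

def chain(variables):
    # Chains: run the stages in sequence
    return output_parser(fake_llm(prompt(variables)))

print(chain({"role": "tutor", "question": "What is a list?"}))
# Answer to: You are a helpful tutor. What is a list?

In real LangChain code each of these roles is an object (a prompt template, a chat model, a parser) that exposes the same `invoke` interface, which is what makes them composable.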

LangChain Expression Language (LCEL)

LCEL is LangChain's composable pipeline syntax using the | operator:

python
chain = prompt | llm | output_parser
result = chain.invoke({"input": "Hello"})
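The `|` composition works because each stage overloads Python's `__or__` operator. The toy re-implementation below illustrates the idea; the names `Runnable` and `invoke` mirror LangChain's vocabulary, but this is an illustrative sketch, not the library's actual implementation.

python
# Toy version of the pipe-composition mechanism behind LCEL.

class Runnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` builds a new Runnable that pipes a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

upper = Runnable(str.upper)
exclaim = Runnable(lambda s: s + "!")

chain = upper | exclaim
print(chain.invoke("hello"))  # HELLO!

Because every composed pipeline is itself a runnable with the same interface, chains can be nested, batched, and streamed uniformly, as the example below shows with the real library.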

Example

python
# pip install langchain langchain-anthropic langchain-openai
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Initialize a model
llm = ChatAnthropic(model="claude-3-5-haiku-20241022")

# Simple completion
response = llm.invoke("What is the capital of Japan?")
print(response.content)

# Using prompts
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful {role}."),
    ("human", "{question}")
])

# Create a chain with LCEL (| operator)
chain = prompt | llm | StrOutputParser()

# Run the chain
result = chain.invoke({
    "role": "Python tutor",
    "question": "What is a list comprehension?"
})
print(result)

# Batch processing
questions = [
    {"role": "historian", "question": "Who built the Eiffel Tower?"},
    {"role": "chef", "question": "What is the difference between baking and roasting?"},
]
results = chain.batch(questions)
for r in results:
    print(r[:100])

# Streaming
for chunk in chain.stream({"role": "poet", "question": "Write a haiku about Python"}):
    print(chunk, end="", flush=True)