Core Concepts

Chains with LCEL

Build powerful pipelines using LangChain Expression Language (LCEL).

LCEL: LangChain Expression Language

LCEL provides a unified way to compose LangChain components into pipelines using the pipe operator |.

Benefits of LCEL

  • Composability: Chain any components together
  • Streaming: First-class support
  • Parallelism: Run branches concurrently
  • Async: Native async/await support
  • Observability: Automatic LangSmith tracing

Common Patterns

Sequential Chains

python
chain = step1 | step2 | step3

Parallel Branches (RunnableParallel)

Run multiple chains simultaneously and combine results.

Conditional Logic (RunnableBranch)

Route to different chains based on input.

Passthrough

Pass input through unchanged with RunnablePassthrough.

Example

python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser, JsonOutputParser
from langchain_core.runnables import RunnableParallel, RunnablePassthrough, RunnableLambda
from pydantic import BaseModel

llm = ChatAnthropic(model="claude-3-5-haiku-20241022")
parser = StrOutputParser()

# Simple chain
summarize_chain = (
    ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
    | llm
    | parser
)

# Chained chains - translate then summarize
translate_prompt = ChatPromptTemplate.from_template(
    "Translate to English: {text}"
)
translate_chain = translate_prompt | llm | parser

# Chain two chains together; the lambda reshapes the translated string
# into the {"text": ...} dict that summarize_chain's prompt expects
translate_then_summarize = (
    translate_chain
    | (lambda translated: {"text": translated})
    | summarize_chain
)

# Parallel chain - run multiple tasks simultaneously
parallel_chain = RunnableParallel(
    summary=summarize_chain,
    word_count=RunnableLambda(lambda x: len(x["text"].split())),
    original=RunnablePassthrough()
)

result = parallel_chain.invoke({"text": "LangChain is a framework for building LLM applications. It provides tools for chaining AI components together in powerful ways."})
print(f"Summary: {result['summary']}")
print(f"Word count: {result['word_count']}")

# Structured output with Pydantic
class ArticleAnalysis(BaseModel):
    sentiment: str
    key_topics: list[str]
    summary: str

# Passing the model gives the parser a schema for get_format_instructions();
# the parsed result is a plain dict matching that schema
json_chain = (
    ChatPromptTemplate.from_template(
        "Analyze this article. Return JSON with sentiment, key_topics, summary: {article}"
    )
    | llm
    | JsonOutputParser(pydantic_object=ArticleAnalysis)
)

analysis = json_chain.invoke({"article": "AI is transforming software development, making developers more productive but also raising ethical concerns about job displacement."})
print(analysis)

# Error handling with fallbacks: if the primary chain raises an error,
# the fallback chain runs instead. (In practice the fallback would
# usually use a different, cheaper or more reliable model.)
fallback_chain = (
    ChatPromptTemplate.from_template("Answer: {question}")
    | llm
    | parser
).with_fallbacks([
    ChatPromptTemplate.from_template("Briefly answer: {question}")
    | llm
    | parser
])