---
title: "LangGraph vs Plain LangChain — When to Use Which"
slug: "langgraph-vs-langchain"
description: "LangChain and LangGraph solve different problems. Picking the wrong one doubles your debugging time."
date: "2026-04-15"
tags: ["LangChain", "LangGraph", "Agents"]
readingTime: "7 min"
draft: false
---

# LangGraph vs Plain LangChain — When to Use Which
The most common mistake I see in AI engineering projects is using LangGraph when plain LangChain would have been enough — or using plain LangChain when the problem clearly needs LangGraph. Both choices cost time. Here's the decision framework I actually use.
## What LangChain LCEL is
LangChain's core abstraction, LCEL (LangChain Expression Language), is a pipeline. You chain components together with the `|` operator — a prompt, a model, an output parser — and data flows through them in one direction.
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Summarize this: {text}")
    | ChatAnthropic(model="claude-sonnet-4-20250514")
    | StrOutputParser()
)

result = chain.invoke({"text": document})
```
This is excellent for: single-pass summarization, classification, extraction, straightforward RAG where you retrieve once and answer once. If your problem has a clear input and a clear output with no branching logic, LCEL is the right tool.
## What LangGraph is
LangGraph is a state machine. You define nodes (functions that do work) and edges (transitions between nodes, which can be conditional). State persists across the entire execution and every node can read and write to it.
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict

class AgentState(TypedDict):
    query: str
    retrieved_docs: list
    answer: str
    needs_followup: bool

# Assumes `retriever` and `chain` are defined as in the LCEL example above.
# Nodes return partial state updates, which LangGraph merges into the state.
def retrieve(state: AgentState) -> dict:
    docs = retriever.invoke(state["query"])
    return {"retrieved_docs": docs}

def generate(state: AgentState) -> dict:
    answer = chain.invoke({"docs": state["retrieved_docs"], "query": state["query"]})
    return {"answer": answer}

def route(state: AgentState) -> str:
    # Loop back for another retrieval pass, or finish.
    return "retrieve" if state["needs_followup"] else END

graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_conditional_edges("generate", route)
app = graph.compile()
```
## Use LangChain LCEL when
- The flow is linear: input → process → output
- You don't need to branch based on intermediate results
- There's no state to carry between steps
- You want fast iteration — LCEL chains are easy to debug and modify
## Use LangGraph when
- You need conditional routing — different paths based on what the model returns
- You're building a multi-agent system — different agents hand off to each other
- You need retry loops — e.g., if quality check fails, re-retrieve and regenerate
- You need human-in-the-loop — pause execution, wait for approval, resume
- State needs to persist across multiple LLM calls in one session
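The retry-loop case is the easiest to see concretely. Written as explicit control flow — with stubbed steps and an illustrative `max_retries` policy, not any particular library's API — it is exactly the loop that a conditional edge expresses:

```python
# Hand-rolled version of the retry loop a conditional edge expresses.
# The step functions and retry policy here are illustrative stubs.

def retrieve(query: str, attempt: int) -> list[str]:
    # Pretend a broader search succeeds on the second attempt.
    return ["good doc"] if attempt > 0 else ["weak doc"]

def generate(docs: list[str]) -> str:
    return f"answer from {docs[0]}"

def quality_check(answer: str) -> bool:
    return "good" in answer

def answer_with_retries(query: str, max_retries: int = 2) -> str:
    for attempt in range(max_retries + 1):
        docs = retrieve(query, attempt)
        answer = generate(docs)
        if quality_check(answer):  # quality gate passed: done
            return answer
    return answer                  # give up after max_retries

print(answer_with_retries("q"))  # → answer from good doc
```

In a graph, each of those functions is a node and the quality gate is a conditional edge, so the loop is visible in the graph topology instead of buried in a `for` loop.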
## The expensive mistake
The pattern I see most often: a developer builds a simple RAG pipeline with LCEL, and it works. They add a feature, then another. They start bolting conditional logic onto the chain with Python if-statements wrapping the chain invocation, then add a retry mechanism around it, then a quality-check step. Before long they have a 200-line function managing state in local variables, with branching logic that is impossible to visualize or test.
That's LangGraph, written poorly. Refactoring it into a proper graph at this point is a week of work.
The rule: if you've written more than one if-statement around your chain to handle different execution paths, it's time for LangGraph.
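In miniature, the bolted-on version looks like this (the `chain` stub and confidence thresholds are stand-ins for illustration). Every if-statement is an implicit conditional edge, and every local variable is implicit graph state:

```python
# The anti-pattern in miniature: branching and retries bolted onto a chain.
# `chain` and the confidence threshold are illustrative stand-ins.

def chain(inputs: dict) -> dict:
    return {"answer": "draft", "confidence": 0.4 if inputs.get("retry") else 0.2}

def run_pipeline(query: str) -> str:
    state = {"query": query}        # ad-hoc state in local variables
    result = chain(state)
    if result["confidence"] < 0.3:  # if-statement #1: quality gate
        state["retry"] = True
        result = chain(state)       # retry logic wrapping the chain
    if result["confidence"] < 0.3:  # if-statement #2: fallback path
        return "Sorry, I couldn't find a confident answer."
    return result["answer"]

print(run_pipeline("q"))  # → draft
```

Two if-statements in, this is already past the threshold: the same logic as two nodes and a conditional edge would be visible, testable per-node, and checkpointable.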
## Practical starting point
Default to LCEL. When you catch yourself managing state in a dictionary passed between functions, or writing retry logic that wraps the chain, stop and switch to LangGraph. The graph abstraction makes that complexity visible and testable. The plain chain abstraction hides it until it breaks in production.