LangChain Agents

Why LangChain Agents Matter

The Problem: Building AI agents from scratch requires handling prompt management, tool execution, output parsing, and conversation memory individually.

The Solution: LangChain provides a composable framework that abstracts these concerns into reusable components, letting you build sophisticated agents with minimal boilerplate.

Real Impact: LangChain is among the most widely adopted agent frameworks, with over 80K GitHub stars and broad production use.

Real-World Analogy

Think of LangChain as a modular robotics kit:

  • LLM = The robot's brain that reasons and decides
  • Tools = Attachable arms and sensors (search, calculator, APIs)
  • Prompt Templates = Instruction manuals for the robot
  • Memory = The robot's notebook for remembering conversations
  • AgentExecutor = The control loop that runs the robot

Core LangChain Agent Components

AgentExecutor

The runtime that orchestrates the agent loop: receives input, calls the LLM, executes tools, and returns the final answer.

create_react_agent

Factory function that creates a ReAct-style agent combining reasoning and acting in an interleaved loop.

Tool Definitions

Structured descriptions of external capabilities the agent can invoke, including name, description, and input schema.

Memory Integration

Conversation history management using buffer, summary, or vector-store-backed memory for multi-turn interactions.
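The buffer and window variants of conversation memory are easy to picture. The following pure-Python sketch illustrates the idea behind them (it is not LangChain's implementation, and the class names are invented for illustration):

```python
# Conceptual sketch of buffer vs. windowed conversation memory.
# Buffer memory keeps everything; window memory keeps only the last
# k turns to bound prompt size.

class BufferMemory:
    """Keeps the full conversation history."""
    def __init__(self):
        self.turns = []

    def save(self, user_msg: str, ai_msg: str) -> None:
        self.turns.append((user_msg, ai_msg))

    def load(self) -> str:
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

class WindowMemory(BufferMemory):
    """Keeps only the last k turns."""
    def __init__(self, k: int = 2):
        super().__init__()
        self.k = k

    def load(self) -> str:
        recent = self.turns[-self.k:]
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in recent)

memory = WindowMemory(k=1)
memory.save("Hi", "Hello!")
memory.save("What is LangChain?", "An agent framework.")
print(memory.load())  # only the most recent turn survives the k=1 window
```

Summary and vector-store-backed memory follow the same save/load contract but compress or semantically index the history instead of truncating it.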

Agent Types

LangChain Agent Architecture
(Architecture diagram: user input enters the agent; the LLM reasons with a prompt template, an output parser, and memory, then dispatches to tools such as search, calculator, API, and custom tools before producing the final output.)

Available Agent Types

  • ReAct: reasoning and acting in interleaved steps. Best for general-purpose tool use with step-by-step reasoning.
  • OpenAI Functions: uses the OpenAI function-calling API. Best for structured tool calls with OpenAI models.
  • Structured Chat: supports tools with multiple inputs. Best for tools requiring multiple parameters.
  • Self-Ask: decomposes questions into sub-questions. Best for complex multi-hop reasoning tasks.
  • Plan-and-Execute: plans first, then executes step by step. Best for complex multi-step tasks needing upfront planning.
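The ReAct type is the one used throughout this guide, and it hinges on the LLM emitting output in a fixed Thought/Action/Action Input/Final Answer format that the framework parses into tool calls. A minimal sketch of that parsing step (illustrative only; LangChain's actual output parser is more robust):

```python
import re

# Sketch of how a ReAct-style output parser might turn raw LLM text into
# either a tool call or a final answer. Not LangChain's real parser.

def parse_react_output(text: str):
    """Return ("final", answer) or ("action", tool_name, tool_input)."""
    final = re.search(r"Final Answer:\s*(.*)", text, re.DOTALL)
    if final:
        return ("final", final.group(1).strip())
    action = re.search(r"Action:\s*(.+)", text)
    action_input = re.search(r"Action Input:\s*(.+)", text)
    if action and action_input:
        return ("action", action.group(1).strip(), action_input.group(1).strip())
    raise ValueError("Could not parse LLM output")

step = parse_react_output(
    "Thought: I should look this up.\n"
    "Action: search\n"
    "Action Input: Tokyo population"
)
print(step)  # ('action', 'search', 'Tokyo population')
```

When the model deviates from this format, parsing raises an error, which is exactly the failure mode that AgentExecutor's handle_parsing_errors option (shown below) is meant to absorb.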

Tool Integration

tools.py
from langchain.tools import tool
from langchain_community.tools import DuckDuckGoSearchRun

# Built-in tool
search = DuckDuckGoSearchRun()

# Custom tool using the @tool decorator
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        # NOTE: eval() is fine for a demo, but never eval untrusted input
        # in production; use a restricted math parser instead.
        result = eval(expression)
        return str(result)
    except Exception as e:
        return f"Error: {e}"

# Tool with structured input
from pydantic import BaseModel, Field

class WeatherInput(BaseModel):
    city: str = Field(description="City name")
    units: str = Field(default="celsius")

@tool(args_schema=WeatherInput)
def get_weather(city: str, units: str = "celsius") -> str:
    """Get current weather for a city."""
    # Stub response; a real tool would call a weather API here
    return f"Weather in {city}: 22 {units}"

tools = [search, calculate, get_weather]

Agent Executor

agent_executor.py
from langchain.agents import create_react_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain import hub

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Pull a ReAct prompt template from the hub
prompt = hub.pull("hwchase17/react")

# Create the agent
agent = create_react_agent(llm, tools, prompt)

# Create the executor with configuration
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,
    handle_parsing_errors=True,
    return_intermediate_steps=True,
)

# Run the agent
result = agent_executor.invoke({
    "input": "What is the population of Tokyo times 2?"
})

print(result["output"])
# Agent thinks: I need to search for Tokyo's population, then multiply by 2
# Action: search("Tokyo population")
# Action: calculate("13960000 * 2")
# Final Answer: 27,920,000

AgentExecutor Loop

  • Step 1: Agent receives the user query and available tools
  • Step 2: LLM reasons about which tool to use (or if it can answer directly)
  • Step 3: Tool is executed and observation is returned
  • Step 4: LLM sees the observation and decides next action or final answer
  • Step 5: Loop repeats until Final Answer or max iterations reached
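The five steps above can be sketched as a toy loop, with a scripted stand-in for the LLM (a conceptual sketch of AgentExecutor's control flow, not its actual code):

```python
# Toy agent loop mirroring AgentExecutor's control flow. The "LLM" here
# is a scripted stub; a real agent calls a model at each step.

def fake_llm(scratchpad: str) -> str:
    # Stub reasoning: call the calculator, then answer once it has seen
    # the observation.
    if "Observation: 4" in scratchpad:
        return "Final Answer: 4"
    return "Action: calculate\nAction Input: 2 + 2"

tools = {"calculate": lambda expr: str(eval(expr))}

def run_agent(question: str, max_iterations: int = 5) -> str:
    scratchpad = f"Question: {question}\n"           # Step 1: receive input
    for _ in range(max_iterations):                  # Step 5: bounded loop
        output = fake_llm(scratchpad)                # Step 2: LLM reasons
        if output.startswith("Final Answer:"):
            return output.removeprefix("Final Answer:").strip()
        tool_name = output.split("Action:")[1].split("\n")[0].strip()
        tool_input = output.split("Action Input:")[1].strip()
        observation = tools[tool_name](tool_input)   # Step 3: execute tool
        scratchpad += f"{output}\nObservation: {observation}\n"  # Step 4
    return "Stopped: max iterations reached"

print(run_agent("What is 2 + 2?"))  # 4
```

The scratchpad string plays the role of {agent_scratchpad} in the real prompt: each iteration's action and observation are appended so the LLM can see its own prior steps.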

Custom Agents

custom_agent_with_memory.py
from langchain.agents import create_react_agent, AgentExecutor
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate

# Custom prompt with memory
template = """You are a helpful research assistant.

Chat History:
{chat_history}

You have access to these tools:
{tools}

Tool names: {tool_names}

Use this format:
Question: the input question
Thought: reason about what to do
Action: tool name
Action Input: input for the tool
Observation: tool result
... (repeat as needed)
Thought: I know the final answer
Final Answer: the answer

Question: {input}
{agent_scratchpad}"""

prompt = PromptTemplate.from_template(template)

# Set up memory. The ReAct prompt above is a plain string template, so
# keep return_messages=False (the default) to inject history as text;
# return_messages=True is for chat prompts that expect message objects.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=False,
)

# Build agent with memory
llm = ChatOpenAI(model="gpt-4", temperature=0)
agent = create_react_agent(llm, tools, prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True,
)

# Multi-turn conversation
agent_executor.invoke({"input": "Search for LangChain latest version"})
agent_executor.invoke({"input": "What did you find?"})

Common Pitfall

Problem: Agent enters infinite loops or exceeds token limits.

Solution: Always set max_iterations and max_execution_time on AgentExecutor. Use handle_parsing_errors=True to gracefully recover from malformed LLM outputs.
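The pattern behind those settings is a loop bounded by both an iteration count and wall-clock time. A generic sketch of that guard rail (illustrative only, not AgentExecutor's code):

```python
import time

# Generic guard-rail pattern: bound an agent loop by iteration count and
# wall-clock time, mirroring what max_iterations and max_execution_time
# do on AgentExecutor.

def bounded_loop(step, max_iterations=10, max_execution_time=30.0):
    """Run step() until it returns a non-None result or a limit is hit."""
    start = time.monotonic()
    for _ in range(max_iterations):
        if time.monotonic() - start > max_execution_time:
            return "Agent stopped due to time limit."
        result = step()
        if result is not None:
            return result
    return "Agent stopped due to iteration limit."

# A step that never produces an answer trips the iteration limit
print(bounded_loop(lambda: None, max_iterations=3))
# Agent stopped due to iteration limit.
```

Without both bounds, a model that keeps emitting malformed or repetitive actions can burn tokens indefinitely, which is why setting them is non-negotiable in production.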

Quick Reference

Essential LangChain Agent Commands

  • create_react_agent(): create a ReAct agent. Example: create_react_agent(llm, tools, prompt)
  • AgentExecutor(): create the agent runtime. Example: AgentExecutor(agent=agent, tools=tools)
  • @tool: define a custom tool. Example: @tool def my_func(x): ...
  • hub.pull(): load a prompt from the hub. Example: hub.pull("hwchase17/react")
  • .invoke(): run the agent. Example: executor.invoke({"input": "..."})
  • .stream(): stream agent output. Example: for chunk in executor.stream(...):
  • ConversationBufferMemory: add conversation memory. Example: memory=ConversationBufferMemory()