- LangChain has revolutionized how developers create AI-driven applications.
- It bridges large language models (LLMs) with tools, memory, and reasoning logic — making your AI not just “talk,” but actually think and act.
- One of LangChain’s most powerful design patterns is the ReAct Agent — short for Reason + Act.
- If you’ve ever wondered how an AI agent can decide what to do, call external functions, and loop until it finds an answer, this post is for you.

🧩 ReAct Agent architecture
1. Query (Q)
➜ Someone asks a question — like “What’s the weather today?” This is the input that starts everything.
2. Thinking (the brain)
➜ The AI starts thinking — it plans what to do.
Example: “Hmm, I don’t know the weather directly, maybe I should check a weather tool.”
3. Action → Tool
➜ Based on that thinking, the AI takes an action, like using a tool (a calculator, web search, database, etc.).
Example: It calls a weather API to get real-time data.
4. Observation (from the tool)
➜ The AI observes what came back.
Example: “The API says: 28°C and sunny.”
5. Thinking again
➜ The AI looks at what it learned and thinks again — “Okay, I got the data. Now how do I explain it nicely?”
6. Answer
➜ Finally, it gives the answer.
Example: “It’s 28°C and sunny right now!”
⚙️ In short: ReAct = Reason + Act
- It means the AI doesn’t just think — it also takes actions, checks results, and loops until it’s ready to answer confidently.
🪄 Understanding the ReAct Flow
1. Query → The user asks a question.
2. Agent → Controls the logic and sends the query to the model.
3. LLM Call → The language model (e.g., GPT-4) thinks about what to do.
4. Thought → The model reasons internally: “I should use the `get_text_length` tool.”
5. Parsing → LangChain converts this text reasoning into structured actions.
6. Tool Execution → The agent executes the tool or API call.
7. Output → The tool returns data back to the agent.
8. Validation → The agent checks if it’s satisfied (OK / Not OK).
9. Answer → If OK, the loop ends and the result is returned to the user. If the output isn’t good enough, the process loops back and the agent thinks again.
🧩 Let’s Start with the Code
main.py

```python
from typing import Union, List

import re
from dotenv import load_dotenv
from langchain.agents import tool
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import AgentAction, AgentFinish
from langchain.tools import Tool
from langchain.tools.render import render_text_description

from callbacks import AgentCallbackHandler

load_dotenv()


@tool
def get_text_length(text: str) -> int:
    """Returns the length of a text by characters"""
    print(f"get_text_length enter with {text=}")
    text = text.strip("'\n").strip(
        '"'
    )  # stripping away surrounding quotes and newlines just in case
    return len(text)


def find_tool_by_name(tools: List[Tool], tool_name: str) -> Tool:
    for tool in tools:
        if tool.name == tool_name:
            return tool
    raise ValueError(f"Tool with name {tool_name} not found")


if __name__ == "__main__":
    print("Hello ReAct LangChain!")
    tools = [get_text_length]

    template = """
    Answer the following questions as best you can. You have access to the following tools:

    {tools}

    Use the following format:

    Question: the input question you must answer
    Thought: you should always think about what to do
    Action: the action to take, should be one of [{tool_names}]
    Action Input: the input to the action
    Observation: the result of the action
    ... (this Thought/Action/Action Input/Observation can repeat N times)
    Thought: I now know the final answer
    Final Answer: the final answer to the original input question

    Begin!

    Question: {input}
    Thought: {agent_scratchpad}
    """

    prompt = PromptTemplate.from_template(template=template).partial(
        tools=render_text_description(tools),
        tool_names=", ".join([t.name for t in tools]),
    )

    llm = ChatOpenAI(
        temperature=0,
        stop=["\nObservation", "Observation"],
        callbacks=[AgentCallbackHandler()],
    )

    intermediate_steps = []

    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_log_to_str(x["agent_scratchpad"]),
        }
        | prompt
        | llm
        | ReActSingleInputOutputParser()
    )

    agent_step = ""
    while not isinstance(agent_step, AgentFinish):
        agent_step: Union[AgentAction, AgentFinish] = agent.invoke(
            {
                "input": "What is the length of the word: DOG",
                "agent_scratchpad": intermediate_steps,
            }
        )
        print(agent_step)

        if isinstance(agent_step, AgentAction):
            tool_name = agent_step.tool
            tool_to_use = find_tool_by_name(tools, tool_name)
            tool_input = agent_step.tool_input

            observation = tool_to_use.func(str(tool_input))
            print(f"{observation=}")
            intermediate_steps.append((agent_step, str(observation)))

    if isinstance(agent_step, AgentFinish):
        print(agent_step.return_values)
```
callbacks.py

```python
from typing import Dict, Any, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult


class AgentCallbackHandler(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""
        print(f"***Prompt to LLM was:***\n{prompts[0]}")
        print("*********")

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
        print(f"***LLM Response:***\n{response.generations[0][0].text}")
        print("*********")
```
🧰 1. Imports and Setup
```python
from typing import Union, List

import re
from dotenv import load_dotenv
from langchain.agents import tool
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import AgentAction, AgentFinish
from langchain.tools import Tool
from langchain.tools.render import render_text_description

from callbacks import AgentCallbackHandler

load_dotenv()
```
- `load_dotenv()` → Loads your `.env` file so the OpenAI API key is available.
- `ChatOpenAI` → The model interface (e.g., GPT-4).
- `AgentAction` & `AgentFinish` → Structured objects that represent what the agent decides next.
- `format_log_to_str()` → Converts the agent’s previous thoughts and actions into readable text for the next LLM call.
- `render_text_description()` → Renders human-readable descriptions of the available tools.
- `ReActSingleInputOutputParser()` → Parses model output text into actionable structures.
⚙️ 2. Creating a Tool
```python
@tool
def get_text_length(text: str) -> int:
    """Returns the length of a text by characters"""
    print(f"get_text_length enter with {text=}")
    text = text.strip("'\n").strip('"')
    return len(text)
```
✅ In simple terms:
- The agent can “decide” to call `get_text_length`.
- It trims unnecessary characters and returns the number of characters.
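A quick stdlib-only demo of the cleanup step (the function body without the `@tool` decorator), assuming the LLM wraps the input in extra quotes:

```python
def get_text_length(text: str) -> int:
    # Same cleanup as the tool body: strip quotes/newlines the LLM may wrap around the input.
    text = text.strip("'\n").strip('"')
    return len(text)

print(get_text_length("'DOG'\n"))  # 3: the quotes and newline are stripped first
```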
🔍 3. Helper: Find the Tool by Name
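The helper is a plain linear search over the tools list. Here it is as a runnable sketch, with a minimal stand-in class (the real code uses `langchain.tools.Tool`):

```python
from typing import List

class SimpleTool:
    """Minimal stand-in for langchain.tools.Tool, just for this demo."""
    def __init__(self, name: str):
        self.name = name

def find_tool_by_name(tools: List[SimpleTool], tool_name: str) -> SimpleTool:
    # Linear search: return the first tool whose name matches, else raise.
    for t in tools:
        if t.name == tool_name:
            return t
    raise ValueError(f"Tool with name {tool_name} not found")

tools = [SimpleTool("get_text_length"), SimpleTool("reverse_text")]
print(find_tool_by_name(tools, "get_text_length").name)  # get_text_length
```

This is how the agent maps the tool name the LLM emitted back to an actual callable.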
🧠 4. Defining the ReAct Prompt Template
The template tells the model to:
- Think step-by-step.
- Use tools by name.
- Return thoughts, actions, and observations in a specific format.
Think of this as the “script” your AI actor follows.
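Here is the template from main.py as a plain string, filled with `str.format` purely for illustration (the real code uses `PromptTemplate` and `.partial()`):

```python
# The ReAct template from main.py. str.format here is only for illustration;
# the real code fills the placeholders via PromptTemplate.
template = """Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought: {agent_scratchpad}"""

filled = template.format(
    tools="get_text_length: Returns the length of a text by characters",
    tool_names="get_text_length",
    input="What is the length of the word: DOG",
    agent_scratchpad="",
)
print(filled)
```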
💬 5. Create the Prompt and Model
Breaking this down:
- `PromptTemplate` → Injects dynamic values into placeholders like `{tools}` and `{input}`.
- `temperature=0` → Makes the model deterministic (no randomness).
- `stop=["\nObservation", "Observation"]` → Stops model output before it writes the “Observation:” line.
- `callbacks` → Attaches `AgentCallbackHandler`, which logs prompts and raw LLM responses.
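To see why the stop sequences matter, here is a stdlib-only sketch of what a stop sequence does to raw model output (this is not LangChain code, just the idea: the model must not hallucinate the observation itself):

```python
def apply_stop(text: str, stop_sequences: list) -> str:
    # Mimic the LLM's stop behavior: truncate at the first stop sequence found.
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text

raw = (
    "Thought: I should use the get_text_length tool\n"
    "Action: get_text_length\n"
    "Action Input: DOG\n"
    "Observation: 3"  # the model must NOT write this line itself
)
print(apply_stop(raw, ["\nObservation", "Observation"]))
```

The truncated output ends at "Action Input: DOG", leaving the real observation to come from the actual tool call.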
🧱 6. The LCEL Chain (LangChain Expression Language)
- Mapping inputs:
  - `"input"` → the user’s question.
  - `"agent_scratchpad"` → memory of previous steps, formatted by `format_log_to_str()`.
- `| prompt` → Fills the ReAct template with these values.
- `| llm` → Sends the prompt to GPT (the reasoning brain).
- `| ReActSingleInputOutputParser()` → Parses the LLM’s text response into `AgentAction` or `AgentFinish`.
✅ Output: A structured object (either action or final answer) instead of raw text.
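Conceptually, the `|` pipe just chains callables: the output of each stage feeds the next. A stdlib-only toy version of that idea (not the real LCEL classes, and the "LLM" here is a scripted fake):

```python
class Runnable:
    """Toy stand-in for LCEL-style composition: a | b runs a, then b."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: the output of self feeds into other.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Three toy stages mirroring {input mapping} | prompt | llm (the parser is elided):
map_inputs = Runnable(lambda x: {"input": x["input"], "agent_scratchpad": ""})
fill_prompt = Runnable(lambda d: f"Question: {d['input']}\nThought:")
fake_llm = Runnable(lambda p: "Action: get_text_length\nAction Input: DOG")

chain = map_inputs | fill_prompt | fake_llm
print(chain.invoke({"input": "What is the length of the word: DOG"}))
```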
🔄 7. The Agent Execution Loop
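The loop in main.py can be sketched with stdlib stand-ins for `AgentAction` and `AgentFinish` (the real classes come from `langchain.schema`); the fake agent below scripts what the LLM would decide on each pass:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class AgentAction:  # stand-in for langchain.schema.AgentAction
    tool: str
    tool_input: str

@dataclass
class AgentFinish:  # stand-in for langchain.schema.AgentFinish
    return_values: dict

def fake_agent(inputs) -> Union[AgentAction, AgentFinish]:
    # Scripted LLM: first pass asks for the tool, second pass finishes.
    if not inputs["agent_scratchpad"]:
        return AgentAction(tool="get_text_length", tool_input="DOG")
    last_observation = inputs["agent_scratchpad"][-1][1]
    return AgentFinish(return_values={"output": last_observation})

tools = {"get_text_length": lambda text: len(text)}
intermediate_steps = []

agent_step = None
while not isinstance(agent_step, AgentFinish):
    agent_step = fake_agent(
        {"input": "What is the length of the word: DOG",
         "agent_scratchpad": intermediate_steps}
    )
    if isinstance(agent_step, AgentAction):
        observation = tools[agent_step.tool](agent_step.tool_input)
        intermediate_steps.append((agent_step, str(observation)))

print(agent_step.return_values)  # {'output': '3'}
```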
🧩 Step-by-Step Explanation

intermediate_steps
Stores a history of all actions and their results, e.g.:
`[(AgentAction(tool='get_text_length', tool_input='DOG'), '3')]`

agent_step
Represents the model’s latest instruction or answer:
- If it’s `AgentAction`, it means “I need to call this tool.”
- If it’s `AgentFinish`, it means “I’m done.”

The While Loop Logic
1. Invoke the agent → Calls the pipeline (`agent.invoke(...)`) with:
   - The user question (`input`).
   - The memory (`agent_scratchpad=intermediate_steps`).
2. Check what the model returned → If it’s an `AgentAction`:
   - Finds the correct tool based on `agent_step.tool`.
   - Executes the tool (e.g., `get_text_length("DOG")`).
   - Appends `(action, observation)` to `intermediate_steps`.
3. Re-loop → Sends the updated memory back to the model. The LLM now “remembers” its last action and observation.
4. Exit condition → When `agent_step` becomes an `AgentFinish`, the reasoning is complete.
🧩 8. Callback Logging (callbacks.py)
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult


class AgentCallbackHandler(BaseCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"***Prompt to LLM was:***\n{prompts[0]}\n*********")

    def on_llm_end(self, response, **kwargs):
        print(f"***LLM Response:***\n{response.generations[0][0].text}\n*********")
```
- Logs the prompt sent to the LLM.
- Logs the raw output it produced.
🧮 Full Code Flow with Diagram Alignment
✅ Final Example Output
```
***Prompt to LLM was:***
Question: What is the length of the word: DOG
Thought:
*********
***LLM Response:***
Thought: I should use the get_text_length tool
Action: get_text_length
Action Input: DOG
*********
get_text_length enter with text='DOG'
observation='3'
***LLM Response:***
Thought: Now I know the final answer
Final Answer: 3
*********
{'output': '3'}
```
📘 Key Terms Recap
🧠 Key Takeaways
- LangChain ReAct Agents mimic human reasoning — they think, act, observe, and iterate.
- The agent loop is what gives your AI “agency.”
- `intermediate_steps` and `agent_scratchpad` give it short-term memory.
- The prompt defines how it reasons, and the parser translates model text into structured steps.
- With callbacks, you can watch it “think” in real time.
🔗 Next Steps
Try adding more tools like:
- `get_word_count(text)`
- `reverse_text(text)`
- `get_weather(city)`

