Concept of Tools and Agents in LangChain

Tools

LLMs can think, reason, and generate language, but they can't perform actions in the real world, such as searching the web, looking up a database, or calling an API. That's where tools come into play.

Tools are Python functions packaged in a way that an LLM can understand and use.

There are two types of tools in LangChain:

  • Built-in Tools: LangChain provides a set of pre-defined tools for common tasks like web search, calculations, and database queries.
  • Custom Tools: You can create your own tools by defining Python functions and wrapping them with the @tool decorator.

Built-in Tools Example

Built-in tools are tools that LangChain already provides for common tasks. Here is a list of some of them:

  • DuckDuckGoSearchRun: A tool for searching the web using DuckDuckGo.
  • WikipediaQueryRun: A tool for querying Wikipedia articles.
  • PythonREPLTool: A tool for executing Python code.
  • ShellTool: A tool for executing shell commands.
  • RequestsGetTool: A tool for making HTTP GET requests.
  • GmailSendTool: A tool for sending emails via Gmail.
  • SlackSendMessageTool: A tool for sending messages to Slack channels.
  • SQLDatabaseTool: A tool for querying SQL databases.

Note: Tools are also runnables.

Using Tools

To use a built-in tool, you need to import it from langchain_community.tools and create an instance of it. Here’s an example of using the DuckDuckGoSearchRun tool:

DuckDuckGoSearchRun Example

from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
result = search_tool.run("nirajankhatiwada.com.np")
print(result)
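
As the note above says, tools are also runnables, so the same search_tool supports the standard .invoke() interface and exposes metadata such as its name and description. A minimal sketch, reusing the search_tool defined above:

print(search_tool.name)         # the tool name the LLM will see
print(search_tool.description)  # the tool description the LLM will see
print(search_tool.invoke("capital city of Nepal"))  # equivalent to .run(...)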

ShellTool Example

from langchain_community.tools import ShellTool
shell_tool = ShellTool()
result = shell_tool.run("ls -la")
print(result)
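
As another built-in example, here is a rough sketch of the WikipediaQueryRun tool. Unlike the tools above, it is constructed with an API wrapper and needs the wikipedia package installed:

from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper

wiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
result = wiki_tool.run("Kathmandu")
print(result)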

See official documentation for more built-in tools and their usage.

Custom Tools Example

You can create your own custom tools by defining a Python function and wrapping it with the @tool decorator from langchain.tools. Here's an example of creating a custom tool that multiplies two numbers.

We can define a tool in three steps:

  • Define a Python function that performs the desired action.
  • Use the @tool decorator to wrap the function and provide metadata like name and description.
  • Add type hints to the function parameters and return type for better clarity.

from langchain.tools import tool

@tool
def multiply_two_number(a: float, b: float) -> float:
    """Multiplies two numbers and returns the result."""
    return a * b

result = multiply_two_number.invoke({
    "a": 6,
    "b": 7
})
print(f"The result of multiplication is: {result}") # The result of multiplication is: 42.0
print(multiply_two_number.args_schema.model_json_schema())  # what the LLM will see
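
If you want to set the tool's name or description explicitly (the metadata mentioned in step 2), the same function can also be wrapped without the decorator. A minimal sketch using StructuredTool.from_function; the name and description below are just illustrative:

from langchain_core.tools import StructuredTool

def multiply(a: float, b: float) -> float:
    return a * b

# explicit name/description instead of relying on the function name and docstring
multiply_tool = StructuredTool.from_function(
    func=multiply,
    name="multiply_two_number",
    description="Multiplies two numbers and returns the result.",
)

print(multiply_tool.invoke({"a": 6, "b": 7}))  # 42.0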

Tool Binding

Tool binding is the process of connecting tools to a model or agent so that:

  • it can use them to perform actions,
  • it knows what each tool does, from the description provided when the tool was created, and
  • it knows which input format to use, from the tool's schema.

Binding Tools to Agents

To bind tools to a model (which an agent will then use), pass the list of tools to the model's bind_tools method. Here's an example of binding the DuckDuckGoSearchRun tool and the custom multiply_two_number tool:

llm_with_tool = chat_model.bind_tools(
    tools=[search_tool, multiply_two_number],
)

A complete example:

from langchain.tools import tool
from langchain_ollama.chat_models import ChatOllama

@tool
def multiply_two_number(a: float, b: float) -> float:
    """Multiplies two numbers and returns the result."""
    return a * b

chat_model = ChatOllama(model="llama3")

chat_model_with_tool = chat_model.bind_tools([multiply_two_number])

Tool Calling

Tool calling is the process where the LLM decides when to use a tool, based on the user input and the context of the conversation.

Note: The LLM doesn't actually run the tool; instead, it suggests the tool and its input arguments. The actual execution of the tool is handled by the LangChain framework or by you.

How tool calling works:

  1. The LLM receives user input.
  2. It analyzes the input and determines whether a tool is needed to fulfill the request.
  3. If a tool is needed, the LLM generates a tool call with the tool name and input arguments in the format defined by the tool's schema.

To know which tool the LLM is suggesting, we can inspect the response:

response = llm_with_tool.invoke("What is the product of 6 and 7?")
print(response)
print(response.tool_calls)
content='' additional_kwargs={} response_metadata={'model': 'llama3.1', 'created_at': '2026-01-26T08:26:47.5149049Z', 'done': True, 'done_reason': 'stop', 'total_duration': 9839590200, 'load_duration': 5771385700, 'prompt_eval_count': 166, 'prompt_eval_duration': 603947100, 'eval_count': 24, 'eval_duration': 3417969900, 'logprobs': None, 'model_name': 'llama3.1', 'model_provider': 'ollama'} id='lc_run--019bf969-78ef-7b52-8247-6a6570f3b01c-0' tool_calls=[{'name': 'multiply_two_number', 'args': {'a': 6, 'b': 7}, 'id': 'bdf4c07f-d683-491d-966b-54edef02d5b7', 'type': 'tool_call'}] invalid_tool_calls=[] usage_metadata={'input_tokens': 166, 'output_tokens': 24, 'total_tokens': 190}


[{'name': 'multiply_two_number', 'args': {'a': 6, 'b': 7}, 'id': '028a6197-af10-47f5-be26-17dae465d938', 'type': 'tool_call'}]
  1. The content of the response is empty; the tool call information is in tool_calls.
  2. We then execute the tool using that tool call information.

Tool Execution

Once the LLM suggests a tool call, the LangChain framework (or your own code) executes the tool with the provided arguments. The result of the tool execution is then returned to the LLM, which can use it to generate a final response to the user.

We can execute the tool call like this:

# get the tool calls from the response
tool_calls = response.tool_calls

# execute each tool call
for tool_call in tool_calls:
    print(multiply_two_number.invoke(tool_call))

Note: If we pass only the args dictionary from the tool call to the tool's invoke method, it returns the raw result of the tool execution. If we pass the whole tool call dictionary instead, it returns a ToolMessage object containing the result and metadata.
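
A minimal sketch of the difference, reusing the multiply_two_number tool and the response from the example above:

tool_call = response.tool_calls[0]

# passing only the args dict returns the raw result of the function
print(multiply_two_number.invoke(tool_call["args"]))   # 42.0

# passing the whole tool call dict returns a ToolMessage with result and metadata
print(multiply_two_number.invoke(tool_call))           # ToolMessage(content='42.0', ...)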


Note: Always use chat models for tool calling, not plain LLMs.


Combined Example of Tool Calling and Execution

from langchain.tools import tool
from langchain_ollama.chat_models import ChatOllama
from langchain.messages import HumanMessage

@tool
def multiply_two_number(a: float, b: float) -> float:
    """Multiplies two numbers and returns the result."""
    return a * b


messages =[]

chat_model = ChatOllama(model="llama3.1")

messages.append(HumanMessage(content="What is 4535435 multiplied by 6543543543?"))

chat_model_with_tool = chat_model.bind_tools([multiply_two_number])

response = chat_model_with_tool.invoke(messages)

messages.append(response)  # append the model's message itself so its tool calls are preserved

# invoking the tool with the full tool call returns a ToolMessage, which we append
for tool_call in response.tool_calls:
    messages.append(multiply_two_number.invoke(tool_call))

result = chat_model.invoke(messages)
print(result.content)

Agents in LangChain

An AI agent is an intelligent system that receives a high-level goal or task from a user and autonomously determines the steps and actions needed to achieve that goal, using the tools and resources at its disposal.

LLM vs Agents:

  • LLMs are powerful language models that can generate text based on input prompts. They excel at understanding and generating human-like language.
  • Agents, on the other hand, are systems that leverage LLMs as a component but go beyond just text generation. They can plan, reason, and take actions in the real world by utilizing tools and resources.

Creating an Agent

from langchain_community.tools import DuckDuckGoSearchRun
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.messages import SystemMessage, HumanMessage


# Initialize the model
chat_model = ChatOpenAI(
    model="gpt-4o", 
    openai_api_base="https://models.inference.ai.azure.com",
    api_key="YOUR_API_KEY"
)

# Initialize the tool
search_tool = DuckDuckGoSearchRun()

# Create the agent
agent = create_agent(
    model=chat_model, 
    tools=[search_tool],
    system_prompt=SystemMessage(content="You are a helpful assistant that completes the task provided by the user. Use tools to find the answer to the user's question, and if you can't find the answer, just say 'I don't know'."),
)

# Run the agent using the new 'messages' input format
response = agent.invoke({
    "messages": [HumanMessage(content="What is the population of Capital City of Nepal?")],
})

# Access the final message in the returned state
print(response["messages"][-1].content)

ReAct Agent

The agent we created above is a ReAct agent.

A ReAct agent is an advanced type of agent that uses the ReAct (Reasoning and Acting) framework to interleave reasoning and actions. It allows the agent to think through a problem step by step while also taking actions using tools as needed.

Instead of generating a single response, the React Agent produces a series of “thoughts” and “actions” that guide its problem-solving process. This allows for more dynamic and flexible interactions, as the agent can adapt its approach based on the information it gathers through tool usage.

It works in a cycle of Thought, Action, and Observation; once the final answer is reached, the agent returns it.

Example:

Thought: I need to find out the capital city of Nepal.
Action: DuckDuckGoSearchRun
Action Input: "capital city of Nepal"
Observation: Kathmandu is the capital city of Nepal.
Thought: Now I need to find out the population of Kathmandu.
Action: DuckDuckGoSearchRun
Action Input: "population of Kathmandu"
Observation: The population of Kathmandu is approximately 1.5 million.
Thought: I have gathered enough information to answer the question.
Final Answer: The population of the capital city of Nepal, Kathmandu, is approximately 1.5 million.