Overview

The ToolAgent extends LlmAgent with function-calling capabilities: you provide Python functions as tools, and the LLM calls them as needed to complete tasks. ToolAgent handles:
  • Converting Python functions to FastMCP tools
  • Tool schema generation for the LLM
  • Automatic tool execution based on LLM requests
  • Tool result formatting and history management
  • Parallel and sequential tool execution

Key Features

  • Function Tools: Convert any Python function to an LLM-callable tool
  • FastMCP Integration: Use FastMCP tools directly
  • Agent-as-Tool: Expose other agents as tools
  • Tool Hooks: Lifecycle hooks for tool execution
  • Parallel Execution: Run multiple tools concurrently
  • Tool Filtering: Control which tools are exposed
  • Result Handling: Automatic formatting of tool outputs

Architecture

ToolAgent
    ↓ extends
LlmAgent (conversation & display)
    ↓ uses
ToolRunner (execution loop)
    ↓ calls
FastMCP Tools (function wrappers)

Creating a Tool Agent

Basic Example

import asyncio
from fast_agent.agents.agent_types import AgentConfig
from fast_agent.agents.tool_agent import ToolAgent
from fast_agent.core import Core
from fast_agent.llm.model_factory import ModelFactory

# Define tool functions
def get_weather(city: str) -> str:
    """Get the weather in a city.
    
    Args:
        city: The city to check weather for
        
    Returns:
        Weather information
    """
    return f"The weather in {city} is sunny, 22°C"

def get_time() -> str:
    """Get the current time."""
    from datetime import datetime
    return datetime.now().strftime("%I:%M %p")

async def main():
    core = Core()
    await core.initialize()
    
    # Create agent with tools
    config = AgentConfig(
        name="assistant",
        instruction="You are a helpful assistant with access to weather and time tools.",
        model="gpt-4o-mini"
    )
    
    agent = ToolAgent(
        config,
        tools=[get_weather, get_time],  # Pass functions directly
        context=core.context
    )
    
    await agent.attach_llm(ModelFactory.create_factory("gpt-4o-mini"))
    
    # Agent will automatically call tools as needed
    result = await agent.send(
        "What's the weather in Paris and what time is it?"
    )
    print(result)
    
    await core.cleanup()

asyncio.run(main())

With FastMCP Tools

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math Tools")

@mcp.tool()
def calculate_sum(a: int, b: int) -> int:
    """Add two numbers together.
    
    Args:
        a: First number
        b: Second number
        
    Returns:
        Sum of a and b
    """
    return a + b

@mcp.tool()
def calculate_product(a: int, b: int) -> int:
    """Multiply two numbers.
    
    Args:
        a: First number
        b: Second number
        
    Returns:
        Product of a and b
    """
    return a * b

# Use FastMCP tools in agent
config = AgentConfig(name="calculator", model="gpt-4o-mini")
agent = ToolAgent(
    config,
    tools=[calculate_sum, calculate_product],
    context=core.context
)

Tool Function Requirements

Function Signature

Tools can be sync or async functions with type hints:
# Synchronous tool
def sync_tool(param: str) -> str:
    """Tool description.
    
    Args:
        param: Parameter description
        
    Returns:
        Return value description
    """
    return f"Result: {param}"

# Asynchronous tool
async def async_tool(param: str) -> str:
    """Async tool description.
    
    Args:
        param: Parameter description
        
    Returns:
        Return value description
    """
    import asyncio
    await asyncio.sleep(1)
    return f"Result: {param}"

Type Hints

Use type hints for automatic schema generation:
from typing import List, Optional
from enum import Enum

class Priority(str, Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

def create_task(
    title: str,
    description: Optional[str] = None,
    priority: Priority = Priority.MEDIUM,
    tags: Optional[List[str]] = None
) -> str:
    """Create a new task.
    
    Args:
        title: Task title (required)
        description: Detailed description (optional)
        priority: Task priority level
        tags: List of tags to categorize the task
        
    Returns:
        Confirmation message with task ID
    """
    tags = tags or []
    return f"Created task '{title}' with priority {priority.value}"

Docstrings

Docstrings are used for tool descriptions shown to the LLM:
def search_documents(query: str, limit: int = 10) -> list[dict]:
    """Search through document database.
    
    This tool searches the document database using semantic search.
    Results are ranked by relevance score.
    
    Args:
        query: Search query string
        limit: Maximum number of results to return (default: 10)
        
    Returns:
        List of matching documents with metadata
        
    Examples:
        search_documents("python tutorial", limit=5)
    """
    # Implementation
    return [{"title": "Doc 1", "score": 0.95}]

Managing Tools

Adding Tools Dynamically

from mcp.server.fastmcp.tools.base import Tool as FastMCPTool

# Create tool from function
def new_tool(param: str) -> str:
    """A new tool."""
    return f"Result: {param}"

fast_tool = FastMCPTool.from_function(new_tool)

# Add to agent
agent.add_tool(fast_tool)

# Replace existing tool
agent.add_tool(fast_tool, replace=True)

Listing Available Tools

from mcp.types import ListToolsResult

# Get all tools
tools_result: ListToolsResult = await agent.list_tools()

for tool in tools_result.tools:
    print(f"Tool: {tool.name}")
    print(f"Description: {tool.description}")
    print(f"Parameters: {tool.inputSchema}")

Calling Tools Directly

from mcp.types import CallToolResult

# Execute a tool directly
result: CallToolResult = await agent.call_tool(
    name="get_weather",
    arguments={"city": "Tokyo"}
)

if not result.isError:
    print(result.content[0].text)

Tool Execution Loop

The ToolRunner manages the tool execution loop:
  1. LLM generates response with tool calls
  2. Agent executes requested tools (parallel or sequential)
  3. Tool results are formatted and added to history
  4. LLM generates next response with tool results
  5. Repeat until LLM returns final answer
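
The loop above can be sketched in simplified form. The names here (`LlmResponse`, `run_tool_loop`, the `generate` and `execute_tools` callables) are illustrative stand-ins, not the library's actual internals:

```python
import asyncio
from dataclasses import dataclass, field

# Illustrative stand-in for an LLM response carrying tool calls.
@dataclass
class LlmResponse:
    stop_reason: str                          # "tool_use" or "end_turn"
    tool_calls: dict = field(default_factory=dict)
    text: str = ""

async def run_tool_loop(generate, execute_tools, history: list) -> str:
    """Simplified sketch of the tool-execution loop.

    generate(history) -> LlmResponse; execute_tools(calls) -> results.
    """
    while True:
        response = await generate(history)        # 1. LLM generates a response
        history.append(response)
        if response.stop_reason != "tool_use":    # 5. stop on the final answer
            return response.text
        results = await execute_tools(response.tool_calls)  # 2. run requested tools
        history.append(results)                   # 3-4. feed results back to the LLM
```

In the real ToolRunner, step 2 is where the parallel-vs-sequential decision described below is made.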

Execution Flow

# User message
"What's the weather in Paris and London?"

# LLM response (TOOL_USE)
# Calls: get_weather(city="Paris"), get_weather(city="London")

# Tool execution (parallel)
# Results: ["Sunny, 22°C", "Cloudy, 18°C"]

# LLM response (END_TURN)
# Final answer with weather information

Tool Hooks

Tool hooks allow you to intercept and modify tool execution:
from fast_agent.agents.tool_runner import ToolRunnerHooks
from fast_agent.types import PromptMessageExtended

async def before_llm_call(runner, messages: list[PromptMessageExtended]):
    """Called before LLM generates response."""
    print(f"Generating with {len(messages)} messages")

async def after_llm_call(runner, message: PromptMessageExtended):
    """Called after LLM generates response."""
    if message.tool_calls:
        print(f"LLM requested {len(message.tool_calls)} tool calls")

async def before_tool_call(runner, request: PromptMessageExtended):
    """Called before executing tools."""
    for tool_id, tool_call in request.tool_calls.items():
        print(f"Executing: {tool_call.params.name}")

async def after_tool_call(runner, result: PromptMessageExtended):
    """Called after tools execute."""
    print(f"Tools completed: {len(result.tool_results)} results")

async def after_turn_complete(runner, final_message: PromptMessageExtended):
    """Called when conversation turn completes."""
    print(f"Turn complete: {final_message.stop_reason}")

# Attach hooks to agent
agent.tool_runner_hooks = ToolRunnerHooks(
    before_llm_call=before_llm_call,
    after_llm_call=after_llm_call,
    before_tool_call=before_tool_call,
    after_tool_call=after_tool_call,
    after_turn_complete=after_turn_complete
)

Hook Example: Logging

import time

class TimingHooks:
    def __init__(self):
        self.start_time = None
        
    async def before_llm_call(self, runner, messages):
        self.start_time = time.time()
        
    async def after_llm_call(self, runner, message):
        elapsed = time.time() - self.start_time
        print(f"LLM call took {elapsed:.2f}s")

timing = TimingHooks()
agent.tool_runner_hooks = ToolRunnerHooks(
    before_llm_call=timing.before_llm_call,
    after_llm_call=timing.after_llm_call
)

Agents as Tools

Expose other agents as tools for delegation:
# Create a specialized agent
research_config = AgentConfig(
    name="researcher",
    instruction="You are a research specialist.",
    model="gpt-4o"
)
researcher = ToolAgent(research_config, tools=[], context=core.context)
await researcher.attach_llm(ModelFactory.create_factory("gpt-4o"))

# Create main agent
main_config = AgentConfig(
    name="orchestrator",
    instruction="You coordinate tasks and delegate to specialists.",
    model="gpt-4o-mini"
)
main_agent = ToolAgent(main_config, tools=[], context=core.context)
await main_agent.attach_llm(ModelFactory.create_factory("gpt-4o-mini"))

# Add researcher as tool
main_agent.add_agent_tool(
    researcher,
    name="research_tool",
    description="Delegate research tasks to the research specialist"
)

# Main agent can now delegate to researcher
result = await main_agent.send(
    "Research the history of Fast Agent framework"
)

Parallel Tool Execution

Tools are executed in parallel when beneficial:
# These will execute in parallel automatically
result = await agent.send(
    "Get the weather in Paris, London, Tokyo, and New York"
)
# All 4 get_weather calls run concurrently

Parallel Execution Configuration

From fast_agent/constants.py:
PARALLEL_TOOL_CALL_THRESHOLD = 2  # Run in parallel if >= 2 tools
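
A runner applying this threshold might dispatch tool calls along these lines (an illustrative sketch using asyncio.gather, not the library's actual code):

```python
import asyncio

PARALLEL_TOOL_CALL_THRESHOLD = 2  # mirrors the constant above

async def dispatch(tool_calls: list):
    """Run tool coroutines concurrently once the threshold is met,
    otherwise one at a time (illustrative sketch)."""
    if len(tool_calls) >= PARALLEL_TOOL_CALL_THRESHOLD:
        # Concurrent: all calls are awaited together; results keep call order.
        return await asyncio.gather(*(call() for call in tool_calls))
    # Sequential: a single call needs no concurrency machinery.
    return [await call() for call in tool_calls]
```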

Tool Result Formatting

Tool results are automatically formatted:
from mcp.types import CallToolResult, TextContent

# Tool returns string
def simple_tool() -> str:
    return "Result text"

# Converted to CallToolResult:
CallToolResult(
    content=[TextContent(type="text", text="Result text")],
    isError=False
)

Error Handling

def risky_tool(param: str) -> str:
    """Tool that might fail."""
    if not param:
        raise ValueError("param is required")
    return f"Success: {param}"

# Error automatically captured:
CallToolResult(
    content=[TextContent(type="text", text="Error: param is required")],
    isError=True
)
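
The capture step can be sketched like this; `ToolResult` is a stdlib stand-in for `mcp.types.CallToolResult`, used only to keep the example self-contained:

```python
from dataclasses import dataclass

@dataclass
class ToolResult:  # stand-in for mcp.types.CallToolResult
    text: str
    isError: bool

def safe_call(tool, **arguments) -> ToolResult:
    """Run a tool and fold any exception into an error result
    the LLM can read, instead of crashing the execution loop."""
    try:
        return ToolResult(text=str(tool(**arguments)), isError=False)
    except Exception as exc:
        return ToolResult(text=f"Error: {exc}", isError=True)
```

Because the error text goes back into the conversation, the LLM can often recover, e.g. by retrying with a valid argument.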

Advanced Patterns

Stateful Tools

class DatabaseTools:
    def __init__(self):
        self.connection = None
        
    def connect(self, host: str, port: int) -> str:
        """Connect to database."""
        self.connection = f"{host}:{port}"
        return f"Connected to {self.connection}"
        
    def query(self, sql: str) -> str:
        """Execute SQL query."""
        if not self.connection:
            return "Error: Not connected"
        return f"Query result from {self.connection}"
        
    def disconnect(self) -> str:
        """Disconnect from database."""
        if self.connection:
            conn = self.connection
            self.connection = None
            return f"Disconnected from {conn}"
        return "Not connected"

# Use stateful tools
db_tools = DatabaseTools()
agent = ToolAgent(
    config,
    tools=[db_tools.connect, db_tools.query, db_tools.disconnect],
    context=core.context
)

Context-Aware Tools

class ContextualTools:
    def __init__(self, agent):
        self.agent = agent
        
    def get_history_summary(self) -> str:
        """Get summary of conversation history."""
        history = self.agent.message_history
        return f"Conversation has {len(history)} messages"
        
    def get_usage_stats(self) -> str:
        """Get token usage statistics."""
        usage = self.agent.usage_accumulator
        if usage:
            return f"Used {usage.total_tokens} tokens (${usage.total_cost:.4f})"
        return "No usage data"

# Tools that access agent state
tools = ContextualTools(agent)
agent.add_tool(FastMCPTool.from_function(tools.get_history_summary))
agent.add_tool(FastMCPTool.from_function(tools.get_usage_stats))

Best Practices

Design
  • Keep tools focused on single responsibilities
  • Use clear, descriptive names and docstrings
  • Provide type hints for all parameters
  • Make tools idempotent when possible

Error handling
  • Validate inputs and raise clear errors
  • Handle errors gracefully and return messages the LLM can understand
  • Log errors for debugging
  • Consider retry logic for transient failures

Performance
  • Design tools for parallel execution
  • Avoid blocking operations in sync tools; use async tools for I/O
  • Cache expensive computations
  • Monitor tool execution time

Security
  • Validate and sanitize all inputs
  • Limit tool capabilities appropriately
  • Never expose destructive operations without safeguards
  • Use tool hooks for authorization checks
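
Several of these practices combined in one small tool. The `set_tag` function and its in-memory store are hypothetical, written only to demonstrate validation, clear error messages, and idempotence:

```python
_tags: dict = {}  # hypothetical in-memory store for the example

def set_tag(key: str, value: str) -> str:
    """Set a tag to a value (idempotent: repeating the call is safe).

    Args:
        key: Tag name; must be non-empty and alphanumeric.
        value: Tag value.

    Returns:
        Confirmation message, or a clear error the LLM can act on.
    """
    # Validate inputs and return a message the LLM can understand.
    if not key or not key.isalnum():
        return "Error: key must be a non-empty alphanumeric string"
    _tags[key] = value  # idempotent: repeating the call yields the same state
    return f"Tag '{key}' set to '{value}'"
```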

Common Patterns

Data Analysis Agent

import pandas as pd

def load_csv(filepath: str) -> str:
    """Load CSV file and return summary."""
    df = pd.read_csv(filepath)
    return f"Loaded {len(df)} rows, {len(df.columns)} columns"

def analyze_column(column: str) -> str:
    """Get statistics for a column."""
    # Illustrative stub: a real implementation would read a cached dataframe
    return "Mean: 42.5, Median: 40, StdDev: 8.2"

config = AgentConfig(
    name="analyst",
    instruction="You analyze data from CSV files.",
    model="gpt-4o"
)
agent = ToolAgent(
    config,
    tools=[load_csv, analyze_column],
    context=core.context
)

API Integration Agent

import requests
from typing import Optional

def api_get(endpoint: str, params: Optional[dict] = None) -> str:
    """Make a GET request to the API."""
    response = requests.get(f"https://api.example.com{endpoint}", params=params, timeout=10)
    return response.text

def api_post(endpoint: str, data: dict) -> str:
    """Make a POST request to the API."""
    response = requests.post(f"https://api.example.com{endpoint}", json=data, timeout=10)
    return response.text

config = AgentConfig(
    name="api_client",
    instruction="You interact with the Example API.",
    model="gpt-4o-mini"
)
agent = ToolAgent(
    config,
    tools=[api_get, api_post],
    context=core.context
)

Next Steps

MCP Agent

Connect to MCP servers for more tools

FastMCP

Create advanced FastMCP tools

Tool Hooks

Deep dive into tool execution hooks

Examples

See complete tool agent examples