Fast Agent uses a hierarchical configuration system with support for environment variables, YAML files, and programmatic configuration.

Settings

The main settings class for Fast Agent configuration.
from fast_agent.config import Settings

settings = Settings(
    default_model="anthropic.claude-sonnet-4-20250514",
    execution_engine="asyncio",
    session_history=True
)

Core Settings

execution_engine ('asyncio', default: 'asyncio')
  Execution engine for the agent application.
environment_dir (str | None, default: None)
  Base directory for runtime data. Defaults to .fast-agent.
default_model (str | None, default: None)
  Default model for agents. Format: provider.model_name.reasoning_effort or provider.model?reasoning=value. Examples: "openai.o3-mini.low", "anthropic.claude-sonnet-4-20250514?reasoning=high". Falls back to the FAST_AGENT_MODEL env var, then "gpt-5-mini.low".
model_aliases (dict[str, dict[str, str]], default: {})
  Model aliases grouped by namespace. Example: {"$system": {"default": "gpt-5-mini"}}
auto_sampling (bool, default: True)
  Enable automatic sampling model selection when not explicitly configured.
session_history (bool, default: True)
  Persist session history in the environment sessions folder.
session_history_window (int, default: 20)
  Maximum number of sessions to keep in the rolling window.
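The default_model string format above can be illustrated with a small parser. This is a sketch of the documented syntax only, not Fast Agent's actual parsing code; the KNOWN_EFFORTS set and the returned dict shape are assumptions for illustration.

```python
# Illustrative parser for the documented model string format:
# provider.model_name.reasoning_effort  or  provider.model?reasoning=value
KNOWN_EFFORTS = {"minimal", "low", "medium", "high"}  # assumed effort levels

def parse_model_string(spec: str) -> dict:
    reasoning = None
    if "?" in spec:  # query-style: provider.model?reasoning=value
        spec, _, query = spec.partition("?")
        for pair in query.split("&"):
            key, _, value = pair.partition("=")
            if key == "reasoning":
                reasoning = value
    parts = spec.split(".")
    provider = parts[0]
    # dotted-style: a trailing segment naming a known effort level
    if len(parts) > 2 and parts[-1] in KNOWN_EFFORTS:
        reasoning = parts[-1]
        parts = parts[:-1]
    return {"provider": provider, "model": ".".join(parts[1:]), "reasoning": reasoning}

print(parse_model_string("openai.o3-mini.low"))
# {'provider': 'openai', 'model': 'o3-mini', 'reasoning': 'low'}
```

Both documented spellings resolve to the same three pieces, which is why "openai.o3-mini.low" and "openai.o3-mini?reasoning=low" are interchangeable.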

MCP Configuration

mcp (MCPSettings | None, default: MCPSettings())
  MCP server configuration and settings.

Provider Settings

anthropic (AnthropicSettings | None, default: None)
  Settings for Anthropic models.
openai (OpenAISettings | None, default: None)
  Settings for OpenAI models.
responses (OpenAISettings | None, default: None)
  Settings for OpenAI Responses models.
openresponses (OpenResponsesSettings | None, default: None)
  Settings for Open Responses models.
codexresponses (CodexResponsesSettings | None, default: None)
  Settings for Codex Responses models.
deepseek (DeepSeekSettings | None, default: None)
  Settings for DeepSeek models.
google (GoogleSettings | None, default: None)
  Settings for Google models.
xai (XAISettings | None, default: None)
  Settings for xAI Grok models.
generic (GenericSettings | None, default: None)
  Settings for generic OpenAI-compatible models (e.g., Ollama).
openrouter (OpenRouterSettings | None, default: None)
  Settings for OpenRouter models.
azure (AzureSettings | None, default: None)
  Settings for Azure OpenAI Service.
groq (GroqSettings | None, default: None)
  Settings for Groq models.
tensorzero (TensorZeroSettings | None, default: None)
  Settings for the TensorZero LLM gateway.
bedrock (BedrockSettings | None, default: None)
  Settings for AWS Bedrock models.
huggingface (HuggingFaceSettings | None, default: None)
  Settings for HuggingFace inference providers.

Logging, Telemetry, and Runtime

logger (LoggerSettings | None, default: None)
  Logger configuration for the agent.
otel (OpenTelemetrySettings | None, default: OpenTelemetrySettings())
  OpenTelemetry tracing configuration.
skills (SkillsSettings | None, default: None)
  Skills directory and marketplace configuration.
cards (CardsSettings | None, default: None)
  Card pack registry configuration.
shell (ShellSettings | None, default: None)
  Shell execution behavior configuration.

Provider Settings

AnthropicSettings

Configuration for Anthropic models.
from fast_agent.config import AnthropicSettings

anthropic = AnthropicSettings(
    api_key="sk-ant-...",
    default_model="claude-sonnet-4-20250514",
    cache_mode="auto",
    reasoning="medium"
)
api_key (str | None, default: None)
  Anthropic API key.
base_url (str | None, default: None)
  Override the API endpoint.
default_model (str | None, default: None)
  Default model when the provider is selected without an explicit model.
default_headers (dict[str, str] | None, default: None)
  Custom headers for all requests.
cache_mode ('off' | 'prompt' | 'auto', default: 'auto')
  Caching mode: off (disabled), prompt (cache tools and system prompt), auto (same as prompt).
cache_ttl ('5m' | '1h', default: '5m')
  Cache TTL: 5m (standard) or 1h (extended, at additional cost).
reasoning (ReasoningEffortSetting | str | int | bool | None, default: None)
  Reasoning setting. Supports effort strings (adaptive models), budget tokens (int), or a toggle (bool). Use 0 or false to disable.
structured_output_mode ('auto' | 'json' | 'tool_use', default: 'auto')
  Structured output mode.
web_search (AnthropicWebSearchSettings, default: AnthropicWebSearchSettings())
  Built-in web search tool configuration.
web_fetch (AnthropicWebFetchSettings, default: AnthropicWebFetchSettings())
  Built-in web fetch tool configuration.

OpenAISettings

Configuration for OpenAI models.
from fast_agent.config import OpenAISettings

openai = OpenAISettings(
    api_key="sk-...",
    default_model="gpt-5-mini",
    reasoning_effort="medium"
)
api_key (str | None, default: None)
  OpenAI API key.
base_url (str | None, default: None)
  Override the API endpoint.
default_model (str | None, default: None)
  Default model when the provider is selected.
default_headers (dict[str, str] | None, default: None)
  Custom headers for all requests.
text_verbosity ('low' | 'medium' | 'high', default: 'medium')
  Text verbosity level for Responses models.
transport ('sse' | 'websocket' | 'auto' | None, default: None)
  Responses transport mode. Defaults to websocket with SSE fallback.
service_tier ('fast' | 'flex' | None, default: None)
  Responses service tier: fast (priority) or flex.
reasoning (ReasoningEffortSetting | str | int | bool | None, default: None)
  Unified reasoning setting (effort level or budget).
reasoning_effort ('minimal' | 'low' | 'medium' | 'high', default: 'medium')
  Default reasoning effort.
web_search (OpenAIWebSearchSettings, default: OpenAIWebSearchSettings())
  Web search tool configuration.

MCP Settings

MCPSettings

Configuration for MCP servers.
from fast_agent.config import MCPSettings, MCPServerSettings

mcp = MCPSettings(
    servers={
        "filesystem": MCPServerSettings(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
        )
    }
)
servers (dict[str, MCPServerSettings], default: {})
  Dictionary mapping server names to their configurations.

MCPServerSettings

Configuration for an individual MCP server.
name (str | None, default: None)
  Server name.
description (str | None, default: None)
  Server description.
transport ('stdio' | 'sse' | 'http', default: 'stdio')
  Transport mechanism. Auto-inferred from the presence of url or command.
command (str | None, default: None)
  Command to execute the server (e.g., npx).
args (list[str] | None, default: None)
  Arguments for the server command.
url (str | None, default: None)
  URL for SSE/HTTP transport.
headers (dict[str, str] | None, default: None)
  HTTP headers for connections.
auth (MCPServerAuthSettings | None, default: None)
  Authentication configuration.
roots (list[MCPRootSettings] | None, default: None)
  Root directories the server has access to.
env (dict[str, str] | None, default: None)
  Environment variables for the server process.
cwd (str | None, default: None)
  Working directory for the server command.
load_on_start (bool, default: True)
  Whether to connect automatically when the agent starts.
include_instructions (bool, default: True)
  Whether to include server instructions in the system prompt.
reconnect_on_disconnect (bool, default: True)
  Whether to automatically reconnect on session termination.
read_timeout_seconds (int | None, default: None)
  Timeout in seconds for the session.
ping_interval_seconds (int, default: 30)
  Interval for MCP ping requests. Set ≤0 to disable.
max_missed_pings (int, default: 3)
  Consecutive missed pings before the connection is treated as failed.

Shell Settings

ShellSettings

Configuration for shell execution behavior.
from fast_agent.config import ShellSettings

shell = ShellSettings(
    timeout_seconds=120,
    show_bash=True,
    enable_read_text_file=True
)
timeout_seconds (int, default: 90)
  Maximum seconds to wait for command output before terminating. Supports duration strings such as "90s", "2m", "1h".
warning_interval_seconds (int, default: 30)
  Show timeout warnings every N seconds.
interactive_use_pty (bool, default: True)
  Use a PTY for interactive prompt shell commands.
output_display_lines (int | None, default: 5)
  Maximum shell output lines to display. Set to None for no limit.
show_bash (bool, default: True)
  Show shell command output on the console.
output_byte_limit (int | None, default: None)
  Override the model-based output byte limit. None means automatic.
missing_cwd_policy ('ask' | 'create' | 'warn' | 'error', default: 'warn')
  Policy when the agent shell cwd is missing or invalid.
enable_read_text_file (bool, default: True)
  Expose the local read_text_file tool (ACP-compatible) when the shell runtime is enabled.
write_text_file_mode ('auto' | 'on' | 'off' | 'apply_patch' | None, default: None)
  Controls which local file edit tool is exposed:
  • auto: uses apply_patch for GPT-5/Codex models, write_text_file otherwise
  • on: always expose write_text_file
  • apply_patch: always expose apply_patch
  • off: disable local file edit tools
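The duration strings accepted by timeout_seconds can be sketched with a short parser. This illustrates only the three documented suffixes ("90s", "2m", "1h"); it is not Fast Agent's own parsing code, and any additional accepted forms are unknown here.

```python
import re

# Illustrative parser for the documented duration strings: "90s", "2m", "1h".
_UNITS = {"s": 1, "m": 60, "h": 3600}

def parse_duration(value) -> int:
    if isinstance(value, int):
        return value  # plain integers are taken as seconds
    match = re.fullmatch(r"(\d+)([smh])", value.strip())
    if not match:
        raise ValueError(f"invalid duration: {value!r}")
    return int(match.group(1)) * _UNITS[match.group(2)]

print(parse_duration("2m"))  # 120
```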

Logger Settings

LoggerSettings

Configuration for logging and console output.
from fast_agent.config import LoggerSettings

logger = LoggerSettings(
    type="file",
    level="info",
    path="fastagent.jsonl",
    show_chat=True
)
type ('none' | 'console' | 'file' | 'http', default: 'file')
  Logger type.
level ('debug' | 'info' | 'warning' | 'error', default: 'warning')
  Minimum logging level.
progress_display (bool, default: True)
  Enable or disable the progress display.
path (str, default: 'fastagent.jsonl')
  Path to the log file when type is 'file'.
batch_size (int, default: 100)
  Number of events to accumulate before processing.
flush_interval (float, default: 2.0)
  How often to flush events, in seconds.
max_queue_size (int, default: 2048)
  Maximum queue size for event processing.
show_chat (bool, default: True)
  Show User/Assistant chat on the console.
show_tools (bool, default: True)
  Show MCP server tool calls on the console.
truncate_tools (bool, default: True)
  Truncate the display of long tool calls.
enable_markup (bool, default: True)
  Enable markup in console output.
enable_prompt_marks (bool, default: True)
  Emit OSC 133 prompt marks for terminal scrollbar markers.
streaming ('markdown' | 'plain' | 'none', default: 'markdown')
  Streaming renderer for assistant responses.
message_style ('classic' | 'a3', default: 'a3')
  Chat message layout style for console output.
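With type set to 'file', events land as one JSON object per line in the file named by path. A minimal reader for such a JSONL file might look like the sketch below; the "level" field and the level ordering are assumptions for illustration, so inspect your own fastagent.jsonl for the actual event schema.

```python
import json
from pathlib import Path

# Minimal sketch of reading a JSONL log file like the one type='file' writes.
# The event fields used here (e.g. "level") are hypothetical.
def read_events(path: str, min_level: str = "info") -> list[dict]:
    order = {"debug": 0, "info": 1, "warning": 2, "error": 3}
    events = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if order.get(event.get("level", "info"), 1) >= order[min_level]:
            events.append(event)
    return events
```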

Configuration Files

Fast Agent supports layered YAML configuration:
  1. Project config: fastagent.config.yaml in project root
  2. Environment config: .fast-agent/fastagent.config.yaml (overrides project)
  3. Secrets: fastagent.secrets.yaml (merged with config)
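The layering above means later files override earlier ones while nested mappings merge key by key. The recursive merge can be sketched as follows; this is an illustrative model of the documented precedence, not Fast Agent's actual loader.

```python
# Sketch of layered config merging: environment config overrides project
# config, and secrets are merged last. Illustrative only.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)  # merge nested maps
        else:
            merged[key] = value                           # later layer wins
    return merged

project = {"default_model": "gpt-5-mini", "logger": {"level": "warning"}}
environment = {"logger": {"level": "info"}}          # .fast-agent layer
secrets = {"anthropic": {"api_key": "sk-ant-..."}}   # merged with config

config = deep_merge(deep_merge(project, environment), secrets)
```

Here the environment layer changes only logger.level while the project's default_model survives, and the secrets keys are added on top.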

Environment Variable Substitution

Use ${VAR_NAME} or ${VAR_NAME:default} syntax:
anthropic:
  api_key: ${ANTHROPIC_API_KEY}
  default_model: ${DEFAULT_MODEL:claude-sonnet-4-20250514}
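The substitution syntax can be modeled with a few lines of Python. This is a sketch of the documented ${VAR} / ${VAR:default} behavior, not Fast Agent's resolver; in particular, raising on a missing variable with no default is an assumption.

```python
import os
import re

# Illustrative resolver for the documented ${VAR} / ${VAR:default} syntax.
_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}")

def substitute(text: str) -> str:
    def replace(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        value = os.environ.get(name, default)
        if value is None:
            # Behavior for an unset variable with no default is assumed here.
            raise KeyError(f"missing environment variable: {name}")
        return value
    return _PATTERN.sub(replace, text)
```

So `${DEFAULT_MODEL:claude-sonnet-4-20250514}` resolves to the environment value when DEFAULT_MODEL is set and to the literal default otherwise.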

Example Configuration

default_model: anthropic.claude-sonnet-4-20250514

anthropic:
  api_key: ${ANTHROPIC_API_KEY}
  cache_mode: auto
  reasoning: medium

mcp:
  servers:
    filesystem:
      command: npx
      args:
        - -y
        - "@modelcontextprotocol/server-filesystem"
        - /tmp
    
    web-search:
      url: https://api.example.com/mcp
      transport: http
      headers:
        Authorization: Bearer ${MCP_TOKEN}

shell:
  timeout_seconds: 120
  show_bash: true

logger:
  type: file
  level: info
  show_chat: true
  streaming: markdown