Fast Agent


fast-agent is a comprehensive Python framework that fully supports the Model Context Protocol (MCP), enabling developers to easily create and interact with complex AI agents and workflows using Anthropic and OpenAI models. It provides declarative syntax for configuring prompts and MCP servers, multimodal support, and various workflow patterns, making it a powerful tool for building MCP-enabled applications.


Overview

[!TIP] The documentation site is live at https://fast-agent.ai. Feel free to feed back what's helpful and what's not. An LLMs.txt file is also available on the site.

fast-agent enables you to create and interact with sophisticated Agents and Workflows in minutes. It is the first framework with complete, end-to-end tested MCP Feature support including Sampling. Both Anthropic (Haiku, Sonnet, Opus) and OpenAI models (gpt-4o/gpt-4.1 family, o1/o3 family) are supported.

The simple declarative syntax lets you concentrate on composing your Prompts and MCP Servers to build effective agents.

fast-agent is multi-modal, supporting Images and PDFs for both Anthropic and OpenAI endpoints via Prompts, Resources and MCP Tool Call results. The inclusion of passthrough and playback LLMs enables rapid development and testing of Python glue-code for your applications.
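For example, a test agent might pin one of these internal models. A minimal sketch, assuming the identifier passthrough is accepted wherever provider model names are (the agent name and instruction are illustrative):

```python
@fast.agent(
    "glue_test",          # hypothetical agent used only for development/testing
    "Exercise application glue-code without calling a hosted LLM",
    model="passthrough",  # assumption: internal model that echoes input rather than calling an API
)
```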

[!IMPORTANT]

The fast-agent documentation repo is here: https://github.com/evalstate/fast-agent-docs. Please feel free to submit PRs for documentation, experience reports or other content you think others may find helpful. All help and feedback warmly received.

Agent Application Development

Prompts and configurations that define your Agent Applications are stored in simple files, with minimal boilerplate, enabling simple management and version control.

Chat with individual Agents and Components before, during and after workflow execution to tune and diagnose your application. Agents can request human input to get additional context for task completion.

Simple model selection makes testing Model <-> MCP Server interaction painless. You can read more about the motivation behind this project here.


Get started:

Start by installing the uv package manager for Python. Then:

```bash
uv pip install fast-agent-mcp          # install fast-agent!
fast-agent go                          # start an interactive session
fast-agent go https://hf.co/mcp        # with a remote MCP
fast-agent go --model=generic.qwen2.5  # use ollama qwen 2.5
fast-agent setup                       # create an example agent and config files
uv run agent.py                        # run your first agent
uv run agent.py --model=o3-mini.low    # specify a model
fast-agent quickstart workflow         # create "building effective agents" examples
```

Other quickstart examples include a Researcher Agent (with Evaluator-Optimizer workflow) and Data Analysis Agent (similar to the ChatGPT experience), demonstrating MCP Roots support.
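For example, the Researcher example can be generated with the quickstart command covered later in this README:

```bash
fast-agent quickstart researcher   # creates the Researcher Agent example
```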

[!TIP] Windows Users - there are a couple of configuration changes needed for the Filesystem and Docker MCP Servers - necessary changes are detailed within the configuration files.

Basic Agents

Defining an agent is as simple as:

```python
@fast.agent(
    instruction="Given an object, respond only with an estimate of its size."
)
```

We can then send messages to the Agent:

```python
async with fast.run() as agent:
    moon_size = await agent("the moon")
    print(moon_size)
```

Or start an interactive chat with the Agent:

```python
async with fast.run() as agent:
    await agent.interactive()
```

Here is the complete sizer.py Agent application, with boilerplate code:

```python
import asyncio
from mcp_agent.core.fastagent import FastAgent

# Create the application
fast = FastAgent("Agent Example")

@fast.agent(
    instruction="Given an object, respond only with an estimate of its size."
)
async def main():
    async with fast.run() as agent:
        await agent.interactive()

if __name__ == "__main__":
    asyncio.run(main())
```

The Agent can then be run with uv run sizer.py.

Specify a model with the --model switch - for example uv run sizer.py --model sonnet.

Combining Agents and using MCP Servers

To generate examples use fast-agent quickstart workflow. This example can be run with uv run workflow/chaining.py. fast-agent looks for configuration files in the current directory before checking parent directories recursively.

Agents can be chained to build a workflow, using MCP Servers defined in the fastagent.config.yaml file:

```python
@fast.agent(
    "url_fetcher",
    "Given a URL, provide a complete and comprehensive summary",
    servers=["fetch"],  # Name of an MCP Server defined in fastagent.config.yaml
)
@fast.agent(
    "social_media",
    """
    Write a 280 character social media post for any given text.
    Respond only with the post, never use hashtags.
    """,
)
@fast.chain(
    name="post_writer",
    sequence=["url_fetcher", "social_media"],
)
async def main():
    async with fast.run() as agent:
        # using chain workflow
        await agent.post_writer("http://llmindset.co.uk")
```
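The fetch server referenced above must be defined in fastagent.config.yaml. A minimal sketch of such an entry, assuming the commonly used mcp-server-fetch package (the command and args are illustrative, not taken from this README):

```yaml
# fastagent.config.yaml - illustrative sketch
mcp:
  servers:
    fetch:
      command: "uvx"              # assumption: launch the server with uvx
      args: ["mcp-server-fetch"]  # assumption: the reference MCP fetch server
```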

All Agents and Workflows respond to .send("message") or .prompt() to begin a chat session.
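For example, reusing the post_writer chain defined above, a minimal sketch:

```python
async with fast.run() as agent:
    result = await agent.post_writer.send("http://llmindset.co.uk")  # one-shot message
    await agent.post_writer.prompt()                                 # open an interactive chat session
```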

Saved as social.py, we can now run this workflow from the command line with:

```bash
uv run workflow/chaining.py --agent post_writer --message "<url>"
```

Add the --quiet switch to disable progress and message display and return only the final response - useful for simple automations.
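Combining the switches already shown, a scripted one-shot invocation might look like:

```bash
uv run workflow/chaining.py --agent post_writer --message "<url>" --quiet
```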

Workflows

Chain

The chain workflow offers a more declarative approach to calling Agents in sequence:

```python
@fast.chain(
    "post_writer",
    sequence=["url_fetcher", "social_media"],
)

# we can then prompt it directly:
async with fast.run() as agent:
    await agent.post_writer()
```

This starts an interactive session, which produces a short social media post for a given URL. If a chain is prompted, it returns to a chat with the last Agent in the chain. You can switch the agent to prompt by typing @agent-name.

Chains can be incorporated in other workflows, or contain other workflow elements (including other Chains). You can set an instruction to precisely describe its capabilities to other workflow steps if needed.

Human Input

Agents can request Human Input to assist with a task or get additional context:

```python
@fast.agent(
    instruction="An AI agent that assists with basic tasks. Request Human Input when needed.",
    human_input=True,
)

await agent("print the next number in the sequence")
```

In the example human_input.py, the Agent will prompt the User for additional information to complete the task.

Parallel

The Parallel Workflow sends the same message to multiple Agents simultaneously (fan-out), then uses the fan-in Agent to process the combined content.

1@fast.agent("translate_fr", "Translate the text to French") 2@fast.agent("translate_de", "Translate the text to German") 3@fast.agent("translate_es", "Translate the text to Spanish") 4 5@fast.parallel( 6 name="translate", 7 fan_out=["translate_fr","translate_de","translate_es"] 8) 9 10@fast.chain( 11 "post_writer", 12 sequence=["url_fetcher","social_media","translate"] 13)

If you don't specify a fan-in agent, the parallel returns the combined Agent results verbatim.
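As a sketch of the alternative, the fan-in agent can be named explicitly; the aggregator agent below is hypothetical:

```python
@fast.agent("aggregator", "Combine the translations into a single multilingual post")

@fast.parallel(
    name="translate",
    fan_out=["translate_fr", "translate_de", "translate_es"],
    fan_in="aggregator",  # hypothetical agent that processes the combined content
)
```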

parallel is also useful for ensembling ideas from different LLMs.

When using parallel in other workflows, specify an instruction to describe its operation.

Evaluator-Optimizer

Evaluator-Optimizers combine 2 agents: one to generate content (the generator), and the other to judge that content and provide actionable feedback (the evaluator). Messages are sent to the generator first, then the pair run in a loop until either the evaluator is satisfied with the quality, or the maximum number of refinements is reached. The final result from the Generator is returned.

If the Generator has use_history off, the previous iteration is returned when asking for improvements - otherwise conversational context is used.

```python
@fast.evaluator_optimizer(
    name="researcher",
    generator="web_searcher",
    evaluator="quality_assurance",
    min_rating="EXCELLENT",
    max_refinements=3,
)

async with fast.run() as agent:
    await agent.researcher.send("produce a report on how to make the perfect espresso")
```

When used in a workflow, it returns the last generator message as the result.

See the evaluator.py workflow example, or fast-agent quickstart researcher for a more complete example.

Router

Routers use an LLM to assess a message, and route it to the most appropriate Agent. The routing prompt is automatically generated based on the Agent instructions and available Servers.

```python
@fast.router(
    name="route",
    agents=["agent1", "agent2", "agent3"],
)
```

Look at the router.py workflow for an example.

Orchestrator

Given a complex task, the Orchestrator uses an LLM to generate a plan to divide the task amongst the available Agents. The planning and aggregation prompts are generated by the Orchestrator, which benefits from using more capable models. Plans can either be built once at the beginning (plan_type="full") or iteratively (plan_type="iterative").

```python
@fast.orchestrator(
    name="orchestrate",
    agents=["task1", "task2", "task3"],
)
```

See the orchestrator.py or agent_build.py workflow examples.

Agent Features

Calling Agents

All definitions allow omitting the name and instructions arguments for brevity:

1@fast.agent("You are a helpful agent") # Create an agent with a default name. 2@fast.agent("greeter","Respond cheerfully!") # Create an agent with the name "greeter" 3 4moon_size = await agent("the moon") # Call the default (first defined agent) with a message 5 6result = await agent.greeter("Good morning!") # Send a message to an agent by name using dot notation 7result = await agent.greeter.send("Hello!") # You can call 'send' explicitly 8 9await agent.greeter() # If no message is specified, a chat session will open 10await agent.greeter.prompt() # that can be made more explicit 11await agent.greeter.prompt(default_prompt="OK") # and supports setting a default prompt 12 13agent["greeter"].send("Good Evening!") # Dictionary access is supported if preferred

Defining Agents

Basic Agent

```python
@fast.agent(
    name="agent",                                   # name of the agent
    instruction="You are a helpful Agent",          # base instruction for the agent
    servers=["filesystem"],                         # list of MCP Servers for the agent
    model="o3-mini.high",                           # specify a model for the agent
    use_history=True,                               # agent maintains chat history
    request_params=RequestParams(temperature=0.7),  # additional parameters for the LLM (or RequestParams())
    human_input=True,                               # agent can request human input
)
```

Chain

```python
@fast.chain(
    name="chain",                        # name of the chain
    sequence=["agent1", "agent2", ...],  # list of agents in execution order
    instruction="instruction",           # instruction to describe the chain for other workflows
    cumulative=False,                    # whether to accumulate messages through the chain
    continue_with_final=True,            # open chat with agent at end of chain after prompting
)
```

Parallel

```python
@fast.parallel(
    name="parallel",               # name of the parallel workflow
    fan_out=["agent1", "agent2"],  # list of agents to run in parallel
    fan_in="aggregator",           # name of agent that combines results (optional)
    instruction="instruction",     # instruction to describe the parallel for other workflows
    include_request=True,          # include original request in fan-in message
)
```

Evaluator-Optimizer

```python
@fast.evaluator_optimizer(
    name="researcher",              # name of the workflow
    generator="web_searcher",       # name of the content generator agent
    evaluator="quality_assurance",  # name of the evaluator agent
    min_rating="GOOD",              # minimum acceptable quality (EXCELLENT, GOOD, FAIR, POOR)
    max_refinements=3,              # maximum number of refinement iterations
)
```

Router

```python
@fast.router(
    name="route",                           # name of the router
    agents=["agent1", "agent2", "agent3"],  # list of agent names router can delegate to
    model="o3-mini.high",                   # specify routing model
    use_history=False,                      # whether the router maintains conversation history
    human_input=False,                      # whether router can request human input
)
```

Orchestrator

```python
@fast.orchestrator(
    name="orchestrator",          # name of the orchestrator
    instruction="instruction",    # base instruction for the orchestrator
    agents=["agent1", "agent2"],  # list of agent names this orchestrator can use
    model="o3-mini.high",         # specify orchestrator planning model
    use_history=False,            # orchestrator doesn't maintain chat history (no effect)
    human_input=False,            # whether orchestrator can request human input
    plan_type="full",             # planning approach: "full" or "iterative"
    plan_iterations=5,            # maximum number of full plan attempts, or iterations
)
```

Multimodal Support

Add Resources to prompts using either the inbuilt prompt-server or MCP Types directly. Convenience classes are made available to do so simply, for example:

```python
summary: str = await agent.with_resource(
    "Summarise this PDF please",
    "mcp_server",
    "resource://fast-agent/sample.pdf",
)
```

MCP Tool Result Conversion

LLM APIs have restrictions on the content types that can be returned as Tool Call/Function results via their Chat Completions APIs:

  • OpenAI supports Text
  • Anthropic supports Text and Image

For MCP Tool Results, ImageResources and EmbeddedResources are converted to User Messages and added to the conversation.

Prompts

MCP Prompts are supported with apply_prompt(name, arguments), which always returns an Assistant Message. If the last message from the MCP Server is a 'User' message, it is sent to the LLM for processing. Prompts applied to the Agent's Context are retained - meaning that with use_history=False, Agents can act as finely tuned responders.
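A minimal sketch of applying a server-side prompt; the prompt name and arguments here are hypothetical:

```python
response = await agent.apply_prompt(
    "summarise_document",                   # hypothetical MCP Prompt name
    {"url": "https://example.com/report"},  # hypothetical prompt arguments
)
```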

Prompts can also be applied interactively through the interactive interface by using the /prompt command.

Sampling

Sampling LLMs are configured per Client/Server pair. Specify the model name in fastagent.config.yaml as follows:

```yaml
mcp:
  servers:
    sampling_resource:
      command: "uv"
      args: ["run", "sampling_resource_server.py"]
      sampling:
        model: "haiku"
```

Secrets File

[!TIP] fast-agent will look recursively for a fastagent.secrets.yaml file, so you only need to manage this at the root folder of your agent definitions.
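A sketch of what such a file might contain; the exact key layout is an assumption based on the providers mentioned in this README:

```yaml
# fastagent.secrets.yaml - illustrative sketch, keep out of version control
anthropic:
  api_key: "<your-anthropic-key>"  # assumption: per-provider api_key entries
openai:
  api_key: "<your-openai-key>"
```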

Interactive Shell

(screenshot: the fast-agent interactive shell)

Project Notes

fast-agent builds on the mcp-agent project by Sarmad Qadri.

Contributing

Contributions and PRs are welcome - feel free to raise issues to discuss. Full guidelines for contributing and roadmap coming very soon. Get in touch!