
Building Agent Tools with FastMCP

University of Central Florida
Valorum Data

Computational Analysis of Social Complexity

Fall 2025, Spencer Lyon

Prerequisites

  • L.A2.01 (Function calling and tool use)
  • L.A2.02 (Type-safe agents with PydanticAI)
  • L.A2.03 (Agent evaluations)
  • Basic understanding of client-server architecture

Outcomes

  • Understand the Model Context Protocol (MCP) and its role in the AI ecosystem
  • Create MCP servers using FastMCP to expose computational tools
  • Integrate MCP servers with PydanticAI agents for distributed tool access
  • Deploy and test MCP servers in multiple environments
  • Apply MCP patterns to course domains: network analysis, game theory, and agent-based models

References

From Embedded Tools to Distributed Tools

The Reusability Problem

In Week A2, we built tools using @agent.tool. This works well but locks tools into PydanticAI. What if you want to use your network analysis toolkit in:

  1. A PydanticAI agent
  2. A ChatGPT plugin
  3. Claude Desktop
  4. A web API
  5. A Jupyter notebook assistant

You’d need 5 different implementations of the same tools, each with different formats, authentication, and deployment.

Enter: The Model Context Protocol

MCP is “USB-C for AI” - a universal standard for AI tools.

  1. Write your tools once as an MCP server
  2. Use them anywhere with any MCP-compatible client
┌─────────────────────────────────┐
│     Your MCP Server             │
│  (Network Analysis Tools)       │
└──────────────┬──────────────────┘
               │ MCP Protocol
    ┌──────────┴──────────┐
    ▼                     ▼
┌─────────┐          ┌─────────┐
│ Claude  │          │Pydantic │
│ Desktop │          │   AI    │
└─────────┘          └─────────┘

The Three MCP Primitives

  1. Tools (Functions): Actions the AI can perform
  2. Resources (Data): Read-only access to information
  3. Prompts (Templates): Reusable message templates

Today we focus primarily on Tools.
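
A minimal sketch showing all three primitives on one server, using the same decorators we rely on throughout this lecture (the server name, URI, and functions here are purely illustrative):

from fastmcp import FastMCP

demo = FastMCP("PrimitivesDemo")

@demo.tool()
def square(x: float) -> float:
    """Tool: an action the model can invoke."""
    return x * x

@demo.resource("demo://readme")
def readme() -> str:
    """Resource: read-only data a client can fetch."""
    return "This server squares numbers."

@demo.prompt()
def explain_square(x: float) -> str:
    """Prompt: a reusable message template."""
    return f"Explain what squaring {x} means, then compute it."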

FastMCP Fundamentals

What is FastMCP?

FastMCP is a Python framework for building production-ready MCP servers with the “Pydantic way”:

  • Type-safe by default
  • Automatic validation
  • Minimal boilerplate

Core Concepts:

  1. Server Creation: FastMCP("ServerName")
  2. Tool Decoration: @mcp.tool() (like @agent.tool)
  3. Automatic Schema Generation: From type hints and docstrings
  4. Multiple Transports: stdio (local), HTTP (remote)
# !pip install fastmcp pydantic-ai networkx quantecon mesa

Your First MCP Server

from fastmcp import FastMCP

mcp = FastMCP("Calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

@mcp.tool()
def multiply(a: float, b: float) -> float:
    """Multiply two numbers together."""
    return a * b

print("✓ Calculator MCP server created with 2 tools")
✓ Calculator MCP server created with 2 tools

No JSON schema writing and no manual validation: FastMCP generates everything from the type hints and docstrings.

Running an MCP Server

%%file calculator_server.py

from fastmcp import FastMCP

mcp = FastMCP("Calculator")

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # stdio by default
Overwriting calculator_server.py

Run with: python calculator_server.py

For HTTP: mcp.run(transport="http", port=8000)

Testing an MCP Server

from fastmcp import Client

async with Client("calculator_server.py") as client:
    tools = await client.list_tools()
    print("Available tools:", [t.name for t in tools])

    result = await client.call_tool("add", {"a": 5, "b": 3})
    print("5 + 3 =", result)
Available tools: ['add']
5 + 3 = CallToolResult(content=[TextContent(type='text', text='8.0', annotations=None, meta=None)], structured_content={'result': 8.0}, data=8.0, is_error=False)

Note that call_tool returns a CallToolResult; the parsed Python value is available on result.data (8.0 above), which we use in later examples. This server is immediately usable by Claude Desktop, PydanticAI, or any MCP client. Write once, use everywhere.
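
To confirm what FastMCP generated from the type hints, you can list the tools and print each input schema. A quick sketch against the in-memory Calculator server (attribute names follow the MCP Tool type):

from fastmcp import Client

async with Client(mcp) as client:
    for tool in await client.list_tools():
        print(tool.name, tool.inputSchema)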

Building Course-Specific MCP Servers

Network Analysis MCP Server

Let’s expose network analysis capabilities from Weeks 3-5 using NetworkX.

State Management: FastMCP’s Context is request-scoped, so we use a global dictionary for persistent state:

cache: Dict[str, Any] = {}  # Global cache, lives for server lifetime
%%file network_analysis_server.py

from fastmcp import FastMCP
import networkx as nx
from typing import Dict, List, Tuple, Any

# Global cache for persistent state across tool calls
# MCP Context is request-scoped, so we need external storage
cache: Dict[str, Any] = {}

network_mcp = FastMCP("NetworkAnalysis")

@network_mcp.tool()
def create_network(
    graph_id: str,
    edges: List[Tuple[int, int]]
) -> Dict[str, Any]:
    """
    Create a network from an edge list and store it.

    Args:
        graph_id: Unique identifier for this graph (e.g., 'social_network', 'graph1')
        edges: List of edges as [source, target] pairs. Example: [[1,2], [2,3], [1,3]]

    Returns:
        Dictionary with graph statistics (num_nodes, num_edges, density)
    """
    G = nx.Graph()
    G.add_edges_from(edges)
    cache[f"graph:{graph_id}"] = G
    return {
        "graph_id": graph_id,
        "num_nodes": G.number_of_nodes(),
        "num_edges": G.number_of_edges(),
        "density": round(nx.density(G), 4)
    }

@network_mcp.tool()
def calculate_degree_centrality(
    graph_id: str,
    node: int
) -> Dict[str, Any]:
    """
    Calculate degree centrality for a node.

    Degree centrality measures how many connections a node has.
    Higher values indicate more central/connected nodes.

    Args:
        graph_id: ID of the graph to analyze
        node: The node ID to calculate centrality for

    Returns:
        Dictionary with degree and normalized centrality value
    """
    G = cache.get(f"graph:{graph_id}")
    if G is None:
        return {"error": f"Graph '{graph_id}' not found. Create it first."}
    if node not in G:
        return {"error": f"Node {node} not in graph '{graph_id}'"}

    degree = G.degree(node)
    max_possible = G.number_of_nodes() - 1
    normalized = degree / max_possible if max_possible > 0 else 0
    return {"node": node, "degree": degree, "normalized_centrality": round(normalized, 4)}

@network_mcp.tool()
def calculate_betweenness(
    graph_id: str,
    node: int
) -> Dict[str, Any]:
    """
    Calculate betweenness centrality for a node.

    Betweenness measures how often a node lies on shortest paths between other nodes.
    High betweenness nodes are 'bridges' connecting different parts of the network.

    Args:
        graph_id: ID of the graph to analyze
        node: The node ID to calculate betweenness for

    Returns:
        Dictionary with betweenness centrality value
    """
    G = cache.get(f"graph:{graph_id}")
    if G is None:
        return {"error": f"Graph '{graph_id}' not found"}
    if node not in G:
        return {"error": f"Node {node} not in graph '{graph_id}'"}

    betweenness = nx.betweenness_centrality(G)
    return {"node": node, "betweenness_centrality": round(betweenness[node], 4)}

@network_mcp.tool()
def find_shortest_path(
    graph_id: str,
    source: int,
    target: int
) -> Dict[str, Any]:
    """
    Find shortest path between two nodes.

    Args:
        graph_id: ID of the graph to search
        source: Starting node ID
        target: Destination node ID

    Returns:
        Dictionary with path and length, or error if no path exists
    """
    G = cache.get(f"graph:{graph_id}")
    if G is None:
        return {"error": f"Graph '{graph_id}' not found"}

    try:
        path = nx.shortest_path(G, source, target)
        return {"found": True, "path": path, "length": len(path) - 1}
    except nx.NetworkXNoPath:
        return {"found": False, "message": f"No path exists between {source} and {target}"}
    except nx.NodeNotFound:
        return {"found": False, "message": "One or both nodes not in graph"}

if __name__ == "__main__":
    network_mcp.run()
Overwriting network_analysis_server.py

Key Features: State management via global cache, type safety, domain expertise encoded in docstrings, and structured error handling.

Testing the Network Analysis Server

from fastmcp import Client

async with Client("network_analysis_server.py") as client:
    tools = await client.list_tools()
    print("Available tools:", [t.name for t in tools])
    
    # Create a social network
    result = await client.call_tool(
        "create_network",
        {"graph_id": "social", "edges": [[1,2], [1,3], [2,3], [3,4], [4,5], [5,6], [6,4], [3,7], [7,8], [8,9], [9,7]]}
    )
    print(f"\nNetwork created: {result.data}")
    
    # Analyze centrality
    result = await client.call_tool("calculate_degree_centrality", {"graph_id": "social", "node": 3})
    print(f"Node 3 centrality: {result.data}")
    
    result = await client.call_tool("calculate_betweenness", {"graph_id": "social", "node": 3})
    print(f"Node 3 betweenness: {result.data}")
    
    # Find path
    result = await client.call_tool("find_shortest_path", {"graph_id": "social", "source": 1, "target": 9})
    print(f"Path 1→9: {result.data}")
Available tools: ['create_network', 'calculate_degree_centrality', 'calculate_betweenness', 'find_shortest_path']

Network created: {'graph_id': 'social', 'num_nodes': 9, 'num_edges': 11, 'density': 0.3056}
Node 3 centrality: {'node': 3, 'degree': 4, 'normalized_centrality': 0.5}
Node 3 betweenness: {'node': 3, 'betweenness_centrality': 0.75}
Path 1→9: {'found': True, 'path': [1, 3, 7, 9], 'length': 3}

State persists across tool calls within a single server session, and results are structured with semantic fields.
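
The error path is structured as well. A quick sketch, run against a fresh server session where no graphs exist yet:

from fastmcp import Client

async with Client("network_analysis_server.py") as client:
    result = await client.call_tool(
        "calculate_degree_centrality", {"graph_id": "missing", "node": 1}
    )
    print(result.data)  # expect: {'error': "Graph 'missing' not found. Create it first."}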

Game Theory MCP Server

%%file game_server.py

from fastmcp import FastMCP
import quantecon.game_theory as gt
import numpy as np
from typing import Dict, List, Any

game_mcp = FastMCP("GameTheory")
game_cache: Dict[str, Any] = {}

@game_mcp.tool()
def create_game(
    game_id: str,
    payoff_matrix_p1: List[List[float]],
    payoff_matrix_p2: List[List[float]]
) -> Dict[str, Any]:
    """
    Create a two-player normal-form game.

    Args:
        game_id: Unique identifier for this game
        payoff_matrix_p1: Payoff matrix for Player 1 (rows = P1 strategies, cols = P2 strategies)
        payoff_matrix_p2: Payoff matrix for Player 2 (rows = P1 strategies, cols = P2 strategies)

    Returns:
        Game statistics and confirmation
    """
    p1_payoffs = np.array(payoff_matrix_p1)
    p2_payoffs = np.array(payoff_matrix_p2)
    # Stack into a payoff profile array of shape (n1, n2, 2); the last axis holds (P1, P2) payoffs
    game = gt.NormalFormGame(np.dstack([p1_payoffs, p2_payoffs]))
    game_cache[game_id] = game
    return {
        "game_id": game_id,
        "num_players": 2,
        "p1_strategies": p1_payoffs.shape[0],
        "p2_strategies": p1_payoffs.shape[1],
        "message": f"Game '{game_id}' created successfully"
    }

@game_mcp.tool()
def find_pure_nash_equilibria(game_id: str) -> Dict[str, Any]:
    """
    Find all pure strategy Nash equilibria in the game.

    A Nash equilibrium is a strategy profile where no player can improve
    by unilaterally changing their strategy.

    Args:
        game_id: ID of the game to analyze

    Returns:
        List of Nash equilibria (strategy profiles) and their payoffs
    """
    game = game_cache.get(game_id)
    if game is None:
        return {"error": f"Game '{game_id}' not found"}

    # pure_nash_brute is a module-level function in quantecon.game_theory
    equilibria = gt.pure_nash_brute(game)
    results = []
    for eq in equilibria:
        # payoff_profile_array[profile] returns every player's payoff at that action profile
        payoffs = game.payoff_profile_array[tuple(eq)]
        results.append({"strategies": [int(a) for a in eq], "payoffs": [float(p) for p in payoffs]})

    return {
        "game_id": game_id,
        "num_equilibria": len(results),
        "equilibria": results
    }

print("✓ Game Theory MCP server created")
Overwriting game_server.py
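
A quick test sketch mirroring the network-server test above. The payoffs below are the classic Prisoner's Dilemma, so the only pure equilibrium should be mutual defection: strategies (1, 1) with payoffs (-2, -2).

from fastmcp import Client

async with Client("game_server.py") as client:
    await client.call_tool("create_game", {
        "game_id": "pd",
        "payoff_matrix_p1": [[-1, -3], [0, -2]],
        "payoff_matrix_p2": [[-1, 0], [-3, -2]],
    })
    result = await client.call_tool("find_pure_nash_equilibria", {"game_id": "pd"})
    print(result.data)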

Agent-Based Model Controller

%%file abm_server.py

from fastmcp import FastMCP
from mesa import Agent, Model
from mesa.time import RandomActivation
from mesa.space import SingleGrid
from mesa.datacollection import DataCollector
from typing import Dict, Any
import random

abm_mcp = FastMCP("AgentBasedModels")
abm_cache: Dict[str, Any] = {}

class SchellingAgent(Agent):
    def __init__(self, unique_id, model, agent_type):
        super().__init__(unique_id, model)
        self.type = agent_type

    def step(self):
        neighbors = self.model.grid.get_neighbors(self.pos, moore=True, include_center=False)
        similar = sum(1 for n in neighbors if n.type == self.type)
        total = len(neighbors)
        if total > 0 and (similar / total) < self.model.homophily:
            self.model.grid.move_to_empty(self)

class SchellingModel(Model):
    def __init__(self, width=20, height=20, density=0.8, minority_pc=0.2, homophily=3):
        super().__init__()
        self.homophily = homophily / 8
        self.schedule = RandomActivation(self)
        self.grid = SingleGrid(width, height, torus=True)
        n_agents = int(width * height * density)
        # Shuffle all grid cells and fill the first n_agents, avoiding collisions in SingleGrid
        cells = [(x, y) for x in range(width) for y in range(height)]
        random.shuffle(cells)
        for i, pos in enumerate(cells[:n_agents]):
            agent_type = 1 if random.random() < minority_pc else 0
            agent = SchellingAgent(i, self, agent_type)
            self.schedule.add(agent)
            self.grid.place_agent(agent, pos)
        self.datacollector = DataCollector(model_reporters={"segregation": lambda m: self.measure_segregation(m)})

    @staticmethod
    def measure_segregation(model):
        similar_neighbors = total_neighbors = 0
        for agent in model.schedule.agents:
            neighbors = model.grid.get_neighbors(agent.pos, moore=True, include_center=False)
            if neighbors:
                similar_neighbors += sum(1 for n in neighbors if n.type == agent.type)
                total_neighbors += len(neighbors)
        return similar_neighbors / total_neighbors if total_neighbors > 0 else 0

    def step(self):
        self.datacollector.collect(self)
        self.schedule.step()

@abm_mcp.tool()
def create_schelling_model(
    model_id: str,
    width: int = 20,
    height: int = 20,
    density: float = 0.8,
    minority_percent: float = 0.2,
    homophily: int = 3
) -> Dict[str, Any]:
    """
    Create a Schelling segregation model.

    The Schelling model demonstrates how mild preferences for similar neighbors
    can lead to high levels of segregation.

    Args:
        model_id: Unique identifier for this model
        width: Grid width (default 20)
        height: Grid height (default 20)
        density: Fraction of cells occupied (0-1, default 0.8)
        minority_percent: Fraction of agents that are minority type (0-1, default 0.2)
        homophily: Number of similar neighbors desired (out of 8, default 3)

    Returns:
        Model configuration and initial state
    """
    model = SchellingModel(width, height, density, minority_percent, homophily)
    abm_cache[model_id] = model
    return {
        "model_id": model_id,
        "width": width,
        "height": height,
        "num_agents": len(model.schedule.agents),
        "initial_segregation": round(model.measure_segregation(model), 3)
    }

@abm_mcp.tool()
def step_model(
    model_id: str,
    num_steps: int = 1
) -> Dict[str, Any]:
    """
    Run the model for a specified number of steps.

    Args:
        model_id: ID of the model to step
        num_steps: Number of steps to run (default 1)

    Returns:
        Segregation metrics after stepping
    """
    model = abm_cache.get(model_id)
    if model is None:
        return {"error": f"Model '{model_id}' not found"}

    for _ in range(num_steps):
        model.step()

    df = model.datacollector.get_model_vars_dataframe()
    return {
        "model_id": model_id,
        "steps_completed": num_steps,
        "total_steps": len(df),
        "current_segregation": round(df['segregation'].iloc[-1], 3),
        "initial_segregation": round(df['segregation'].iloc[0], 3)
    }

print("✓ ABM MCP server created")
Overwriting abm_server.py
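
A quick test sketch for the ABM server; the segregation measure should generally rise over the run (exact values depend on the random seed):

from fastmcp import Client

async with Client("abm_server.py") as client:
    created = await client.call_tool("create_schelling_model", {"model_id": "demo"})
    print(created.data)
    stepped = await client.call_tool("step_model", {"model_id": "demo", "num_steps": 20})
    print(stepped.data)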

We now have three MCP servers exposing capabilities from network science, game theory, and ABMs. The common patterns: global cache for state, type safety, clear errors, and structured returns.

Resources and Prompts

MCP Resources: Read-Only Data

In addition to Tools, MCP servers can expose Resources (read-only data) and Prompts (templates).

from fastmcp import FastMCP
import networkx as nx
from typing import Any, Dict

network_resources_mcp = FastMCP("NetworkResources")

# Module-level store for graphs, consistent with the tool servers above
graphs: Dict[str, Any] = {}

@network_resources_mcp.resource("network://{graph_id}/summary")
def get_network_summary(graph_id: str) -> str:
    """Get a text summary of network properties."""
    G = graphs.get(graph_id)
    if G is None:
        return f"Error: Graph '{graph_id}' not found"
    return f"Network {graph_id}: {G.number_of_nodes()} nodes, {G.number_of_edges()} edges, density {nx.density(G):.4f}"

print("✓ Network resources server created")
✓ Network resources server created
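
A sketch of how a client reads this resource, using the in-memory transport and first seeding the graphs store (attribute access follows the MCP resource-content types):

from fastmcp import Client
import networkx as nx

graphs["demo"] = nx.path_graph(4)

async with Client(network_resources_mcp) as client:
    contents = await client.read_resource("network://demo/summary")
    print(contents[0].text)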

PydanticAI Integration

PydanticAI has native MCP support. MCP servers are treated as toolsets:

from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset
from dotenv import load_dotenv

load_dotenv()

from network_analysis_server import network_mcp
toolset = FastMCPToolset(network_mcp)
agent = Agent('anthropic:claude-haiku-4-5', toolsets=[toolset])

async def main():
    prompt = """
    Create a social network with friendships: 1↔2, 1↔3, 1↔4, 2↔3, 3↔4.
    Person 5 is isolated. Analyze the structure.
    """
    result = await agent.run(prompt)
    print(result.output)

await main()
## Social Network Analysis

### Network Overview
- **Nodes**: 4 connected + 1 isolated (Person 5)
- **Edges**: 5 friendships
- **Density**: 0.833 (very dense - 83.3% of possible connections exist)

### Connected Component (Persons 1-4)

**Degree Centrality** (number of direct friends):
| Person | Friends | Normalized Centrality |
|--------|---------|----------------------|
| 1 | 3 | 1.0 ⭐ |
| 2 | 2 | 0.667 |
| 3 | 3 | 1.0 ⭐ |
| 4 | 2 | 0.667 |

**Betweenness Centrality** (how often they bridge other connections):
| Person | Betweenness |
|--------|------------|
| 1 | 0.167 🌉 |
| 3 | 0.167 🌉 |
| 2 | 0.0 |
| 4 | 0.0 |

### Key Insights

1. **Hub Nodes**: Persons 1 and 3 are the most influential, each connected to 3 others
2. **Bridge Role**: Both Persons 1 and 3 serve as bridges connecting different parts of the network
3. **Highly Connected**: The network is very dense (0.833), forming an almost complete subgraph
4. **Isolated Person**: Person 5 has no connections and is completely isolated from the social network
5. **Network Shape**: The connected component forms a near-complete graph with only 1 missing edge (Person 2 and 4 are not directly connected)

This is a tightly-knit group with Person 5 completely outside the social circle!

FastMCPToolset can connect to:

  • Python scripts: FastMCPToolset('my_server.py')
  • HTTP URLs: FastMCPToolset('http://localhost:8000/mcp')
  • FastMCP objects: FastMCPToolset(network_mcp) (zero network overhead)

Multi-Server Agents

Connect to multiple MCP servers:

from pydantic_ai.mcp import MCPServerStdio

network_server = MCPServerStdio('python', args=['network_analysis_server.py'])
game_server = MCPServerStdio('python', args=['game_server.py'])

agent = Agent('anthropic:claude-sonnet-4-5', toolsets=[network_server, game_server])

The agent orchestrates across servers like composing Lego blocks.
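
A hedged sketch of a cross-server request, assuming both server files from this lecture are on disk with the stdio entry points shown above:

async with agent:
    result = await agent.run(
        "Create a 4-node ring network called 'ring' and report its density. "
        "Then create a 2x2 coordination game called 'coord' with payoffs "
        "[[2, 0], [0, 1]] for both players and list its pure Nash equilibria."
    )
    print(result.output)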

# Example with file-based server
toolset = FastMCPToolset('network_analysis_server.py')
agent = Agent('anthropic:claude-haiku-4-5', toolsets=[toolset], system_prompt="You are a helpful network analysis assistant.")

async with agent:
    result = await agent.run('Create a path network (1-2-3-4-5) and find the shortest path from 1 to 5')
    print("Agent Response:", result.output)
Agent Response: Perfect! I've successfully created a path network and found the shortest path. Here are the results:

**Network Created:**
- **Graph ID:** path_network
- **Nodes:** 5
- **Edges:** 4
- **Density:** 0.4

**Shortest Path from Node 1 to Node 5:**
- **Path:** 1 → 2 → 3 → 4 → 5
- **Length:** 4 hops

The path network is a simple linear chain where each node connects to the next one. The shortest (and only) path from node 1 to node 5 traverses through all intermediate nodes, requiring 4 steps.

MCP Prompts

from fastmcp import FastMCP

network_prompts_mcp = FastMCP("NetworkPrompts")

@network_prompts_mcp.prompt()
def analyze_network_structure(graph_id: str) -> str:
    """Generate a comprehensive network analysis prompt."""
    return f"""
Please analyze the network '{graph_id}':
1. Basic Statistics (nodes, edges, density)
2. Centrality Analysis (top nodes by degree and betweenness)
3. Structural Properties (connectivity, clustering)
4. Interpretation (information flow implications)
"""

print("✓ Network prompts server created")
✓ Network prompts server created
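
Clients retrieve prompts by name with arguments. A sketch using the in-memory transport (the return value follows the MCP GetPromptResult type):

from fastmcp import Client

async with Client(network_prompts_mcp) as client:
    rendered = await client.get_prompt("analyze_network_structure", {"graph_id": "social"})
    print(rendered.messages[0].content)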

When to use each:

  • Tools: Actions requiring computation
  • Resources: Read-only data access
  • Prompts: Guide users with common workflows

Deployment Options

Local Development: stdio

if __name__ == "__main__":
    mcp.run()  # stdio by default

Claude Desktop Integration

fastmcp install claude-desktop network_server.py

This adds your server to Claude Desktop, making tools available in natural language.

HTTP Deployment

if __name__ == "__main__":
    mcp.run(transport='streamable-http', port=8000)

Connect from PydanticAI:

from pydantic_ai.mcp import MCPServerStreamableHTTP
server = MCPServerStreamableHTTP('http://localhost:8000/mcp')
agent = Agent('anthropic:claude-sonnet-4-5', toolsets=[server])
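
You can also smoke-test the HTTP endpoint directly with the fastmcp Client; a sketch assuming the server is listening on port 8000 at the default /mcp path:

from fastmcp import Client

async with Client("http://localhost:8000/mcp") as client:
    print([t.name for t in await client.list_tools()])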

Configuration Files

{
  "mcpServers": {
    "network-analysis": {"command": "python", "args": ["network_server.py"]},
    "game-theory": {"url": "http://localhost:8001/mcp"}
  }
}

Load with:

from pydantic_ai.mcp import load_mcp_servers
servers = load_mcp_servers('mcp_config.json')
agent = Agent('anthropic:claude-sonnet-4-5', toolsets=servers)

Exercises

Exercise 1: Conceptual Understanding

Part A: Explain the difference between embedded tools (@agent.tool) and MCP servers (@mcp.tool). When would you use each?

Part B: For each scenario, identify Tool, Resource, or Prompt:

  1. Providing access to a dataset of network structures
  2. Computing the Nash equilibrium of a game
  3. Guiding users through a network analysis workflow
  4. Running a simulation for 1000 steps
  5. Retrieving historical simulation results

Exercise 2: Build a Statistics MCP Server

Create an MCP server with these tools:

  1. calculate_mean(data: List[float]) -> float
  2. calculate_std(data: List[float]) -> float
  3. find_outliers(data: List[float], threshold: float = 2.0) -> List[float]

Include proper docstrings and handle edge cases.

from fastmcp import FastMCP
from typing import List

stats_mcp = FastMCP("Statistics")

@stats_mcp.tool()
def calculate_mean(data: List[float]) -> float:
    """Calculate the arithmetic mean of a list of numbers."""
    # TODO: implement
    pass

# TODO: Add other tools

Exercise 3: Extend the Network Analysis Server

Add these tools:

  1. calculate_clustering_coefficient(graph_id: str) -> float using nx.average_clustering(G)
  2. find_communities(graph_id: str) -> List[List[int]] using nx.community.greedy_modularity_communities(G)
  3. calculate_diameter(graph_id: str) -> int using nx.diameter(G) (handle disconnected graphs)

from typing import Any, Dict

@network_mcp.tool()
def calculate_clustering_coefficient(graph_id: str) -> Dict[str, Any]:
    """Calculate the average clustering coefficient of a stored graph."""
    # TODO: implement using nx.average_clustering on the cached graph
    pass

Exercise 4: Design an MCP Server for Your Domain

Choose a domain and write specifications for 5 tools:

  • Tool name, parameters (with types), return value
  • Docstring explaining what it does
  • When/why you’d use it

Options: Blockchain Analysis, Auction Mechanisms, or your research domain.

Exercise 5: MCP vs Embedded Tools Trade-offs

For each scenario, discuss whether to use MCP or embedded tools, and analyze complexity, maintenance, reusability, performance, and security:

  1. Single-user research script
  2. Multi-user web application
  3. Educational platform for students

Further Reading

Official Documentation:

Course-Related:

Academic Papers:

Next Lecture Preview: