# Module 2: Custom Tools & MCP
## Learning Objectives

- Create custom tools using the `@tool` decorator
- Understand tool schemas (docstrings → LLM-readable descriptions)
- Build an MCP server with FastMCP
- Connect an agent to an MCP server
- See the full agentic loop with tool calling
## Part A: Custom Tools

### How Tools Work

When you decorate a function with `@tool`, Strands:
- Extracts the function name → tool name
- Parses the docstring → tool description for the LLM
- Reads type hints → parameter schema
- Registers it as a callable tool in the agent loop
```python
from strands import tool

@tool
def lookup_order(order_id: str) -> dict:
    """Look up a customer order by its order ID.

    Args:
        order_id: The order ID to look up (e.g., ORD-10001)
    """
    # Your implementation here
    return {"status": "delivered", ...}
```

The LLM sees the tool description and decides when to call it based on the user's request.
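The four extraction steps above can be sketched with nothing but the standard library. This is not Strands' actual implementation — just an illustration (the `describe_tool` helper is hypothetical) of how a decorator can derive a name, description, and parameter schema from a plain function:

```python
import inspect
from typing import get_type_hints

def describe_tool(func):
    """Build a tool schema the way a @tool-style decorator might:
    name from the function, description from the docstring,
    parameters from the type hints."""
    hints = get_type_hints(func)
    hints.pop("return", None)  # the return type is not a parameter
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {name: t.__name__ for name, t in hints.items()},
    }

def lookup_order(order_id: str) -> dict:
    """Look up a customer order by its order ID."""
    return {}

schema = describe_tool(lookup_order)
print(schema["name"])        # lookup_order
print(schema["parameters"])  # {'order_id': 'str'}
```

This schema (serialized to JSON) is essentially what the LLM sees when deciding whether the tool fits the user's request.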
### Hands-On: Agent with Tools

- **Open the tools module**

  ```sh
  code module_02_tools_mcp/tools.py
  ```
- **Review the custom tools**

  We define 5 tools:

  - `lookup_order`: Find order by ID
  - `search_products`: Search the product catalog
  - `search_faq`: Search the knowledge base
  - `create_support_ticket`: Create a ticket for escalation
  - `check_product_availability`: Check stock status
- **Run the agent with tools**

  ```sh
  python module_02_tools_mcp/tools.py
  ```
- **Test tool-calling behavior**

  ```
  You: What's the status of order ORD-10002?
  You: Do you have any headphones in stock?
  You: What's your return policy?
  You: I need to report a defective product, my name is Carol and email is carol@example.com
  ```

Notice how the agent automatically decides which tool to call based on your question. This is the model-driven approach: the LLM reasons about which tool is relevant.
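As a sketch of what one of these tools might look like, here is a hypothetical `lookup_order` backed by an in-memory dictionary (the data below is illustrative; the workshop code ships its own dataset, and in `tools.py` the function carries the `@tool` decorator from Strands):

```python
# Hypothetical in-memory data standing in for the workshop's dataset.
ORDERS = {
    "ORD-10001": {"status": "delivered", "tracking": "TRK-AAA111"},
    "ORD-10002": {"status": "shipped", "tracking": "TRK-BBB222"},
}

def lookup_order(order_id: str) -> dict:
    """Look up a customer order by its order ID.

    Args:
        order_id: The order ID to look up (e.g., ORD-10001)
    """
    order = ORDERS.get(order_id)
    if order is None:
        # Return an explicit error payload so the LLM can tell the user
        # the order was not found instead of failing silently.
        return {"error": f"No order found with ID {order_id}"}
    return {"order_id": order_id, **order}

print(lookup_order("ORD-10002")["status"])  # shipped
```

Returning an error dict (rather than raising) is a common choice here: the LLM receives the payload as a tool result and can explain the problem to the user in its own words.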
### The Tool-Calling Loop

```mermaid
sequenceDiagram
    participant U as User
    participant LLM as LLM (Claude Sonnet)
    participant T as lookup_order Tool
    U->>LLM: "What's the status of order ORD-10002?"
    LLM->>LLM: Reason: "I need to look up this order"
    LLM->>T: lookup_order(order_id="ORD-10002")
    T-->>LLM: {status: "shipped", tracking: "TRK-BBB222"}
    LLM->>LLM: Reason: "I have the details, let me respond"
    LLM-->>U: "Order ORD-10002 is currently shipped..."
```
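The loop in the diagram can be simulated without a model at all. The `fake_llm` below is a hard-coded stand-in for Claude, but the control flow — the model requests a tool call, the runtime executes it, and the result goes back to the model for a final answer — has the same shape as the agentic loop Strands runs for you:

```python
def lookup_order(order_id: str) -> dict:
    # Toy tool with hard-coded data mirroring the diagram above.
    return {"status": "shipped", "tracking": "TRK-BBB222"}

TOOLS = {"lookup_order": lookup_order}

def fake_llm(messages):
    """Stand-in for the model: on a user turn it requests a tool call;
    after seeing a tool result it answers the user."""
    last = messages[-1]
    if last["role"] == "user":
        return {"tool_call": {"name": "lookup_order",
                              "args": {"order_id": "ORD-10002"}}}
    result = last["content"]
    return {"answer": f"Order ORD-10002 is currently {result['status']}."}

def agent_loop(user_text):
    messages = [{"role": "user", "content": user_text}]
    while True:
        reply = fake_llm(messages)
        if "answer" in reply:                     # model is done
            return reply["answer"]
        call = reply["tool_call"]                 # model wants a tool
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})

print(agent_loop("What's the status of order ORD-10002?"))
# Order ORD-10002 is currently shipped.
```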
## Part B: Building an MCP Server

### What is MCP?

The Model Context Protocol (MCP) is an open standard for exposing tools to AI agents. Think of it as USB-C for AI tools: any MCP server works with any MCP client.
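On the wire, MCP messages are JSON-RPC 2.0. A client invoking a server tool sends a `tools/call` request shaped like the one below (the tool name and SKU here are illustrative):

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it for a tool call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_product_details",
        "arguments": {"sku": "SKU-123"},   # hypothetical SKU
    },
}

wire = json.dumps(request)          # what actually crosses the transport
decoded = json.loads(wire)
print(decoded["method"])            # tools/call
print(decoded["params"]["name"])    # get_product_details
```

Because every server speaks this same request/response shape, any MCP client can discover and call tools on any MCP server — that is the "USB-C" property.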
### Hands-On: Build an MCP Server

- **Open the MCP server**

  ```sh
  code module_02_tools_mcp/mcp_server.py
  ```
- **Review the server implementation**

  We use FastMCP to create a server that exposes product catalog tools:

  ```python
  from fastmcp import FastMCP

  mcp = FastMCP(name="TechStore Catalog Server")

  @mcp.tool()
  def get_product_details(sku: str) -> dict:
      """Get product info by SKU."""
      return PRODUCTS[sku]

  @mcp.resource("catalog://categories")
  def list_categories() -> str:
      """List available categories."""
      return "Electronics, Furniture, Accessories"
  ```

  MCP servers expose tools (callable functions) and resources (static data).
- **Open the MCP-connected agent**

  ```sh
  code module_02_tools_mcp/agent_with_mcp.py
  ```
- **Review the MCP connection**

  ```python
  from strands.tools.mcp import MCPClient
  from mcp import StdioServerParameters

  mcp_client = MCPClient(StdioServerParameters(
      command="python",
      args=["mcp_server.py"],
  ))

  with mcp_client:
      mcp_tools = mcp_client.list_tools()
      agent = Agent(tools=[search_faq, *mcp_tools])
  ```

  The agent now has tools from two sources: a local `@tool` function and MCP server tools.
- **Run the MCP-connected agent**

  ```sh
  python module_02_tools_mcp/agent_with_mcp.py
  ```
- **Test MCP tools**

  ```
  You: Show me all electronics products
  You: What's the status of order ORD-10001?
  You: Search for webcam
  ```
### MCP Transport Types

| Transport | Use Case | How It Works |
|---|---|---|
| stdio | Local servers, CLI tools | Subprocess via stdin/stdout |
| Streamable HTTP | Remote servers, web services | HTTP POST/GET requests |
| SSE | Streaming, real-time updates | Server-Sent Events |
In this workshop we use stdio, the simplest option. In production, you'd typically use HTTP for remote MCP servers.
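To make the stdio row concrete, here is a standard-library-only sketch with no MCP libraries involved. The "server" is a child process answering line-delimited JSON-RPC over stdin/stdout — the plumbing that the stdio transport standardizes (the child's canned `tools/list` response is illustrative):

```python
import json
import subprocess
import sys

# Hypothetical stand-in for an MCP server: a child process that reads one
# JSON-RPC request per line on stdin and echoes a response on stdout.
CHILD = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    resp = {'jsonrpc': '2.0', 'id': req['id'], 'result': {'tools': []}}\n"
    "    print(json.dumps(resp), flush=True)\n"
)

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

# Client side: send a request over the child's stdin, read its reply
# from stdout. This is what MCPClient does for you with stdio servers.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
print(response["result"])   # the server's (empty) tool list

proc.stdin.close()
proc.wait()
```

The HTTP transports carry the same JSON-RPC payloads; only the channel (HTTP requests instead of a subprocess pipe) changes.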
## Key Takeaways

- Tools extend agents from "answering" to "doing"
- The `@tool` decorator turns any Python function into an agent tool
- MCP separates tool servers from agents for reusability
- Agents can combine tools from multiple sources (local + MCP)
- The LLM decides which tool to call based on context