mirror of https://github.com/dapr/dapr-agents.git

Reorganize MCP quickstart examples and add SSE implementation

Signed-off-by: Bilgin Ibryam <bibryam@gmail.com>

Updated references to align with namechange

Signed-off-by: Bilgin Ibryam <bibryam@gmail.com>

Fix code formatting with ruff

Signed-off-by: Bilgin Ibryam <bibryam@gmail.com>

This commit is contained in:
parent 8741289e7d
commit 7571cf2a08

@@ -3,19 +3,19 @@ services:
     image: localhost:5001/workflow-llm:latest
     build:
       context: ../
-      dockerfile: ./07-k8s-multi-agent-workflow/services/workflow-llm/Dockerfile
+      dockerfile: ./05-multi-agent-workflow-k8s/services/workflow-llm/Dockerfile
   elf:
     image: localhost:5001/elf:latest
     build:
       context: ../
-      dockerfile: ./07-k8s-multi-agent-workflow/services/elf/Dockerfile
+      dockerfile: ./05-multi-agent-workflow-k8s/services/elf/Dockerfile
   hobbit:
     image: localhost:5001/hobbit:latest
     build:
       context: ../
-      dockerfile: ./07-k8s-multi-agent-workflow/services/hobbit/Dockerfile
+      dockerfile: ./05-multi-agent-workflow-k8s/services/hobbit/Dockerfile
   wizard:
     image: localhost:5001/wizard:latest
     build:
       context: ../
-      dockerfile: ./07-k8s-multi-agent-workflow/services/wizard/Dockerfile
+      dockerfile: ./05-multi-agent-workflow-k8s/services/wizard/Dockerfile

@@ -0,0 +1,155 @@
# MCP Agent with SSE Transport

This quickstart demonstrates how to build a simple agent that uses tools exposed via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) over SSE (Server-Sent Events) transport. You'll learn how to create MCP tools in a standalone server and connect to them using SSE communication.

## Prerequisites

- Python 3.10 (recommended)
- pip package manager
- OpenAI API key

## Environment Setup

```bash
# Create a virtual environment
python3.10 -m venv .venv

# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

## Configuration

Create a `.env` file in the project root:

```env
OPENAI_API_KEY=your_api_key_here
```

Replace `your_api_key_here` with your actual OpenAI API key.

## Examples

### MCP Tool Creation

First, create MCP tools in `tools.py`:

```python
mcp = FastMCP("TestServer")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."
```

### SSE Server Creation

Set up the SSE server for your MCP tools in `server.py`.
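
Condensed from the `server.py` included in this quickstart, the server wires the FastMCP instance from `tools.py` into a Starlette app with an `/sse` endpoint for the event stream and a `/messages/` mount for client posts:

```python
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Mount, Route

from tools import mcp  # the FastMCP instance defined above

sse = SseServerTransport("/messages/")

async def handle_sse(request):
    # Bridge the incoming SSE connection to the MCP server's run loop
    async with sse.connect_sse(request.scope, request.receive, request._send) as (
        read_stream,
        write_stream,
    ):
        await mcp._mcp_server.run(
            read_stream,
            write_stream,
            mcp._mcp_server.create_initialization_options(),
        )

app = Starlette(routes=[
    Route("/sse", endpoint=handle_sse),
    Mount("/messages/", app=sse.handle_post_message),
])
```

The full `server.py` adds logging and a small CLI around this and serves the app with uvicorn.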

### Agent Creation

Create the agent that connects to these tools in `app.py` over MCP with SSE transport:

```python
# Load MCP tools from server using SSE
client = MCPClient()
await client.connect_sse("local", url="http://localhost:8000/sse")
tools = client.get_all_tools()

# Create the Weather Agent using MCP tools
weather_agent = AssistantAgent(
    role="Weather Assistant",
    name="Stevie",
    goal="Help humans get weather and location info using smart tools.",
    instructions=["Instructions go here"],
    tools=tools,
    message_bus_name="messagepubsub",
    state_store_name="workflowstatestore",
    state_key="workflow_state",
    agents_registry_store_name="agentstatestore",
    agents_registry_key="agents_registry",
).as_service(port=8001)
```

### Running the Example

1. Start the MCP server in SSE mode:

```bash
python server.py --server_type sse --port 8000
```

2. In a separate terminal window, start the agent with Dapr:

```bash
dapr run --app-id weatherappmcp --app-port 8001 --dapr-http-port 3500 --resources-path ./components/ -- python app.py
```

3. Send a test request to the agent:

```bash
curl -X POST http://localhost:8001/start-workflow \
  -H "Content-Type: application/json" \
  -d '{"task": "What is the weather in New York?"}'
```

**Expected output:** The agent will initialize the MCP client, connect to the MCP tool server via SSE transport, and fetch weather information for New York using the MCP tools. The results will be stored in the configured Dapr state stores.

## Key Concepts

### MCP Tool Definition
- The `@mcp.tool()` decorator registers functions as MCP tools
- Each tool has a docstring that helps the LLM understand its purpose
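
`tools.py` in this quickstart registers a second tool in exactly the same way; any decorated async function with a clear docstring becomes callable by the agent:

```python
@mcp.tool()
async def jump(distance: str) -> str:
    """Simulate a jump of a given distance."""
    return f"I jumped the following distance: {distance}"
```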

### SSE Transport
- SSE (Server-Sent Events) transport enables network-based communication
- Perfect for distributed setups where tools run as separate services
- Allows multiple agents to connect to the same tool server

### Dapr Integration
- The `AssistantAgent` class creates a service that runs inside a Dapr workflow
- Dapr components (pubsub, state stores) manage message routing and state persistence, as shown in the example below
- The agent's conversation history and tool calls are saved in Dapr state stores
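
The components referenced above are ordinary Dapr component definitions kept under `./components/` (the folder passed to `dapr run` via `--resources-path`). For example, the workflow state store used by the agent is a Redis state store; this file ships with the quickstart:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflowstatestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```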

### Execution Flow
1. MCP server starts with tools exposed via an SSE endpoint
2. Agent connects to the MCP server via SSE
3. The agent receives a user query via HTTP
4. The LLM determines which MCP tool to use
5. The agent sends the tool call to the MCP server
6. The server executes the tool and returns the result
7. The agent formulates a response based on the tool result
8. State is saved in the configured Dapr state store

## Alternative: Using STDIO Transport

While this quickstart uses SSE transport, MCP also supports STDIO for process-based communication. This approach is useful when:

- Tools should run as a local subprocess of the agent rather than as a separate service
- Simplicity is preferred over network distribution
- You're developing locally and don't need separate services

To explore STDIO transport, check out the related [MCP with STDIO Transport quickstart](../07-agent-mcp-client-stdio).
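
For orientation, the STDIO variant of the connection step (taken from that quickstart's `agent.py`) looks like this; the tools module is spawned as a subprocess and spoken to over stdin/stdout:

```python
import sys

client = MCPClient()
# Spawn tools.py as a subprocess and communicate over stdin/stdout
await client.connect_stdio(
    server_name="local",
    command=sys.executable,  # use the current Python interpreter
    args=["tools.py"],       # run the tools module directly
)
tools = client.get_all_tools()
```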

## Troubleshooting

1. **OpenAI API Key**: Ensure your key is correctly set in the `.env` file
2. **Server Connection**: If you see SSE connection errors, make sure the server is running on the expected port (a quick check is shown below)
3. **Dapr Setup**: Verify that Dapr is installed and initialized and that Redis is running for the pubsub and state store components
4. **Module Import Errors**: Verify that all dependencies are installed correctly
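
One quick way to verify the SSE endpoint is reachable (assuming the default port 8000 used above) is to open it directly; a healthy server keeps the connection open and starts streaming events rather than returning an error:

```bash
# -N disables buffering so streamed events are printed as they arrive;
# press Ctrl+C to close the connection.
curl -N http://localhost:8000/sse
```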

## Next Steps

After completing this quickstart, you might want to explore:

- Creating more complex MCP tools with actual API integrations
- Deploying your agent as a Dapr microservice in Kubernetes
- Exploring the [MCP specification](https://modelcontextprotocol.io/) for advanced usage

@@ -0,0 +1,46 @@
{
    "instances": {
        "5d29bf0f6b90420fb4fbda02ea8f3abd": {
            "input": "What is the weather in New York",
            "output": "The current temperature in New York is 63\u00b0F. If you need any more information or have other questions, feel free to ask!",
            "start_time": "2025-05-02T20:30:21.825508",
            "end_time": "2025-05-02T20:30:23.623042",
            "messages": [
                {
                    "id": "d908b272-55bd-4374-b7ab-f6c69e394709",
                    "role": "user",
                    "content": "What is the weather in New York",
                    "timestamp": "2025-05-02T20:30:21.831930",
                    "name": null
                },
                {
                    "id": "53da56c6-f149-45c5-a712-d2f8f9e4ec90",
                    "role": "assistant",
                    "content": "The current temperature in New York is 63\u00b0F. If you need any more information or have other questions, feel free to ask!",
                    "timestamp": "2025-05-02T20:30:23.620126",
                    "name": null
                }
            ],
            "last_message": {
                "id": "53da56c6-f149-45c5-a712-d2f8f9e4ec90",
                "role": "assistant",
                "content": "The current temperature in New York is 63\u00b0F. If you need any more information or have other questions, feel free to ask!",
                "timestamp": "2025-05-02T20:30:23.620126",
                "name": null
            },
            "tool_history": [
                {
                    "content": "New York: 63F.",
                    "role": "tool",
                    "tool_call_id": "call_qc2s8yGF5clDeEEmwzwIXVvG",
                    "id": "aebc3147-4ca6-4747-9ef8-03c1eb526f70",
                    "function_name": "LocalGetWeather",
                    "function_args": "{\"location\":\"New York\"}",
                    "timestamp": "2025-05-02T20:30:22.632999"
                }
            ],
            "source": null,
            "source_workflow_instance_id": null
        }
    }
}

@@ -0,0 +1,46 @@
import asyncio
import logging
from dotenv import load_dotenv

from dapr_agents import AssistantAgent
from dapr_agents.tool.mcp import MCPClient


async def main():
    try:
        # Load MCP tools from server (stdio or sse)
        client = MCPClient()
        await client.connect_sse("local", url="http://localhost:8000/sse")

        # Convert MCP tools to AgentTool list
        tools = client.get_all_tools()

        # Create the Weather Agent using those tools
        weather_agent = AssistantAgent(
            role="Weather Assistant",
            name="Stevie",
            goal="Help humans get weather and location info using smart tools.",
            instructions=[
                "Respond clearly and helpfully to weather-related questions.",
                "Use tools when appropriate to fetch or simulate weather data.",
                "You may sometimes jump after answering the weather question.",
            ],
            tools=tools,
            message_bus_name="messagepubsub",
            state_store_name="workflowstatestore",
            state_key="workflow_state",
            agents_registry_store_name="agentstatestore",
            agents_registry_key="agents_registry",
        ).as_service(port=8001)

        # Start the FastAPI agent service
        await weather_agent.start()

    except Exception as e:
        logging.exception("Error starting weather agent service", exc_info=e)


if __name__ == "__main__":
    load_dotenv()
    logging.basicConfig(level=logging.INFO)
    asyncio.run(main())

@@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagepubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""

@@ -0,0 +1,16 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agentstatestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: keyPrefix
    value: none
  - name: actorStateStore
    value: "true"

@@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflowstatestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""

@@ -0,0 +1,6 @@
dapr-agents>=0.5.0
python-dotenv
mcp
starlette
uvicorn
requests

@@ -0,0 +1,83 @@
import argparse
import logging
import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Mount, Route

from mcp.server.sse import SseServerTransport
from tools import mcp

# ─────────────────────────────────────────────
# Logging Configuration
# ─────────────────────────────────────────────
logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("mcp-server")


# ─────────────────────────────────────────────
# Starlette App Factory
# ─────────────────────────────────────────────
def create_starlette_app():
    """
    Create a Starlette app wired with the MCP server over SSE transport.
    """
    logger.debug("Creating Starlette app with SSE transport")
    sse = SseServerTransport("/messages/")

    async def handle_sse(request: Request) -> None:
        logger.info("🔌 SSE connection established")
        async with sse.connect_sse(request.scope, request.receive, request._send) as (
            read_stream,
            write_stream,
        ):
            logger.debug("Starting MCP server run loop over SSE")
            await mcp._mcp_server.run(
                read_stream,
                write_stream,
                mcp._mcp_server.create_initialization_options(),
            )
        logger.debug("MCP run loop completed")

    return Starlette(
        debug=False,
        routes=[
            Route("/sse", endpoint=handle_sse),
            Mount("/messages/", app=sse.handle_post_message),
        ],
    )


# ─────────────────────────────────────────────
# CLI Entrypoint
# ─────────────────────────────────────────────
def main():
    parser = argparse.ArgumentParser(description="Run an MCP tool server.")
    parser.add_argument(
        "--server_type",
        choices=["stdio", "sse"],
        default="stdio",
        help="Transport to use",
    )
    parser.add_argument(
        "--host", default="127.0.0.1", help="Host to bind to (SSE only)"
    )
    parser.add_argument(
        "--port", type=int, default=8000, help="Port to bind to (SSE only)"
    )
    args = parser.parse_args()

    logger.info(f"🚀 Starting MCP server in {args.server_type.upper()} mode")

    if args.server_type == "stdio":
        mcp.run("stdio")
    else:
        app = create_starlette_app()
        logger.info(f"🌐 Running SSE server on {args.host}:{args.port}")
        uvicorn.run(app, host=args.host, port=args.port)


if __name__ == "__main__":
    main()

@@ -0,0 +1,17 @@
from mcp.server.fastmcp import FastMCP
import random

mcp = FastMCP("TestServer")


@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."


@mcp.tool()
async def jump(distance: str) -> str:
    """Simulate a jump of a given distance."""
    return f"I jumped the following distance: {distance}"

@@ -0,0 +1,135 @@
# MCP Agent with STDIO Transport

This quickstart demonstrates how to build a simple agent that uses tools exposed via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) over STDIO transport. You'll learn how to create MCP tools in a standalone module and connect to them using STDIO communication.

## Prerequisites

- Python 3.10 (recommended)
- pip package manager
- OpenAI API key

## Environment Setup

```bash
# Create a virtual environment
python3.10 -m venv .venv

# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

## Configuration

Create a `.env` file in the project root:

```env
OPENAI_API_KEY=your_api_key_here
```

Replace `your_api_key_here` with your actual OpenAI API key.

## Examples

### MCP Tool Creation

First, create MCP tools in `tools.py`:

```python
mcp = FastMCP("TestServer")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."
```

### Agent Creation

Then, create the agent that connects to these tools in `agent.py` over MCP:

```python
client = MCPClient()

# Connect to MCP server using STDIO transport
await client.connect_stdio(
    server_name="local",
    command=sys.executable,  # Use the current Python interpreter
    args=["tools.py"],       # Run tools.py directly
)

# Get available tools from the MCP instance
tools = client.get_all_tools()

# Create the Weather Agent using MCP tools
weather_agent = Agent(
    name="Stevie",
    role="Weather Assistant",
    goal="Help humans get weather and location info using MCP tools.",
    instructions=["Instructions go here"],
    tools=tools,
)
```

### Running the Example

Run the agent script:

```bash
python agent.py
```

**Expected output:** The agent will initialize the MCP client, connect to the tools module via STDIO, and fetch weather information for New York using the MCP tools.

## Key Concepts

### MCP Tool Definition
- The `@mcp.tool()` decorator registers functions as MCP tools
- Each tool has a docstring that helps the LLM understand its purpose

### STDIO Transport
- STDIO transport uses standard input/output streams for communication
- No network ports or HTTP servers are required for this transport
- Ideal for local development and testing
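
In this quickstart the tools module serves itself over STDIO when it is executed directly, which is what the agent's `connect_stdio` call relies on; this excerpt is from the `tools.py` shipped with the example:

```python
# When run directly, serve the registered tools over STDIO
if __name__ == "__main__":
    mcp.run("stdio")
```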

### Agent Setup with MCP Client
- The `MCPClient` class manages connections to MCP tool servers
- `connect_stdio()` starts a subprocess and establishes communication
- The client translates MCP tools into agent tools automatically
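
Once connected, you can inspect the converted tools before handing them to the agent; `agent.py` in this quickstart prints them like this:

```python
tools = client.get_all_tools()
print("🔧 Available tools:", [t.name for t in tools])
```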

### Execution Flow
1. Agent starts the tools module as a subprocess
2. MCPClient connects to the subprocess via STDIO
3. The agent receives a user query
4. The LLM determines which MCP tool to use
5. The agent sends the tool call to the tools subprocess
6. The subprocess executes the tool and returns the result
7. The agent formulates a response based on the tool result

## Alternative: Using SSE Transport

While this quickstart uses STDIO transport, MCP also supports Server-Sent Events (SSE) for network-based communication. This approach is useful when:

- Tools need to run as separate services
- Tools are distributed across different machines
- You need long-running services that multiple agents can connect to

To explore SSE transport, check out the related [MCP with SSE Transport quickstart](../07-agent-mcp-client-sse).
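
For orientation, the SSE variant of the connection step (taken from that quickstart's `app.py`) replaces the subprocess launch with a URL; `"local"` is just the name under which the connection is registered:

```python
client = MCPClient()
# Connect to a separately running MCP server over SSE
await client.connect_sse("local", url="http://localhost:8000/sse")
tools = client.get_all_tools()
```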

## Troubleshooting

1. **OpenAI API Key**: Ensure your key is correctly set in the `.env` file
2. **Subprocess Communication**: If you see STDIO errors, make sure `tools.py` can run independently (a quick check is shown below)
3. **Module Import Errors**: Verify that all dependencies are installed correctly
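
A simple sanity check is to run the tools module on its own; it should start and wait silently for input on stdin (press Ctrl+C to exit) rather than fail with an import error:

```bash
python tools.py
```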

## Next Steps

After completing this quickstart, you might want to explore:

- The SSE transport example: [MCP with SSE Transport quickstart](../07-agent-mcp-client-sse)
- The [MCP specification](https://modelcontextprotocol.io/) for advanced usage

@@ -0,0 +1,49 @@
import asyncio
import logging
import sys
from dotenv import load_dotenv

from dapr_agents import Agent
from dapr_agents.tool.mcp import MCPClient

load_dotenv()


async def main():
    # Create the MCP client
    client = MCPClient()

    # Connect to MCP server using STDIO transport
    await client.connect_stdio(
        server_name="local",
        command=sys.executable,  # Use the current Python interpreter
        args=["tools.py"],  # Run tools.py directly
    )

    # Get available tools from the MCP instance
    tools = client.get_all_tools()
    print("🔧 Available tools:", [t.name for t in tools])

    # Create the Weather Agent using MCP tools
    weather_agent = Agent(
        name="Stevie",
        role="Weather Assistant",
        goal="Help humans get weather and location info using MCP tools.",
        instructions=[
            "Respond clearly and helpfully to weather-related questions.",
            "Use tools when appropriate to fetch or simulate weather data.",
            "You may sometimes jump after answering the weather question.",
        ],
        tools=tools,
    )

    # Run a sample query
    result = await weather_agent.run("What is the weather in New York?")
    print(result)

    # Clean up resources
    await client.close()


if __name__ == "__main__":
    asyncio.run(main())

@@ -0,0 +1,3 @@
dapr-agents>=0.5.0
python-dotenv
mcp

@@ -0,0 +1,22 @@
from mcp.server.fastmcp import FastMCP
import random

mcp = FastMCP("TestServer")


@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."


@mcp.tool()
async def jump(distance: str) -> str:
    """Simulate a jump of a given distance."""
    return f"I jumped the following distance: {distance}"


# When run directly, serve tools over STDIO
if __name__ == "__main__":
    mcp.run("stdio")