Integrate MCP Client + Enable Async Tool Execution for Agent Framework (#72)

* Enable async-first execution for AgentTool and ToolExecutor

* Update basic agent patterns to support async tool execution

* Update the assistant agentic workflow base to support async tool execution

* Fix #70: update input for workflow task agent execution

* Update agent actor base to support async tool execution

* Update logging in the workflow message decorator

* Enable lazy initialization of schemas for Orchestrator agentic workflows

* Update quickstart 05 apps to follow the correct structure

* Add a basic single-agent example of a Dapr workflow agent

* Integrate MCP client with full tool and prompt support

* Create MCP examples in the cookbook
Roberto Rodriguez, 2025-04-10 17:31:03 -04:00, committed by GitHub · commit c872c5a8bd (parent d6fc2c89f0)
44 changed files with 2531 additions and 444 deletions

View File

@ -111,7 +111,7 @@
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: GetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: Jump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 registered tools.\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
@ -136,7 +136,7 @@
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['chat_history'], pre_filled_variables={'name': 'Rob', 'role': 'Weather Assistant', 'goal': 'Help humans'}, messages=[('system', '# Today\\'s date is: March 04, 2025\\n\\n## Name\\nYour name is {{name}}.\\n\\n## Role\\nYour role is {{role}}.\\n\\n## Goal\\n{{goal}}.\\n\\n## Tools\\nYou have access ONLY to the following tools:\\nGetWeather: Get weather information for a specific location.. Args schema: {\\'location\\': {\\'description\\': \\'location to get weather for\\', \\'type\\': \\'string\\'}}\\nJump: Jump a specific distance.. Args schema: {\\'distance\\': {\\'description\\': \\'Distance for agent to jump\\', \\'type\\': \\'string\\'}}\\n\\nIf you think about using tool, it must use the correct tool JSON blob format as shown below:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\n\\n## ReAct Format\\nThought: Reflect on the current state of the conversation or task. If additional information is needed, determine if using a tool is necessary. When a tool is required, briefly explain why it is needed for the specific step at hand, and immediately follow this with an `Action:` statement to address that specific requirement. Avoid combining multiple tool requests in a single `Thought`. If no tools are needed, proceed directly to an `Answer:` statement.\\nAction:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\nObservation: Describe the result of the action taken.\\n... (repeat Thought/Action/Observation as needed, but **ALWAYS proceed to a final `Answer:` statement when you have enough information**)\\nThought: I now have sufficient information to answer the initial question.\\nAnswer: ALWAYS proceed to a final `Answer:` statement once enough information is gathered or if the tools do not provide the necessary data.\\n\\n### Providing a Final Answer\\nOnce you have enough information to answer the question OR if tools cannot provide the necessary data, respond using one of the following formats:\\n\\n1. **Direct Answer without Tools**:\\nThought: I can answer directly without using any tools. Answer: Direct answer based on previous interactions or current knowledge.\\n\\n2. **When All Needed Information is Gathered**:\\nThought: I now have sufficient information to answer the question. Answer: Complete final answer here.\\n\\n3. **If Tools Cannot Provide the Needed Information**:\\nThought: The available tools do not provide the necessary information. Answer: Explanation of limitation and relevant information if possible.\\n\\n### Key Guidelines\\n- Always Conclude with an `Answer:` statement.\\n- Ensure every response ends with an `Answer:` statement that summarizes the most recent findings or relevant information, avoiding incomplete thoughts.\\n- Direct Final Answer for Past or Known Information: If the user inquires about past interactions, respond directly with an Answer: based on the information in chat history.\\n- Avoid Repetitive Thought Statements: If the answer is ready, skip repetitive Thought steps and proceed directly to Answer.\\n- Minimize Redundant Steps: Use minimal Thought/Action/Observation cycles to arrive at a final Answer efficiently.\\n- Reference Past Information When Relevant: Use chat history accurately when answering questions about previous responses to avoid redundancy.\\n- Progressively Move Towards Finality: Reflect on the current step and avoid re-evaluating the entire user request each time. 
Aim to advance towards the final Answer in each cycle.\\n\\n## Chat History\\nThe chat history is provided to avoid repeating information and to ensure accurate references when summarizing past interactions.'), MessagePlaceHolder(variable_name=chat_history)], template_format='jinja2')"
"ChatPromptTemplate(input_variables=['chat_history'], pre_filled_variables={'name': 'Rob', 'role': 'Weather Assistant', 'goal': 'Help humans'}, messages=[('system', '# Today\\'s date is: April 05, 2025\\n\\n## Name\\nYour name is {{name}}.\\n\\n## Role\\nYour role is {{role}}.\\n\\n## Goal\\n{{goal}}.\\n\\n## Tools\\nYou have access ONLY to the following tools:\\nGetWeather: Get weather information for a specific location.. Args schema: {\\'location\\': {\\'description\\': \\'location to get weather for\\', \\'type\\': \\'string\\'}}\\nJump: Jump a specific distance.. Args schema: {\\'distance\\': {\\'description\\': \\'Distance for agent to jump\\', \\'type\\': \\'string\\'}}\\n\\nIf you think about using tool, it must use the correct tool JSON blob format as shown below:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\n\\n## ReAct Format\\nThought: Reflect on the current state of the conversation or task. If additional information is needed, determine if using a tool is necessary. When a tool is required, briefly explain why it is needed for the specific step at hand, and immediately follow this with an `Action:` statement to address that specific requirement. Avoid combining multiple tool requests in a single `Thought`. If no tools are needed, proceed directly to an `Answer:` statement.\\nAction:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\nObservation: Describe the result of the action taken.\\n... (repeat Thought/Action/Observation as needed, but **ALWAYS proceed to a final `Answer:` statement when you have enough information**)\\nThought: I now have sufficient information to answer the initial question.\\nAnswer: ALWAYS proceed to a final `Answer:` statement once enough information is gathered or if the tools do not provide the necessary data.\\n\\n### Providing a Final Answer\\nOnce you have enough information to answer the question OR if tools cannot provide the necessary data, respond using one of the following formats:\\n\\n1. **Direct Answer without Tools**:\\nThought: I can answer directly without using any tools. Answer: Direct answer based on previous interactions or current knowledge.\\n\\n2. **When All Needed Information is Gathered**:\\nThought: I now have sufficient information to answer the question. Answer: Complete final answer here.\\n\\n3. **If Tools Cannot Provide the Needed Information**:\\nThought: The available tools do not provide the necessary information. Answer: Explanation of limitation and relevant information if possible.\\n\\n### Key Guidelines\\n- Always Conclude with an `Answer:` statement.\\n- Ensure every response ends with an `Answer:` statement that summarizes the most recent findings or relevant information, avoiding incomplete thoughts.\\n- Direct Final Answer for Past or Known Information: If the user inquires about past interactions, respond directly with an Answer: based on the information in chat history.\\n- Avoid Repetitive Thought Statements: If the answer is ready, skip repetitive Thought steps and proceed directly to Answer.\\n- Minimize Redundant Steps: Use minimal Thought/Action/Observation cycles to arrive at a final Answer efficiently.\\n- Reference Past Information When Relevant: Use chat history accurately when answering questions about previous responses to avoid redundancy.\\n- Progressively Move Towards Finality: Reflect on the current step and avoid re-evaluating the entire user request each time. 
Aim to advance towards the final Answer in each cycle.\\n\\n## Chat History\\nThe chat history is provided to avoid repeating information and to ensure accurate references when summarizing past interactions.'), MessagePlaceHolder(variable_name=chat_history)], template_format='jinja2')"
]
},
"execution_count": 5,
@ -184,7 +184,6 @@
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.base:Pre-filled prompt template with variables: dict_keys(['chat_history'])\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
@ -215,7 +214,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: Hello Roberto! How can I assist you today with the weather?\u001b[0m\u001b[0m\n"
"\u001b[38;2;217;95;118mThought: Hello Roberto! How can I assist you today?\u001b[0m\u001b[0m\n"
]
},
{
@ -231,18 +230,18 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: Answer: Hello Roberto! How can I assist you today with the weather?\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: Answer: Hello Roberto! How can I assist you today?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mHello Roberto! How can I assist you today with the weather?\u001b[0m\u001b[0m\n"
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mHello Roberto! How can I assist you today?\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello Roberto! How can I assist you today with the weather?'"
"'Hello Roberto! How can I assist you today?'"
]
},
"execution_count": 7,
@ -251,7 +250,7 @@
}
],
"source": [
"AIAgent.run(\"Hi my name is Roberto\")"
"await AIAgent.run(\"Hi my name is Roberto\")"
]
},
{
@ -263,7 +262,7 @@
"data": {
"text/plain": [
"[{'role': 'user', 'content': 'Hi my name is Roberto'},\n",
" {'content': 'Hello Roberto! How can I assist you today with the weather?',\n",
" {'content': 'Hello Roberto! How can I assist you today?',\n",
" 'role': 'assistant'}]"
]
},
@ -285,7 +284,6 @@
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.base:Pre-filled prompt template with variables: dict_keys(['chat_history'])\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
@ -308,11 +306,7 @@
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'Virginia'}\n",
"INFO:dapr_agents.tool.executor:Attempting to execute tool: GetWeather\n",
"INFO:dapr_agents.tool.executor:Tool 'GetWeather' executed successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Thought:I will need to gather the current weather information for both Virginia and Washington, D.C. by using the GetWeather tool.\n",
"Action:{'name': 'GetWeather', 'arguments': {'location': 'Virginia'}}\n",
"Observation:Virginia: 74F.\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
@ -321,9 +315,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: I will need to gather the current weather information for both Virginia and Washington, D.C. by using the GetWeather tool.\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: I need to get the current weather information for Virginia, New York, and Washington DC. I will fetch the data for each location separately. Let's start with Virginia.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"Virginia\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: Virginia: 74F.\u001b[0m\u001b[0m\n"
"\u001b[38;2;146;94;130mObservation: Virginia: 77F.\u001b[0m\u001b[0m\n"
]
},
{
@ -333,11 +327,7 @@
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'New York'}\n",
"INFO:dapr_agents.tool.executor:Attempting to execute tool: GetWeather\n",
"INFO:dapr_agents.tool.executor:Tool 'GetWeather' executed successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Thought:\n",
"Action:{'name': 'GetWeather', 'arguments': {'location': 'New York'}}\n",
"Observation:New York: 65F.\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 3/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
@ -346,9 +336,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: \u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: I now have the weather information for Virginia. Next, I will get the weather information for New York.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"New York\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: New York: 65F.\u001b[0m\u001b[0m\n"
"\u001b[38;2;146;94;130mObservation: New York: 68F.\u001b[0m\u001b[0m\n"
]
},
{
@ -358,11 +348,7 @@
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'Washington DC'}\n",
"INFO:dapr_agents.tool.executor:Attempting to execute tool: GetWeather\n",
"INFO:dapr_agents.tool.executor:Tool 'GetWeather' executed successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Thought:\n",
"Action:{'name': 'GetWeather', 'arguments': {'location': 'Washington DC'}}\n",
"Observation:Washington DC: 66F.\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 4/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
@ -371,9 +357,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: \u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: I have the weather information for Virginia and New York. Next, I will get the weather information for Washington DC.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"Washington DC\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: Washington DC: 66F.\u001b[0m\u001b[0m\n"
"\u001b[38;2;146;94;130mObservation: Washington DC: 69F.\u001b[0m\u001b[0m\n"
]
},
{
@ -389,26 +375,26 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: I now have sufficient information to answer the question. \u001b[0m\n",
"\u001b[38;2;217;95;118mThought: I now have the weather information for all requested locations. \u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\n",
"\u001b[38;2;217;95;118mAnswer: The current weather is as follows:\u001b[0m\n",
"\u001b[38;2;217;95;118m- Virginia: 74°F\u001b[0m\n",
"\u001b[38;2;217;95;118m- New York: 65°F\u001b[0m\n",
"\u001b[38;2;217;95;118m- Washington, D.C.: 66°F\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118m- Virginia: 77°F\u001b[0m\n",
"\u001b[38;2;217;95;118m- New York: 68°F\u001b[0m\n",
"\u001b[38;2;217;95;118m- Washington DC: 69°F\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current weather is as follows:\u001b[0m\n",
"\u001b[38;2;147;191;183m- Virginia: 74°F\u001b[0m\n",
"\u001b[38;2;147;191;183m- New York: 65°F\u001b[0m\n",
"\u001b[38;2;147;191;183m- Washington, D.C.: 66°F\u001b[0m\u001b[0m\n"
"\u001b[38;2;147;191;183m- Virginia: 77°F\u001b[0m\n",
"\u001b[38;2;147;191;183m- New York: 68°F\u001b[0m\n",
"\u001b[38;2;147;191;183m- Washington DC: 69°F\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current weather is as follows:\\n- Virginia: 74°F\\n- New York: 65°F\\n- Washington, D.C.: 66°F'"
"'The current weather is as follows:\\n- Virginia: 77°F\\n- New York: 68°F\\n- Washington DC: 69°F'"
]
},
"execution_count": 9,
@ -417,124 +403,8 @@
}
],
"source": [
"AIAgent.run(\"What is the weather in Virgina, New York and Washington DC?\")"
"await AIAgent.run(\"What is the weather in Virgina, New York and Washington DC?\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.base:Pre-filled prompt template with variables: dict_keys(['chat_history'])\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat places did you already help me with the weather?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:No action specified; continuing with further reasoning.\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: You asked about the weather in Virginia, New York, and Washington, D.C., and I provided you with the current temperatures for those locations.\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Agent provided a direct final answer.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: Answer: I helped you with the weather for Virginia, New York, and Washington, D.C.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mI helped you with the weather for Virginia, New York, and Washington, D.C.\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'I helped you with the weather for Virginia, New York, and Washington, D.C.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.run(\"What places did you already help me with the weather?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'user', 'content': 'Hi my name is Roberto'},\n",
" {'content': 'Hello Roberto! How can I assist you today with the weather?',\n",
" 'role': 'assistant'},\n",
" {'role': 'user',\n",
" 'content': 'What is the weather in Virgina, New York and Washington DC?'},\n",
" {'content': 'The current weather is as follows:\\n- Virginia: 74°F\\n- New York: 65°F\\n- Washington, D.C.: 66°F',\n",
" 'role': 'assistant'},\n",
" {'role': 'user',\n",
" 'content': 'What places did you already help me with the weather?'},\n",
" {'content': 'I helped you with the weather for Virginia, New York, and Washington, D.C.',\n",
" 'role': 'assistant'}]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.chat_history"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@ -553,7 +423,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
"version": "3.13.1"
}
},
"nbformat": 4,

View File

@ -542,7 +542,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"metadata": {},
"outputs": [
{
@ -840,7 +840,7 @@
],
"source": [
"prompt = \"Get information about a user with ID da48bd32-94bd-4263-b23a-5b9820a67fab\"\n",
"AIAgent.run(prompt)"
"await AIAgent.run(prompt)"
]
},
{

View File

@ -114,7 +114,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [
{
@ -156,7 +156,7 @@
}
],
"source": [
"weather_agent.run(\"what will be the difference of temperature in Paris between 7 days ago and 7 from now?\")"
"await weather_agent.run(\"what will be the difference of temperature in Paris between 7 days ago and 7 from now?\")"
]
},
{
@ -188,7 +188,7 @@
"metadata": {},
"outputs": [],
"source": [
"weather_agent.run(\"What was the weather like in Paris two days ago?\")"
"await weather_agent.run(\"What was the weather like in Paris two days ago?\")"
]
},
{

View File

@ -113,7 +113,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"metadata": {},
"outputs": [
{
@ -155,12 +155,12 @@
}
],
"source": [
"weather_agent.run(\"what is the weather in Paris?\")"
"await weather_agent.run(\"what is the weather in Paris?\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"metadata": {},
"outputs": [
{
@ -229,7 +229,7 @@
}
],
"source": [
"weather_agent.run(\"what was the weather like in Paris two days ago?\")"
"await weather_agent.run(\"what was the weather like in Paris two days ago?\")"
]
},
{

View File

@ -109,7 +109,7 @@
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content='One famous dog is Lassie, a fictional Rough Collie known from movies, television series, and books for her intelligence and bravery.', role='assistant'), logprobs=None)], created=1741085078, id='chatcmpl-B7K3KbzErY3CMSoknZyDUSAN52xzL', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 27, 'prompt_tokens': 12, 'total_tokens': 39, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content='One famous dog is Lassie, the fictional Rough Collie from the \"Lassie\" television series and movies. Lassie is known for her intelligence, loyalty, and the ability to help her human companions out of tricky situations.', role='assistant'), logprobs=None)], created=1743846818, id='chatcmpl-BIuVWArM8Lzqug16s43O9M8BLaFkZ', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 48, 'prompt_tokens': 12, 'total_tokens': 60, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
]
},
"execution_count": 4,
@ -135,7 +135,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': 'One famous dog is Lassie, a fictional Rough Collie known from movies, television series, and books for her intelligence and bravery.', 'role': 'assistant'}\n"
"{'content': 'One famous dog is Lassie, the fictional Rough Collie from the \"Lassie\" television series and movies. Lassie is known for her intelligence, loyalty, and the ability to help her human companions out of tricky situations.', 'role': 'assistant'}\n"
]
}
],
@ -189,7 +189,7 @@
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content=\"I am an AI assistant and don't have a personal name, but you can call me Assistant.\", role='assistant'), logprobs=None)], created=1741085084, id='chatcmpl-B7K3QXh8FWH8odMdwUI61eXieb0zk', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 19, 'prompt_tokens': 39, 'total_tokens': 58, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content=\"I am an AI assistant and don't have a personal name, but you can call me Assistant.\", role='assistant'), logprobs=None)], created=1743846828, id='chatcmpl-BIuVgBC6I3w1TFn15pmuCBGu6VZQM', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 20, 'prompt_tokens': 39, 'total_tokens': 59, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
]
},
"execution_count": 8,
@ -278,7 +278,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
"version": "3.13.1"
}
},
"nbformat": 4,

View File

@ -0,0 +1,88 @@
# 🧪 Basic MCP Agent Playground
This demo shows how to use a **lightweight agent** to call tools served via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction). The agent uses a simple pattern from `dapr_agents` — but **without running inside Dapr**.
It's a minimal, Python-based setup for:
- Exploring how MCP tools work
- Testing stdio and SSE transport
- Running tool-calling agents (like ToolCallingAgent or ReActAgent)
- Experimenting **without** durable workflows or Dapr dependencies
> 🧠 Looking for something more robust?
> Check out the full `dapr_agents` repo to see how we run these agents inside Dapr workflows with durable task orchestration and state management.
---
## 🛠️ Project Structure
```text
.
├── tools.py # Registers two tools via FastMCP
├── server.py # Starts the MCP server in stdio or SSE mode
├── stdio.ipynb # Example using ToolCallingAgent over stdio
├── sse.ipynb # Example using ToolCallingAgent over SSE
├── requirements.txt
└── README.md
```
## Installation
Before running anything, make sure to install the dependencies:
```bash
pip install -r requirements.txt
```
## 🚀 Starting the MCP Tool Server
The server exposes two tools via MCP:
* `get_weather(location: str) → str`
* `jump(distance: str) → str`
Defined in `tools.py`, these tools are registered using FastMCP.
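For reference, the full registrations in `tools.py` (added in this commit) look like this:
```python
from mcp.server.fastmcp import FastMCP
import random

mcp = FastMCP("TestServer")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    # A random temperature stands in for a real weather lookup
    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."

@mcp.tool()
async def jump(distance: str) -> str:
    """Simulate a jump of a given distance."""
    return f"I jumped the following distance: {distance}"
```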
You can run the server in two modes:
### ▶️ 1. STDIO Mode
This runs inside the notebook. It's useful for quick tests because the MCP server doesn't need to be running in a separate terminal.
* This is used in `stdio.ipynb`
* The agent communicates with the tool server via stdio transport
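Concretely, `stdio.ipynb` spawns the server as a subprocess and loads its tools like this:
```python
from dapr_agents.tool.mcp.client import MCPClient

client = MCPClient()

# Launch server.py as a subprocess and talk to it over stdio
await client.connect_stdio(
    server_name="local",
    command="python",
    args=["server.py", "--server_type", "stdio"]
)

tools = client.get_all_tools()
print("🔧 Tools:", [t.name for t in tools])
```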
### 🌐 2. SSE Mode (Starlette + Uvicorn)
This mode requires running the server outside the notebook (in a terminal).
```bash
python server.py --server_type sse --host 127.0.0.1 --port 8000
```
The server exposes:
* `/sse` for the SSE connection
* `/messages/` to receive tool calls
Used by `sse.ipynb`
📌 You can change the host and port using `--host` and `--port`.
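With the server running in a terminal, `sse.ipynb` connects like this:
```python
from dapr_agents.tool.mcp.client import MCPClient

client = MCPClient()

# Connect to the SSE endpoint exposed by server.py
await client.connect_sse(
    server_name="local",              # unique name you assign to this server
    url="http://localhost:8000/sse",  # MCP SSE endpoint
    headers=None                      # optional HTTP headers if needed
)

tools = client.get_all_tools()
```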
## 📓 Notebooks
There are two notebooks in this repo that show basic agent behavior using MCP tools:
| Notebook | Description | Transport |
| --- | --- | --- |
| `stdio.ipynb` | Uses `ToolCallingAgent` via `mcp.run("stdio")` | STDIO |
| `sse.ipynb` | Uses `ToolCallingAgent` with an SSE tool server | SSE |
Each notebook runs a basic `ToolCallingAgent` using tools served via MCP. These agents are not managed by Dapr or durable workflows; they run as pure Python with async support.
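In both cases the loaded tools plug straight into an agent; because tool execution is now async-first, `run()` is awaited:
```python
from dapr_agents import Agent

# MCP tools are exposed to the agent as regular AgentTool instances
agent = Agent(
    name="Rob",
    role="Weather Assistant",
    tools=tools
)

await agent.run("What is the weather in New York?")
```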
## 🔄 What's Next?
After testing these lightweight agents, you can try:
* Running the full dapr_agents workflow system
* Registering more complex MCP tools
* Using other agent types (e.g., ReActAgent, AssistantAgent)
* Testing stateful, durable workflows using Dapr + MCP tools

View File

@ -0,0 +1,4 @@
dapr-agents
python-dotenv
mcp
starlette

View File

@ -0,0 +1,65 @@
import argparse
import logging
import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Mount, Route
from mcp.server.sse import SseServerTransport
from tools import mcp
# ─────────────────────────────────────────────
# Logging Configuration
# ─────────────────────────────────────────────
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("mcp-server")
# ─────────────────────────────────────────────
# Starlette App Factory
# ─────────────────────────────────────────────
def create_starlette_app():
"""
Create a Starlette app wired with the MCP server over SSE transport.
"""
logger.debug("Creating Starlette app with SSE transport")
sse = SseServerTransport("/messages/")
async def handle_sse(request: Request) -> None:
logger.info("🔌 SSE connection established")
async with sse.connect_sse(request.scope, request.receive, request._send) as (read_stream, write_stream):
logger.debug("Starting MCP server run loop over SSE")
await mcp._mcp_server.run(read_stream, write_stream, mcp._mcp_server.create_initialization_options())
logger.debug("MCP run loop completed")
return Starlette(
debug=False,
routes=[
Route("/sse", endpoint=handle_sse),
Mount("/messages/", app=sse.handle_post_message)
]
)
# ─────────────────────────────────────────────
# CLI Entrypoint
# ─────────────────────────────────────────────
def main():
parser = argparse.ArgumentParser(description="Run an MCP tool server.")
parser.add_argument("--server_type", choices=["stdio", "sse"], default="stdio", help="Transport to use")
parser.add_argument("--host", default="127.0.0.1", help="Host to bind to (SSE only)")
parser.add_argument("--port", type=int, default=8000, help="Port to bind to (SSE only)")
args = parser.parse_args()
logger.info(f"🚀 Starting MCP server in {args.server_type.upper()} mode")
if args.server_type == "stdio":
mcp.run("stdio")
else:
app = create_starlette_app()
logger.info(f"🌐 Running SSE server on {args.host}:{args.port}")
uvicorn.run(app, host=args.host, port=args.port)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,298 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Basic Weather Agent with MCP Support (SSE Transport)\n",
"\n",
"* Collaborator: Roberto Rodriguez @Cyb3rWard0g"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv mcp starlette"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # take environment variables from .env."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Connect to MCP Server and Get Tools"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.tool.mcp.client:Connecting to MCP server 'local' via SSE: http://localhost:8000/sse\n",
"INFO:mcp.client.sse:Connecting to SSE endpoint: http://localhost:8000/sse\n",
"INFO:httpx:HTTP Request: GET http://localhost:8000/sse \"HTTP/1.1 200 OK\"\n",
"INFO:mcp.client.sse:Received endpoint URL: http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc\n",
"INFO:mcp.client.sse:Starting post writer with endpoint URL: http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 2 tools from server 'local'\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 0 prompts from server 'local': \n",
"INFO:dapr_agents.tool.mcp.client:Successfully connected to MCP server 'local'\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"🔧 Tools: ['LocalGetWeather', 'LocalJump']\n"
]
}
],
"source": [
"from dapr_agents.tool.mcp.client import MCPClient\n",
"\n",
"client = MCPClient()\n",
"\n",
"await client.connect_sse(\n",
" server_name=\"local\", # Unique name you assign to this server\n",
" url=\"http://localhost:8000/sse\", # MCP SSE endpoint\n",
" headers=None # Optional HTTP headers if needed\n",
")\n",
"\n",
"# See what tools were loaded\n",
"tools = client.get_all_tools()\n",
"print(\"🔧 Tools:\", [t.name for t in tools])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalGetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalJump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
]
}
],
"source": [
"from dapr_agents import Agent\n",
"\n",
"agent = Agent(\n",
" name=\"Rob\",\n",
" role= \"Weather Assistant\",\n",
" tools=tools\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in New York?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.toolcall.base:Executing LocalGetWeather with arguments {\"location\":\"New York\"}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): LocalGetWeather\n",
"INFO:dapr_agents.tool.mcp.client:[MCP] Executing tool 'get_weather' with args: {'location': 'New York'}\n",
"INFO:mcp.client.sse:Connecting to SSE endpoint: http://localhost:8000/sse\n",
"INFO:httpx:HTTP Request: GET http://localhost:8000/sse \"HTTP/1.1 200 OK\"\n",
"INFO:mcp.client.sse:Received endpoint URL: http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b\n",
"INFO:mcp.client.sse:Starting post writer with endpoint URL: http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b \"HTTP/1.1 202 Accepted\"\n",
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118massistant:\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mFunction name: LocalGetWeather (Call Id: call_lBVZIV7seOsWttLnfZaLSwS3)\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mArguments: {\"location\":\"New York\"}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mLocalGetWeather(tool) (Id: call_lBVZIV7seOsWttLnfZaLSwS3):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mNew York: 65F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current weather in New York is 65°F. If you need more information, feel free to ask!\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current weather in New York is 65°F. If you need more information, feel free to ask!'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await agent.run(\"What is the weather in New York?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,296 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Basic Weather Agent with MCP Support (Stdio Transport)\n",
"\n",
"* Collaborator: Roberto Rodriguez @Cyb3rWard0g"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv mcp starlette"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # take environment variables from .env."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Connect to MCP Server and Get Tools"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.tool.mcp.client:Connecting to MCP server 'local' via stdio: python ['server.py', '--server_type', 'stdio']\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 2 tools from server 'local'\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 0 prompts from server 'local': \n",
"INFO:dapr_agents.tool.mcp.client:Successfully connected to MCP server 'local'\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"🔧 Tools: ['LocalGetWeather', 'LocalJump']\n"
]
}
],
"source": [
"from dapr_agents.tool.mcp.client import MCPClient\n",
"\n",
"client = MCPClient()\n",
"\n",
"# Connect to your test server\n",
"await client.connect_stdio(\n",
" server_name=\"local\",\n",
" command=\"python\",\n",
" args=[\"server.py\", \"--server_type\", \"stdio\"]\n",
")\n",
"\n",
"# Test tools\n",
"tools = client.get_all_tools()\n",
"print(\"🔧 Tools:\", [t.name for t in tools])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalGetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalJump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
]
}
],
"source": [
"from dapr_agents import Agent\n",
"\n",
"agent = Agent(\n",
" name=\"Rob\",\n",
" role= \"Weather Assistant\",\n",
" tools=tools\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in New York?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.toolcall.base:Executing LocalGetWeather with arguments {\"location\":\"New York\"}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): LocalGetWeather\n",
"INFO:dapr_agents.tool.mcp.client:[MCP] Executing tool 'get_weather' with args: {'location': 'New York'}\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118massistant:\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mFunction name: LocalGetWeather (Call Id: call_l8KuS39PvriksogjGN71rzCm)\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mArguments: {\"location\":\"New York\"}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;191;69;126mLocalGetWeather(tool) (Id: call_l8KuS39PvriksogjGN71rzCm):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mNew York: 60F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current temperature in New York is 60°F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current temperature in New York is 60°F.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await agent.run(\"What is the weather in New York?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -0,0 +1,15 @@
from mcp.server.fastmcp import FastMCP
import random
mcp = FastMCP("TestServer")
@mcp.tool()
async def get_weather(location: str) -> str:
"""Get weather information for a specific location."""
temperature = random.randint(60, 80)
return f"{location}: {temperature}F."
@mcp.tool()
async def jump(distance: str) -> str:
"""Simulate a jump of a given distance."""
return f"I jumped the following distance: {distance}"

View File

@ -0,0 +1,146 @@
# MCP Agent with Dapr Workflows
This demo shows how to run an AI agent inside a Dapr Workflow, calling tools exposed via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction).
Unlike the lightweight notebook-based examples, this setup runs a full Dapr agent using:
✅ Durable task orchestration with Dapr Workflows
✅ Tools served via MCP (stdio or SSE)
✅ Full integration with the Dapr ecosystem
## 🛠️ Project Structure
```text
.
├── app.py # Main entrypoint: runs a Dapr Agent and workflow on port 8001
├── tools.py # MCP tool definitions (get_weather, jump)
├── server.py # Starlette-based SSE server
├── client.py           # Script to send an HTTP request to the agent on port 8001
├── components/ # Dapr pubsub + state components (Redis, etc.)
├── requirements.txt
└── README.md
```
## 📦 Installation
Install dependencies:
```bash
pip install -r requirements.txt
```
Make sure you have Dapr installed and initialized:
```bash
dapr init
```
## 🧰 MCP Tool Server
Your agent will call tools defined in `tools.py`, served via FastMCP:
```python
@mcp.tool()
async def get_weather(location: str) -> str:
...
@mcp.tool()
async def jump(distance: str) -> str:
...
```
These tools can be served in one of two modes:
### STDIO Mode (local execution)
No external server needed — the agent runs the MCP server in-process.
✅ Best for internal experiments or testing
🚫 Not supported for agents that rely on external workflows (e.g., Dapr orchestration)
### SSE Mode (recommended for Dapr workflows)
In this demo, we run the MCP server as a separate Starlette + Uvicorn app:
```bash
python server.py --server_type sse --host 127.0.0.1 --port 8000
```
This exposes:
* `/sse` for the SSE stream
* `/messages/` for tool execution
Used by the Dapr agent in this repo.
## 🚀 Running the Dapr Agent
Start the MCP server in SSE mode:
```bash
python server.py --server_type sse --port 8000
```
Then in a separate terminal, run the agent workflow:
```bash
dapr run --app-id weatherappmcp --resources-path components/ -- python app.py
```
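The agent wiring lives in `app.py` (shown in full later in this diff); abridged, its `async main()` loads the MCP tools over SSE and exposes an `AssistantAgent` as a FastAPI service on port 8001:
```python
from dapr_agents import AssistantAgent
from dapr_agents.tool.mcp import MCPClient

# Load MCP tools from the SSE server started above
client = MCPClient()
await client.connect_sse("local", url="http://localhost:8000/sse")
tools = client.get_all_tools()

weather_agent = AssistantAgent(
    role="Weather Assistant",
    name="Stevie",
    goal="Help humans get weather and location info using smart tools.",
    tools=tools,
    message_bus_name="messagepubsub",
    state_store_name="workflowstatestore",
    state_key="workflow_state",
    agents_registry_store_name="agentstatestore",
    agents_registry_key="agents_registry",
).as_service(port=8001)

# Start the FastAPI agent service
await weather_agent.start()
```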
Once the agent is ready, run the `client.py` script to send a message to it.
```bash
python3 client.py
```
You will see the state of the agent in a JSON file in the same directory.
```json
{
"instances": {
"e098e5b85d544c84a26250be80316152": {
"input": "What is the weather in New York?",
"output": "The current temperature in New York, USA, is 66\u00b0F.",
"start_time": "2025-04-05T05:37:50.496005",
"end_time": "2025-04-05T05:37:52.501630",
"messages": [
{
"id": "e8ccc9d2-1674-47cc-afd2-8e68b91ff791",
"role": "user",
"content": "What is the weather in New York?",
"timestamp": "2025-04-05T05:37:50.516572",
"name": null
},
{
"id": "47b8db93-558c-46ed-80bb-8cb599c4272b",
"role": "assistant",
"content": "The current temperature in New York, USA, is 66\u00b0F.",
"timestamp": "2025-04-05T05:37:52.499945",
"name": null
}
],
"last_message": {
"id": "47b8db93-558c-46ed-80bb-8cb599c4272b",
"role": "assistant",
"content": "The current temperature in New York, USA, is 66\u00b0F.",
"timestamp": "2025-04-05T05:37:52.499945",
"name": null
},
"tool_history": [
{
"content": "New York, USA: 66F.",
"role": "tool",
"tool_call_id": "call_LTDMHvt05e1tvbWBe0kVvnUM",
"id": "2c1535fe-c43a-42c1-be7e-25c71b43c32e",
"function_name": "LocalGetWeather",
"function_args": "{\"location\":\"New York, USA\"}",
"timestamp": "2025-04-05T05:37:51.609087"
}
],
"source": null,
"source_workflow_instance_id": null
}
}
}
```

View File

@ -0,0 +1,44 @@
import asyncio
import logging
from dotenv import load_dotenv
from dapr_agents import AssistantAgent
from dapr_agents.tool.mcp import MCPClient
async def main():
try:
# Load MCP tools from server (stdio or sse)
client = MCPClient()
await client.connect_sse("local", url="http://localhost:8000/sse")
# Convert MCP tools to AgentTool list
tools = client.get_all_tools()
# Create the Weather Agent using those tools
weather_agent = AssistantAgent(
role="Weather Assistant",
name="Stevie",
goal="Help humans get weather and location info using smart tools.",
instructions=[
"Respond clearly and helpfully to weather-related questions.",
"Use tools when appropriate to fetch or simulate weather data.",
"You may sometimes jump after answering the weather question.",
],
tools=tools,
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
).as_service(port=8001)
# Start the FastAPI agent service
await weather_agent.start()
except Exception as e:
logging.exception("Error starting weather agent service", exc_info=e)
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,57 @@
#!/usr/bin/env python3
import requests
import time
import sys
if __name__ == "__main__":
status_url = "http://localhost:8001/status"
healthy = False
for attempt in range(1, 11):
try:
print(f"Attempt {attempt}...")
response = requests.get(status_url, timeout=5)
if response.status_code == 200:
print("Workflow app is healthy!")
healthy = True
break
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
        print("Waiting 5 seconds before next health check attempt...")
time.sleep(5)
if not healthy:
print("Workflow app is not healthy!")
sys.exit(1)
workflow_url = "http://localhost:8001/start-workflow"
task_payload = {"task": "What is the weather in New York?"}
for attempt in range(1, 11):
try:
print(f"Attempt {attempt}...")
response = requests.post(workflow_url, json=task_payload, timeout=5)
if response.status_code == 202:
print("Workflow started successfully!")
sys.exit(0)
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
        print("Waiting 1 second before next attempt...")
time.sleep(1)
print(f"Maximum attempts (10) reached without success.")
print("Failed to get successful response")
sys.exit(1)

View File

@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagepubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -0,0 +1,16 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: agentstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: keyPrefix
value: none
- name: actorStateStore
value: "true"

View File

@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: workflowstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -0,0 +1,4 @@
dapr-agents
python-dotenv
mcp
starlette

View File

@ -0,0 +1,65 @@
import argparse
import logging
import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Mount, Route
from mcp.server.sse import SseServerTransport
from tools import mcp
# ─────────────────────────────────────────────
# Logging Configuration
# ─────────────────────────────────────────────
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("mcp-server")
# ─────────────────────────────────────────────
# Starlette App Factory
# ─────────────────────────────────────────────
def create_starlette_app():
"""
Create a Starlette app wired with the MCP server over SSE transport.
"""
logger.debug("Creating Starlette app with SSE transport")
sse = SseServerTransport("/messages/")
async def handle_sse(request: Request) -> None:
logger.info("🔌 SSE connection established")
async with sse.connect_sse(request.scope, request.receive, request._send) as (read_stream, write_stream):
logger.debug("Starting MCP server run loop over SSE")
await mcp._mcp_server.run(read_stream, write_stream, mcp._mcp_server.create_initialization_options())
logger.debug("MCP run loop completed")
return Starlette(
debug=False,
routes=[
Route("/sse", endpoint=handle_sse),
Mount("/messages/", app=sse.handle_post_message)
]
)
# ─────────────────────────────────────────────
# CLI Entrypoint
# ─────────────────────────────────────────────
def main():
parser = argparse.ArgumentParser(description="Run an MCP tool server.")
parser.add_argument("--server_type", choices=["stdio", "sse"], default="stdio", help="Transport to use")
parser.add_argument("--host", default="127.0.0.1", help="Host to bind to (SSE only)")
parser.add_argument("--port", type=int, default=8000, help="Port to bind to (SSE only)")
args = parser.parse_args()
logger.info(f"🚀 Starting MCP server in {args.server_type.upper()} mode")
if args.server_type == "stdio":
mcp.run("stdio")
else:
app = create_starlette_app()
logger.info(f"🌐 Running SSE server on {args.host}:{args.port}")
uvicorn.run(app, host=args.host, port=args.port)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,15 @@
from mcp.server.fastmcp import FastMCP
import random
mcp = FastMCP("TestServer")
@mcp.tool()
async def get_weather(location: str) -> str:
"""Get weather information for a specific location."""
temperature = random.randint(60, 80)
return f"{location}: {temperature}F."
@mcp.tool()
async def jump(distance: str) -> str:
"""Simulate a jump of a given distance."""
return f"I jumped the following distance: {distance}"

View File

@ -0,0 +1,35 @@
from dapr_agents import AssistantAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Create the Weather Agent using those tools
weather_agent = AssistantAgent(
role="Weather Assistant",
name="Stevie",
goal="Help humans get weather and location info using smart tools.",
instructions=[
"Respond clearly and helpfully to weather-related questions.",
"Use tools when appropriate to fetch or simulate weather data.",
"You may sometimes jump after answering the weather question.",
],
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
).as_service(port=8001)
# Start the FastAPI agent service
await weather_agent.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,57 @@
#!/usr/bin/env python3
import requests
import time
import sys
if __name__ == "__main__":
status_url = "http://localhost:8001/status"
healthy = False
for attempt in range(1, 11):
try:
print(f"Attempt {attempt}...")
response = requests.get(status_url, timeout=5)
if response.status_code == 200:
print("Workflow app is healthy!")
healthy = True
break
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
print("Waiting 5 seconds before next health check attempt...")
time.sleep(5)
if not healthy:
print("Workflow app is not healthy!")
sys.exit(1)
workflow_url = "http://localhost:8001/start-workflow"
task_payload = {"task": "What is the weather in New York?"}
for attempt in range(1, 11):
try:
print(f"Attempt {attempt}...")
response = requests.post(workflow_url, json=task_payload, timeout=5)
if response.status_code == 202:
print("Workflow started successfully!")
sys.exit(0)
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
print("Waiting 1 second before next attempt...")
time.sleep(1)
print(f"Maximum attempts (10) reached without success.")
print("Failed to get successful response")
sys.exit(1)

View File

@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagepubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -0,0 +1,16 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: agentstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: keyPrefix
value: none
- name: actorStateStore
value: "true"

View File

@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: workflowstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -98,7 +98,8 @@ class AgentActorBase(Actor, AgentActorInterface):
try:
# Run the task if provided, or fallback to agent.run() if no task
result = self.agent.run(task) if task else self.agent.run()
task_input = task or None
result = await self.agent.run(task_input)
# Update the task entry with the result and mark as COMPLETE
task_entry.output = result

View File

@ -1,10 +1,16 @@
from dapr_agents.types import AgentError, AssistantMessage, ChatCompletion, FunctionCall
import json
import logging
import textwrap
from datetime import datetime
import regex
from pydantic import ConfigDict, Field
from typing import Any, Callable, Dict, List, Literal, Optional, Tuple, Union
from dapr_agents.agent import AgentBase
from dapr_agents.tool import AgentTool
from typing import List, Dict, Any, Union, Callable, Literal, Optional, Tuple
from datetime import datetime
from pydantic import Field, ConfigDict
import regex, json, textwrap, logging
from dapr_agents.types import AgentError, AssistantMessage, ChatCompletion
logger = logging.getLogger(__name__)
@ -104,32 +110,29 @@ class ReActAgent(AgentBase):
return "\n\n".join(prompt_parts)
def run(self, input_data: Optional[Union[str, Dict[str, Any]]] = None) -> Any:
async def run(self, input_data: Optional[Union[str, Dict[str, Any]]] = None) -> Any:
"""
Runs the main logic loop for processing the task and executing actions until a result is reached.
Runs the agent in a ReAct-style loop until it generates a final answer or reaches max iterations.
Args:
input_data (Optional[Union[str, Dict[str, Any]]]): The task or data for the agent to process. If None, relies on memory.
input_data (Optional[Union[str, Dict[str, Any]]]): Initial task or message input.
Returns:
Any: Final response after processing the task or reaching a final answer.
Any: The agent's final answer.
Raises:
AgentError: On errors during chat message processing or action execution.
AgentError: If LLM fails or tool execution encounters issues.
"""
logger.debug(f"Agent run started with input: {input_data if input_data else 'Using memory context'}")
logger.debug(f"Agent run started with input: {input_data or 'Using memory context'}")
# Format messages; construct_messages already includes chat history.
messages = self.construct_messages(input_data or {})
# Get Last User Message
user_message = self.get_last_user_message(messages)
if input_data:
# Add the new user message to memory only if input_data is provided
if user_message: # Ensure a user message exists before adding to memory
self.memory.add_message(user_message)
# Add the new user message to memory only if input_data is provided and user message exists.
if input_data and user_message:
self.memory.add_message(user_message)
# Always print the last user message for context, even if no input_data is provided
if user_message:
self.text_formatter.print_message(user_message)
@ -139,7 +142,7 @@ class ReActAgent(AgentBase):
# Initialize react_loop for iterative reasoning
react_loop = ""
for iteration in range(self.max_iterations):
logger.info(f"Iteration {iteration + 1}/{self.max_iterations} started.")
@ -162,103 +165,123 @@ class ReActAgent(AgentBase):
iteration_messages[-1]["content"] += f"\n{react_loop}" # Append react_loop to the last message
try:
response: ChatCompletion = self.llm.generate(messages=iteration_messages, stop=self.stop_at_token)
response: ChatCompletion = self.llm.generate(
messages=iteration_messages, stop=self.stop_at_token
)
# Parse response into thought, action, and potential final answer
thought_action, action, final_answer = self.parse_response(response)
# Print Thought immediately
self.text_formatter.print_react_part("Thought", thought_action)
if final_answer: # Direct final answer provided
assistant_final_message = AssistantMessage(final_answer)
self.memory.add_message(assistant_final_message)
if final_answer:
assistant_final = AssistantMessage(final_answer)
self.memory.add_message(assistant_final)
self.text_formatter.print_separator()
self.text_formatter.print_message(assistant_final_message, include_separator=False)
self.text_formatter.print_message(assistant_final, include_separator=False)
logger.info("Agent provided a direct final answer.")
return final_answer
# If there's no action, update the loop and continue reasoning
if action is None:
if not action:
logger.info("No action specified; continuing with further reasoning.")
react_loop += f"Thought:{thought_action}\n"
continue # Proceed to the next iteration
action_name = action["name"]
action_args = action["arguments"]
# Print Action
self.text_formatter.print_react_part("Action", json.dumps(action))
if action_name in available_tools:
logger.info(f"Executing {action_name} with arguments {action_args}")
function_call = FunctionCall(**action)
execution_results = self.tool_executor.execute(action_name, **function_call.arguments_dict)
# Print Observation
self.text_formatter.print_react_part("Observation", execution_results)
# Update react_loop with the current execution
new_content = f"Thought:{thought_action}\nAction:{action}\nObservation:{execution_results}"
react_loop += new_content
logger.info(new_content)
else:
if action_name not in available_tools:
raise AgentError(f"Unknown tool specified: {action_name}")
logger.info(f"Executing {action_name} with arguments {action_args}")
result = await self.tool_executor.run_tool(action_name, **action_args)
# Print Observation
self.text_formatter.print_react_part("Observation", result)
react_loop += f"Thought:{thought_action}\nAction:{json.dumps(action)}\nObservation:{result}\n"
except Exception as e:
logger.error(f"Failed during chat generation: {e}")
raise AgentError(f"Failed during chat generation: {e}") from e
logger.info("Max iterations completed. Agent has stopped.")
logger.error(f"Error during ReAct agent loop: {e}")
raise AgentError(f"ReActAgent failed: {e}") from e
logger.info("Max iterations reached. Agent has stopped.")
def parse_response(self, response: ChatCompletion) -> Tuple[str, Optional[dict], Optional[str]]:
"""
Extracts the thought, action, and final answer (if present) from the language model response.
Parses a ReAct-style LLM response into a Thought, optional Action (JSON blob), and optional Final Answer.
Args:
response (ChatCompletion): The language model's response message.
response (ChatCompletion): The LLM response object containing the message content.
Returns:
tuple: (thought content, action dictionary if present, final answer if present)
Tuple[str, Optional[dict], Optional[str]]:
- Thought string.
- Parsed Action dictionary, if present.
- Final Answer string, if present.
"""
pattern = r'\{(?:[^{}]|(?R))*\}' # Recursive pattern to match nested JSON blobs
content = response.get_content()
# Compile reusable regex patterns
action_split_regex = regex.compile(r'action:\s*', flags=regex.IGNORECASE)
final_answer_regex = regex.compile(r'answer:\s*(.*)', flags=regex.IGNORECASE | regex.DOTALL)
thought_label_regex = regex.compile(r'thought:\s*', flags=regex.IGNORECASE)
# Strip leading "Thought:" labels (they get repeated a lot)
content = thought_label_regex.sub('', content).strip()
# Check if there's a final answer present
if final_match := final_answer_regex.search(content):
final_answer = final_match.group(1).strip()
logger.debug(f"[parse_response] Final answer detected: {final_answer}")
return content, None, final_answer
# Split on first "Action:" to separate Thought and Action
if action_split_regex.search(content):
thought_part, action_block = action_split_regex.split(content, 1)
thought_part = thought_part.strip()
logger.debug(f"[parse_response] Thought extracted: {thought_part}")
logger.debug(f"[parse_response] Action block to parse: {action_block.strip()}")
else:
logger.debug(f"[parse_response] No action or answer found. Returning content as Thought: {content}")
return content, None, None
# Attempt to extract the first valid JSON blob from the action block
for match in regex.finditer(pattern, action_block, flags=regex.DOTALL):
try:
action_dict = json.loads(match.group())
logger.debug(f"[parse_response] Successfully parsed action: {action_dict}")
return thought_part, action_dict, None
except json.JSONDecodeError as e:
logger.debug(f"[parse_response] Failed to parse action JSON blob: {match.group()}{e}")
continue
logger.debug(f"[parse_response] No valid action JSON found. Returning Thought only.")
return thought_part, None, None
async def run_tool(self, tool_name: str, *args, **kwargs) -> Any:
"""
Executes a tool by name, resolving async or sync tools automatically.
Args:
tool_name (str): The name of the registered tool.
*args: Positional arguments.
**kwargs: Keyword arguments.
Returns:
Any: The tool result.
Raises:
ValueError: If the action details cannot be decoded from the response.
AgentError: If execution fails.
"""
pattern = r'\{(?:[^{}]|(?R))*\}' # Pattern to match JSON blobs
message_content = response.get_content()
# Use regex to find the start of "Action" or "Final Answer" (case insensitive)
action_split_regex = regex.compile(r'(?i)action:\s*', regex.IGNORECASE)
final_answer_regex = regex.compile(r'(?i)answer:\s*(.*)', regex.IGNORECASE | regex.DOTALL)
thought_label_regex = regex.compile(r'(?i)thought:\s*', regex.IGNORECASE)
# Clean up any repeated or prefixed "Thought:" labels
message_content = thought_label_regex.sub('', message_content).strip()
# Check for "Final Answer" directly in the thought
final_answer_match = final_answer_regex.search(message_content)
if final_answer_match:
final_answer = final_answer_match.group(1).strip() if final_answer_match.group(1) else None
return message_content, None, final_answer
# Split the content into "thought" and "action" parts
if action_split_regex.search(message_content):
parts = action_split_regex.split(message_content, 1)
thought_part = parts[0].strip() # Everything before "Action" is the thought part
action_part = parts[1] if len(parts) > 1 else None # Everything after "Action" is the action part
else:
thought_part = message_content
action_part = None
# If there's an action part, attempt to extract the JSON blob
if action_part:
matches = regex.finditer(pattern, action_part, regex.DOTALL)
for match in matches:
try:
action_dict = json.loads(match.group())
return thought_part, action_dict, None # Return thought and action directly
except json.JSONDecodeError:
continue
# If no action is found, just return the thought part with None for action and final answer
return thought_part, None, None
try:
return await self.tool_executor.run_tool(tool_name, *args, **kwargs)
except Exception as e:
logger.error(f"Failed to run tool '{tool_name}' via ReActAgent: {e}")
raise AgentError(f"Error running tool '{tool_name}': {e}") from e

View File

@ -27,95 +27,115 @@ class ToolCallAgent(AgentBase):
# Proceed with base model setup
super().model_post_init(__context)
def run(self, input_data: Optional[Union[str, Dict[str, Any]]] = None) -> Any:
async def run(self, input_data: Optional[Union[str, Dict[str, Any]]] = None) -> Any:
"""
Executes the agent's main task using the provided input or memory context.
Asynchronously executes the agent's main task using the provided input or memory context.
Args:
input_data (Optional[Union[str, Dict[str, Any]]]): User's input, either as a string, a dictionary, or `None` to use memory context.
input_data (Optional[Union[str, Dict[str, Any]]]): User input as string or dict.
Returns:
Any: The agent's response after processing the input.
Any: The agent's final output.
Raises:
AgentError: If the input data is invalid or if a user message is missing.
AgentError: If user input is invalid or tool execution fails.
"""
logger.debug(f"Agent run started with input: {input_data if input_data else 'Using memory context'}")
# Format messages; construct_messages already includes chat history.
messages = self.construct_messages(input_data or {})
# Get Last User Message
user_message = self.get_last_user_message(messages)
if input_data:
# Add the new user message to memory only if input_data is provided
if user_message: # Ensure a user message exists before adding to memory
self.memory.add_message(user_message)
if input_data and user_message:
# Add the new user message to memory only if input_data is provided and user message exists
self.memory.add_message(user_message)
# Always print the last user message for context, even if no input_data is provided
if user_message:
self.text_formatter.print_message(user_message)
# Process conversation iterations
return self.process_iterations(messages)
def process_response(self, tool_calls: List[dict]) -> None:
return await self.process_iterations(messages)
async def process_response(self, tool_calls: List[dict]) -> None:
"""
Execute tool calls and log their results in the tool history.
Asynchronously executes tool calls and appends tool results to memory.
Args:
tool_calls (List[dict]): Definitions of tool calls from the response.
tool_calls (List[dict]): Tool calls returned by the LLM.
Raises:
AgentError: If an error occurs during tool execution.
AgentError: If a tool execution fails.
"""
for tool in tool_calls:
function_name = tool.function.name
try:
logger.info(f"Executing {function_name} with arguments {tool.function.arguments}")
result = self.tool_executor.execute(function_name, **tool.function.arguments_dict)
result = await self.tool_executor.run_tool(function_name, **tool.function.arguments_dict)
tool_message = ToolMessage(tool_call_id=tool.id, name=function_name, content=str(result))
self.text_formatter.print_message(tool_message)
self.tool_history.append(tool_message)
except Exception as e:
logger.error(f"Error executing tool {function_name}: {e}")
raise AgentError(f"Error executing tool '{function_name}': {e}") from e
def process_iterations(self, messages: List[Dict[str, Any]]) -> Any:
async def process_iterations(self, messages: List[Dict[str, Any]]) -> Any:
"""
Processes conversation iterations, invoking tool calls as needed.
Iteratively drives the agent conversation until a final answer or max iterations.
Args:
messages (List[Dict[str, Any]]): Initial conversation messages.
Returns:
Any: The final response content after processing all iterations.
Any: The final assistant message.
Raises:
AgentError: If an error occurs during chat generation or if maximum iterations are reached.
AgentError: On chat failure or tool issues.
"""
for iteration in range(self.max_iterations):
logger.info(f"Iteration {iteration + 1}/{self.max_iterations} started.")
messages += self.tool_history
try:
response: ChatCompletion = self.llm.generate(messages=messages, tools=self.tools, tool_choice=self.tool_choice)
response: ChatCompletion = self.llm.generate(
messages=messages,
tools=self.tools,
tool_choice=self.tool_choice,
)
response_message = response.get_message()
self.text_formatter.print_message(response_message)
if response.get_reason() == "tool_calls":
self.tool_history.append(response_message)
self.process_response(response.get_tool_calls())
await self.process_response(response.get_tool_calls())
else:
self.memory.add_message(AssistantMessage(response.get_content()))
self.tool_history.clear()
return response.get_content()
except Exception as e:
logger.error(f"Error during chat generation: {e}")
raise AgentError(f"Failed during chat generation: {e}") from e
logger.info("Max iterations reached. Agent has stopped.")
logger.info("Max iterations reached. Agent has stopped.")
async def run_tool(self, tool_name: str, *args, **kwargs) -> Any:
"""
Executes a registered tool by name, automatically handling sync or async tools.
Args:
tool_name (str): Name of the tool to run.
*args: Positional arguments passed to the tool.
**kwargs: Keyword arguments passed to the tool.
Returns:
Any: Result from the tool execution.
Raises:
AgentError: If the tool is not found or execution fails.
"""
try:
return await self.tool_executor.run_tool(tool_name, *args, **kwargs)
except Exception as e:
logger.error(f"Agent failed to run tool '{tool_name}': {e}")
raise AgentError(f"Failed to run tool '{tool_name}': {e}") from e

View File

@ -1,16 +1,18 @@
from dapr_agents.tool.utils.function_calling import to_function_call_definition
from pydantic import BaseModel, Field, ValidationError, model_validator
from typing import Callable, Type, Optional, Any, Dict
from dapr_agents.tool.utils.tool import ToolHelper
from inspect import signature, Parameter
from dapr_agents.types import ToolError
import inspect
import logging
from typing import Callable, Type, Optional, Any, Dict
from inspect import signature, Parameter
from pydantic import BaseModel, Field, ValidationError, model_validator, PrivateAttr
from dapr_agents.tool.utils.tool import ToolHelper
from dapr_agents.tool.utils.function_calling import to_function_call_definition
from dapr_agents.types import ToolError
logger = logging.getLogger(__name__)
class AgentTool(BaseModel):
"""
Base class for agent tools, structuring both class-based and function-based tools.
Base class for agent tools, supporting both synchronous and asynchronous execution.
Attributes:
name (str): The tool's name.
@ -23,23 +25,18 @@ class AgentTool(BaseModel):
args_model: Optional[Type[BaseModel]] = Field(None, description="Pydantic model for validating tool arguments.")
func: Optional[Callable] = Field(None, description="Optional function implementing the tool's behavior.")
_is_async: bool = PrivateAttr(default=False)
@model_validator(mode='before')
@classmethod
def set_name_and_description(cls, values: dict) -> dict:
"""
Validator to dynamically set `name` and `description` before Pydantic validation.
This ensures that the `name` is formatted and derived either from the class or from the `func`.
Args:
values (dict): A dictionary of field values before validation.
Returns:
dict: Updated field values after processing.
Validator to dynamically set `name` and `description` before validation.
"""
func = values.get('func')
func = values.get("func")
if func:
values['name'] = values.get('name', func.__name__)
values['description'] = func.__doc__
values.setdefault("name", func.__name__)
values.setdefault("description", func.__doc__ or "")
return values
@classmethod
@ -64,10 +61,10 @@ class AgentTool(BaseModel):
self.name = self.name.replace(' ', '_').title().replace('_', '')
if self.func:
self._is_async = inspect.iscoroutinefunction(self.func)
self._initialize_from_func(self.func)
else:
self._initialize_from_run()
return super().model_post_init(__context)
def _initialize_from_func(self, func: Callable) -> None:
@ -79,70 +76,69 @@ class AgentTool(BaseModel):
"""Initialize Tool fields based on the abstract `_run` method."""
if self.args_model is None:
self.args_model = ToolHelper.infer_func_schema(self._run)
def _run(self, *args, **kwargs) -> Any:
"""Provide default implementation of _run to support function-based tools."""
if self.func:
return self.func(*args, **kwargs)
raise NotImplementedError("No function or _run method defined for this tool.")
def run(self, *args, **kwargs) -> Any:
def _validate_and_prepare_args(self, func: Callable, *args, **kwargs) -> Dict[str, Any]:
"""
Executes the tool by running either the class-based `_run` method or the function-based `func`.
Normalize and validate arguments for the given function.
Args:
*args: Positional arguments for the tool.
**kwargs: Keyword arguments for the tool.
func (Callable): The function whose signature is used.
*args: Positional arguments.
**kwargs: Keyword arguments.
Returns:
Any: The result of executing the tool.
Dict[str, Any]: Validated and prepared arguments.
Raises:
ValidationError: If argument validation fails.
ToolError: If any other error occurs during execution.
"""
try:
if self.func:
return self._run_function(*args, **kwargs)
return self._run_method(*args, **kwargs)
except ValidationError as e:
self._log_and_raise_error(e)
except Exception as e:
self._log_and_raise_error(e)
def _run_method(self, *args, **kwargs) -> Any:
"""Validates and executes the class-based `_run` method."""
return self._execute_with_signature(self._run, *args, **kwargs)
def _run_function(self, *args, **kwargs) -> Any:
"""Validates and executes the provided function `func`."""
return self._execute_with_signature(self.func, *args, **kwargs)
def _execute_with_signature(self, func: Callable, *args, **kwargs) -> Any:
"""
Validates and executes a function (either class-based or function-based) using its signature.
Args:
func (Callable): The function to execute.
*args: Positional arguments for the function.
**kwargs: Keyword arguments for the function.
Returns:
Any: The result of executing the function.
ToolError: If argument validation fails.
"""
sig = signature(func)
# Update kwargs with args by mapping positional arguments to parameter names
if args:
arg_names = list(sig.parameters.keys())
kwargs.update(dict(zip(arg_names, args)))
# Validate and execute the function
if self.args_model:
validated_args = self.args_model(**kwargs) # Validate keyword arguments
kwargs = validated_args.model_dump() # Convert validated model back to dict
try:
validated_args = self.args_model(**kwargs)
return validated_args.model_dump()
except ValidationError as ve:
logger.debug(f"Validation failed for tool '{self.name}': {ve}")
raise ToolError(f"Validation error in tool '{self.name}': {ve}") from ve
return func(**kwargs)
return kwargs
def run(self, *args, **kwargs) -> Any:
"""
Execute the tool synchronously.
Raises:
ToolError if the tool is async or execution fails.
"""
if self._is_async:
raise ToolError(f"Tool '{self.name}' is async and must be awaited. Use `await tool.arun(...)` instead.")
try:
func = self.func or self._run
kwargs = self._validate_and_prepare_args(func, *args, **kwargs)
return func(**kwargs)
except Exception as e:
self._log_and_raise_error(e)
async def arun(self, *args, **kwargs) -> Any:
"""
Execute the tool asynchronously (whether it's sync or async under the hood).
"""
try:
func = self.func or self._run
kwargs = self._validate_and_prepare_args(func, *args, **kwargs)
return await func(**kwargs) if self._is_async else func(**kwargs)
except Exception as e:
self._log_and_raise_error(e)
def _run(self, *args, **kwargs) -> Any:
"""Fallback default run logic if no `func` is set."""
if self.func:
return self.func(*args, **kwargs)
raise NotImplementedError("No function or _run method defined for this tool.")
def _log_and_raise_error(self, error: Exception) -> None:
"""Log the error and raise a ToolError."""
@ -150,7 +146,14 @@ class AgentTool(BaseModel):
raise ToolError(f"An error occurred during the execution of tool '{self.name}': {str(error)}")
def __call__(self, *args, **kwargs) -> Any:
"""Allow the AgentTool instance to be called like a regular function."""
"""
Enables `tool(...)` syntax.
Raises:
ToolError: if async tool is called without `await`.
"""
if self._is_async:
raise ToolError(f"Tool '{self.name}' is async and must be awaited. Use `await tool.arun(...)`.")
return self.run(*args, **kwargs)
def to_function_call(self, format_type: str = 'openai', use_deprecated: bool = False) -> Dict:
@ -175,9 +178,9 @@ class AgentTool(BaseModel):
"""Returns a JSON-serializable dictionary of the tool's function args_model."""
if self.args_model:
schema = self.args_model.model_json_schema()
for property_details in schema.get('properties', {}).values():
property_details.pop('title', None)
return schema["properties"]
for prop in schema.get("properties", {}).values():
prop.pop("title", None)
return schema.get("properties", {})
return {}
@property
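The run/arun split is the core behavior change here; a quick sketch of both paths:

import asyncio
from dapr_agents.tool import AgentTool
from dapr_agents.types import ToolError

def jump(distance: str) -> str:
    """Jump a specific distance."""
    return f"I jumped {distance}"

async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    return f"{location}: 72F."

sync_tool = AgentTool(func=jump)          # _is_async = False
async_tool = AgentTool(func=get_weather)  # _is_async = True

print(sync_tool.run(distance="2 feet"))                  # sync tools run directly
print(asyncio.run(async_tool.arun(location="Boston")))   # arun handles both kinds

try:
    async_tool.run(location="Boston")     # sync call to an async tool
except ToolError as e:
    print(e)                              # "...must be awaited. Use `await tool.arun(...)`"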

View File

@ -1,84 +1,60 @@
from dapr_agents.types import AgentToolExecutorError, ToolError
import logging
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field, PrivateAttr
from dapr_agents.tool import AgentTool
from typing import Any, Dict, List
from rich.table import Table
from rich.console import Console
import logging
from dapr_agents.tool import AgentTool
from dapr_agents.types import AgentToolExecutorError, ToolError
logger = logging.getLogger(__name__)
class AgentToolExecutor(BaseModel):
"""
Manages the registration and execution of tools, providing efficient access and validation
for tool instances by name.
"""
Manages the registration and execution of tools, providing both sync and async interfaces.
Attributes:
tools (List[AgentTool]): List of tools to register and manage.
"""
tools: List[AgentTool] = Field(default_factory=list, description="List of tools to register and manage.")
_tools_map: Dict[str, AgentTool] = PrivateAttr(default_factory=dict)
def model_post_init(self, __context: Any) -> None:
"""
Registers each tool after model initialization, populating `_tools_map`.
"""
if self.tools: # Only register tools if the list is not empty
for tool in self.tools:
self.register_tool(tool)
logger.info(f"Tool Executor initialized with {len(self.tools)} registered tools.")
else:
logger.info("Tool Executor initialized with no tools to register.")
# Complete post-initialization
"""Initializes the internal tools map after model creation."""
for tool in self.tools:
self.register_tool(tool)
logger.info(f"Tool Executor initialized with {len(self._tools_map)} tool(s).")
super().model_post_init(__context)
def register_tool(self, tool: AgentTool) -> None:
"""
Registers a tool, ensuring no duplicate names.
Registers a tool instance, ensuring no duplicate names.
Args:
tool (AgentTool): The tool instance to register.
tool (AgentTool): The tool to register.
Raises:
AgentToolExecutorError: If a tool with the same name is already registered.
AgentToolExecutorError: If the tool name is already registered.
"""
if tool.name in self._tools_map:
logger.error(f"Attempt to register duplicate tool: {tool.name}")
logger.error(f"Attempted to register duplicate tool: {tool.name}")
raise AgentToolExecutorError(f"Tool '{tool.name}' is already registered.")
self._tools_map[tool.name] = tool
logger.info(f"Tool registered: {tool.name}")
def execute(self, tool_name: str, *args, **kwargs) -> Any:
def get_tool(self, tool_name: str) -> Optional[AgentTool]:
"""
Executes a specified tool by name, passing any arguments.
Retrieves a tool by name.
Args:
tool_name (str): Name of the tool to execute.
*args: Positional arguments for tool execution.
**kwargs: Keyword arguments for tool execution.
tool_name (str): Name of the tool to retrieve.
Returns:
Any: Result from tool execution.
Raises:
AgentToolExecutorError: If tool not found or if an execution error occurs.
AgentTool or None if not found.
"""
tool = self._tools_map.get(tool_name)
if not tool:
logger.error(f"Tool not found: {tool_name}")
raise AgentToolExecutorError(f"Tool '{tool_name}' not found.")
try:
logger.info(f"Executing tool: {tool_name}")
logger.debug(f"Tool Arguments: {kwargs}")
result = tool(*args, **kwargs)
logger.info(f"Tool '{tool_name}' executed successfully.")
return result
except ToolError as e:
logger.error(f"Error executing tool '{tool_name}': {e}")
raise AgentToolExecutorError(f"Execution error in tool '{tool_name}': {e}") from e
except Exception as e:
logger.error(f"Unexpected error executing tool '{tool_name}': {e}")
raise AgentToolExecutorError(f"Unexpected execution error in tool '{tool_name}': {e}") from e
return self._tools_map.get(tool_name)
def get_tool_names(self) -> List[str]:
"""
Lists all registered tool names.
@ -108,12 +84,41 @@ class AgentToolExecutor(BaseModel):
f"{tool.name}: {tool.description}. Args schema: {tool.args_schema}"
for tool in self._tools_map.values()
)
async def run_tool(self, tool_name: str, *args, **kwargs) -> Any:
"""
Executes a tool by name, automatically handling both sync and async tools.
Args:
tool_name (str): Tool name to execute.
*args: Positional arguments.
**kwargs: Keyword arguments.
Returns:
Any: Result of tool execution.
Raises:
AgentToolExecutorError: If the tool is not found or execution fails.
"""
tool = self.get_tool(tool_name)
if not tool:
logger.error(f"Tool not found: {tool_name}")
raise AgentToolExecutorError(f"Tool '{tool_name}' not found.")
try:
logger.info(f"Running tool (auto): {tool_name}")
if tool._is_async:
return await tool.arun(*args, **kwargs)
return tool(*args, **kwargs)
except ToolError as e:
logger.error(f"Tool execution error in '{tool_name}': {e}")
raise AgentToolExecutorError(str(e)) from e
except Exception as e:
logger.error(f"Unexpected error in '{tool_name}': {e}")
raise AgentToolExecutorError(f"Unexpected error in tool '{tool_name}': {e}") from e
@property
def help(self) -> None:
"""
Displays a tabular view of all registered tools with descriptions and signatures.
"""
"""Displays a rich-formatted table of registered tools."""
table = Table(title="Available Tools")
table.add_column("Name", style="bold cyan")
table.add_column("Description")

View File

@ -0,0 +1,4 @@
from .client import MCPClient, create_sync_mcp_client
from .transport import connect_stdio, connect_sse
from .schema import create_pydantic_model_from_schema
from .prompt import convert_prompt_message

View File

@ -0,0 +1,592 @@
from contextlib import AsyncExitStack, asynccontextmanager
from typing import Dict, List, Optional, Set, Any, Type, AsyncGenerator
from types import TracebackType
import asyncio
import logging
from pydantic import BaseModel, Field, PrivateAttr
from mcp import ClientSession
from mcp.types import Tool as MCPTool, Prompt
from dapr_agents.tool import AgentTool
from dapr_agents.types import ToolError
logger = logging.getLogger(__name__)
class MCPClient(BaseModel):
"""
Client for connecting to MCP servers and integrating their tools with the Dapr agent framework.
This client manages connections to one or more MCP servers, retrieves their tools,
and converts them to native AgentTool objects that can be used in the agent framework.
Attributes:
allowed_tools: Optional set of tool names to include (when None, all tools are included)
server_timeout: Timeout in seconds for server connections
sse_read_timeout: Read timeout for SSE connections in seconds
"""
allowed_tools: Optional[Set[str]] = Field(default=None, description="Optional set of tool names to include (when None, all tools are included)")
server_timeout: float = Field(default=5.0, description="Timeout in seconds for server connections")
sse_read_timeout: float = Field(default=300.0, description="Read timeout for SSE connections in seconds")
# Private attributes not exposed in model schema
_exit_stack: AsyncExitStack = PrivateAttr(default_factory=AsyncExitStack)
_sessions: Dict[str, ClientSession] = PrivateAttr(default_factory=dict)
_server_tools: Dict[str, List[AgentTool]] = PrivateAttr(default_factory=dict)
_server_prompts: Dict[str, Dict[str, Prompt]] = PrivateAttr(default_factory=dict)
_task_locals: Dict[str, Any] = PrivateAttr(default_factory=dict)
_server_configs: Dict[str, Dict[str, Any]] = PrivateAttr(default_factory=dict)
def model_post_init(self, __context: Any) -> None:
"""Initialize the client after the model is created."""
logger.debug("Initializing MCP client")
super().model_post_init(__context)
@asynccontextmanager
async def create_session(self, server_name: str) -> AsyncGenerator[ClientSession, None]:
"""
Create an ephemeral session for the given server and yield it.
Used during tool execution to avoid reuse issues.
Args:
server_name: The server to create a session for.
Yields:
A short-lived, initialized MCP session.
"""
logger.debug(f"[MCP] Creating ephemeral session for server '{server_name}'")
session = await self._create_ephemeral_session(server_name)
try:
yield session
finally:
# Session cleanup is managed by AsyncExitStack (via transport module)
pass
async def connect_stdio(
self,
server_name: str,
command: str,
args: List[str],
env: Optional[Dict[str, str]] = None
) -> None:
"""
Connect to an MCP server using stdio transport and store the connection
metadata for future dynamic reconnection if needed.
Args:
server_name (str): Unique identifier for this server connection.
command (str): Executable to run.
args (List[str]): Command-line arguments.
env (Optional[Dict[str, str]]): Environment variables for the process.
Raises:
RuntimeError: If a server with the same name is already connected.
Exception: If connection setup or initialization fails.
"""
logger.info(f"Connecting to MCP server '{server_name}' via stdio: {command} {args}")
if server_name in self._sessions:
raise RuntimeError(f"Server '{server_name}' is already connected")
try:
self._task_locals[server_name] = asyncio.current_task()
from dapr_agents.tool.mcp.transport import connect_stdio as transport_connect_stdio
session = await transport_connect_stdio(
command=command,
args=args,
env=env,
stack=self._exit_stack
)
await session.initialize()
self._sessions[server_name] = session
# Store how to reconnect this server later
self._server_configs[server_name] = {
"type": "stdio",
"params": {"command": command, "args": args, "env": env},
}
logger.debug(f"Initialized session for server '{server_name}', loading tools and prompts")
await self._load_tools_from_session(server_name, session)
await self._load_prompts_from_session(server_name, session)
logger.info(f"Successfully connected to MCP server '{server_name}'")
except Exception as e:
logger.error(f"Failed to connect to MCP server '{server_name}': {str(e)}")
self._sessions.pop(server_name, None)
self._task_locals.pop(server_name, None)
self._server_configs.pop(server_name, None)
raise
async def connect_sse(
self,
server_name: str,
url: str,
headers: Optional[Dict[str, str]] = None
) -> None:
"""
Connect to an MCP server using Server-Sent Events (SSE) transport and store
the connection metadata for future dynamic reconnection if needed.
Args:
server_name (str): Unique identifier for this server connection.
url (str): The SSE endpoint URL.
headers (Optional[Dict[str, str]]): HTTP headers to include with the request.
Raises:
RuntimeError: If a server with the same name is already connected.
Exception: If connection setup or initialization fails.
"""
logger.info(f"Connecting to MCP server '{server_name}' via SSE: {url}")
if server_name in self._sessions:
raise RuntimeError(f"Server '{server_name}' is already connected")
try:
self._task_locals[server_name] = asyncio.current_task()
from dapr_agents.tool.mcp.transport import connect_sse as transport_connect_sse
session = await transport_connect_sse(
url=url,
headers=headers,
timeout=self.server_timeout,
read_timeout=self.sse_read_timeout,
stack=self._exit_stack
)
await session.initialize()
self._sessions[server_name] = session
# Store how to reconnect this server later
self._server_configs[server_name] = {
"type": "sse",
"params": {
"url": url,
"headers": headers,
"timeout": self.server_timeout,
"read_timeout": self.sse_read_timeout,
},
}
logger.debug(f"Initialized session for server '{server_name}', loading tools and prompts")
await self._load_tools_from_session(server_name, session)
await self._load_prompts_from_session(server_name, session)
logger.info(f"Successfully connected to MCP server '{server_name}'")
except Exception as e:
logger.error(f"Failed to connect to MCP server '{server_name}': {str(e)}")
self._sessions.pop(server_name, None)
self._task_locals.pop(server_name, None)
self._server_configs.pop(server_name, None)
raise
async def _load_tools_from_session(self, server_name: str, session: ClientSession) -> None:
"""
Load tools from a given MCP session and convert them to AgentTools.
Args:
server_name: Unique identifier for this server
session: The MCP client session
"""
logger.debug(f"Loading tools from server '{server_name}'")
try:
# Get tools from the server
tools_response = await session.list_tools()
# Convert MCP tools to agent tools
converted_tools = []
for mcp_tool in tools_response.tools:
# Skip tools not in allowed_tools if filtering is enabled
if self.allowed_tools and mcp_tool.name not in self.allowed_tools:
logger.debug(f"Skipping tool '{mcp_tool.name}' (not in allowed_tools)")
continue
try:
agent_tool = await self.wrap_mcp_tool(server_name, mcp_tool)
converted_tools.append(agent_tool)
except Exception as e:
logger.warning(f"Failed to convert tool '{mcp_tool.name}': {str(e)}")
self._server_tools[server_name] = converted_tools
logger.info(f"Loaded {len(converted_tools)} tools from server '{server_name}'")
except Exception as e:
logger.warning(f"Failed to load tools from server '{server_name}': {str(e)}")
self._server_tools[server_name] = []
async def _load_prompts_from_session(self, server_name: str, session: ClientSession) -> None:
"""
Load prompts from a given MCP session.
Args:
server_name: Unique identifier for this server
session: The MCP client session
"""
logger.debug(f"Loading prompts from server '{server_name}'")
try:
response = await session.list_prompts()
prompt_dict = {prompt.name: prompt for prompt in response.prompts}
self._server_prompts[server_name] = prompt_dict
loaded = [
f"{p.name} ({len(p.arguments or [])} args)"
for p in response.prompts
]
logger.info(
f"Loaded {len(loaded)} prompts from server '{server_name}': " +
", ".join(loaded)
)
except Exception as e:
logger.warning(f"Failed to load prompts from server '{server_name}': {str(e)}")
self._server_prompts[server_name] = {}
async def _create_ephemeral_session(self, server_name: str) -> ClientSession:
"""
Create a fresh session for a single tool call.
Args:
server_name: The MCP server to connect to.
Returns:
A fully initialized ephemeral ClientSession.
"""
config = self._server_configs.get(server_name)
if not config:
raise ToolError(f"No stored config found for server '{server_name}'")
try:
if config["type"] == "stdio":
from dapr_agents.tool.mcp.transport import connect_stdio
session = await connect_stdio(**config["params"], stack=self._exit_stack)
elif config["type"] == "sse":
from dapr_agents.tool.mcp.transport import connect_sse
session = await connect_sse(**config["params"], stack=self._exit_stack)
else:
raise ToolError(f"Unknown transport type: {config['type']}")
await session.initialize()
return session
except Exception as e:
logger.error(f"Failed to create ephemeral session: {e}")
raise ToolError(f"Could not create session for '{server_name}': {e}") from e
async def wrap_mcp_tool(self, server_name: str, mcp_tool: MCPTool) -> AgentTool:
"""
Wrap an MCPTool as an AgentTool with dynamic session creation at runtime,
based on stored server configuration.
Args:
server_name: The MCP server that registered the tool.
mcp_tool: The MCPTool object describing the tool.
Returns:
An AgentTool instance that can be executed by the agent.
Raises:
ToolError: If the tool cannot be executed or configuration is missing.
"""
tool_name = f"{server_name}_{mcp_tool.name}"
tool_docs = f"{mcp_tool.description or ''} (from MCP server: {server_name})"
logger.debug(f"Wrapping MCP tool: {tool_name}")
def build_executor(client: MCPClient, server_name: str, tool_name: str):
async def executor(**kwargs: Any) -> Any:
"""
Execute the tool using a dynamically created session context.
Args:
kwargs: Input arguments to the tool.
Returns:
Result from the tool execution.
Raises:
ToolError: If execution fails or response is malformed.
"""
logger.info(f"[MCP] Executing tool '{tool_name}' with args: {kwargs}")
try:
async with client.create_session(server_name) as session:
result = await session.call_tool(tool_name, kwargs)
logger.debug(f"[MCP] Received result from tool '{tool_name}'")
return client._process_tool_result(result)
except Exception as e:
logger.exception(f"Execution failed for '{tool_name}'")
raise ToolError(f"Error executing tool '{tool_name}': {str(e)}") from e
return executor
# Build executor using dynamic context-managed session resolution
tool_func = build_executor(self, server_name, mcp_tool.name)
# Optionally generate args model from input schema
args_model = None
if getattr(mcp_tool, "inputSchema", None):
try:
from dapr_agents.tool.mcp.schema import create_pydantic_model_from_schema
args_model = create_pydantic_model_from_schema(
mcp_tool.inputSchema, f"{tool_name}Args"
)
logger.debug(f"Generated argument model for tool '{tool_name}'")
except Exception as e:
logger.warning(f"Failed to create schema for tool '{tool_name}': {str(e)}")
return AgentTool(
name=tool_name,
description=tool_docs,
func=tool_func,
args_model=args_model,
)
def _process_tool_result(self, result: Any) -> Any:
"""
Process the result from an MCP tool call.
Args:
result: The result from calling an MCP tool
Returns:
Processed result in a format expected by AgentTool
Raises:
ToolError: If the result indicates an error
"""
# Handle error result
if hasattr(result, 'isError') and result.isError:
error_message = "Unknown error"
if hasattr(result, 'content') and result.content:
for content in result.content:
if hasattr(content, 'text'):
error_message = content.text
break
raise ToolError(f"MCP tool error: {error_message}")
# Extract text content from result
if hasattr(result, 'content') and result.content:
text_contents = []
for content in result.content:
if hasattr(content, 'text'):
text_contents.append(content.text)
# Return single string if only one content item
if len(text_contents) == 1:
return text_contents[0]
elif text_contents:
return text_contents
# Fallback for unexpected formats
return str(result)
def get_all_tools(self) -> List[AgentTool]:
"""
Get all tools from all connected MCP servers.
Returns:
A list of all available AgentTools from MCP servers
"""
all_tools = []
for server_tools in self._server_tools.values():
all_tools.extend(server_tools)
return all_tools
def get_server_tools(self, server_name: str) -> List[AgentTool]:
"""
Get tools from a specific MCP server.
Args:
server_name: The name of the server to get tools from
Returns:
A list of AgentTools from the specified server
"""
return self._server_tools.get(server_name, [])
def get_server_prompts(self, server_name: str) -> List[Prompt]:
"""
Get all prompt definitions from a specific MCP server.
Args:
server_name: The name of the server to retrieve prompts from
Returns:
A list of Prompt objects available on the specified server.
Returns an empty list if no prompts are available.
"""
return list(self._server_prompts.get(server_name, {}).values())
def get_all_prompts(self) -> Dict[str, List[Prompt]]:
"""
Get all prompt definitions from all connected MCP servers.
Returns:
A dictionary mapping each server name to a list of Prompt objects.
Returns an empty dictionary if no servers are connected.
"""
return {server: list(prompts.values()) for server, prompts in self._server_prompts.items()}
def get_prompt_names(self, server_name: str) -> List[str]:
"""
Get the names of all prompts from a specific MCP server.
Args:
server_name: The name of the server
Returns:
A list of prompt names registered on the server.
"""
return list(self._server_prompts.get(server_name, {}).keys())
def get_all_prompt_names(self) -> Dict[str, List[str]]:
"""
Get prompt names from all connected servers.
Returns:
A dictionary mapping server names to lists of prompt names.
"""
return {server: list(prompts.keys()) for server, prompts in self._server_prompts.items()}
def get_prompt_metadata(self, server_name: str, prompt_name: str) -> Optional[Prompt]:
"""
Retrieve the full metadata for a given prompt from a connected MCP server.
Args:
server_name: The server that registered the prompt
prompt_name: The name of the prompt to retrieve
Returns:
The full Prompt object if available, otherwise None.
"""
return self._server_prompts.get(server_name, {}).get(prompt_name)
def get_prompt_arguments(self, server_name: str, prompt_name: str) -> Optional[List[Dict[str, Any]]]:
"""
Get the list of arguments defined for a prompt, if available.
Useful for generating forms or validating prompt input.
Args:
server_name: The server where the prompt is registered
prompt_name: The name of the prompt to inspect
Returns:
A list of argument definitions, or None if the prompt is not found.
"""
prompt = self.get_prompt_metadata(server_name, prompt_name)
return prompt.arguments if prompt else None
def describe_prompt(self, server_name: str, prompt_name: str) -> Optional[str]:
"""
Retrieve a human-readable description of a specific prompt.
Args:
server_name: The name of the server where the prompt is registered
prompt_name: The name of the prompt to describe
Returns:
The description string if available, otherwise None.
"""
prompt = self.get_prompt_metadata(server_name, prompt_name)
return prompt.description if prompt else None
def get_connected_servers(self) -> List[str]:
"""
Get a list of all connected server names.
Returns:
List of server names that are currently connected
"""
return list(self._sessions.keys())
async def close(self) -> None:
"""
Close all connections to MCP servers and clean up resources.
This method should be called when the client is no longer needed to
ensure proper cleanup of all resources and connections.
"""
logger.info("Closing MCP client and all server connections")
# Verify we're in the same task as the one that created the connections
current_task = asyncio.current_task()
for server_name, server_task in self._task_locals.items():
if server_task != current_task:
logger.warning(
f"Attempting to close server '{server_name}' in a different task "
f"than it was created in. This may cause errors."
)
# Close all connections
try:
await self._exit_stack.aclose()
self._sessions.clear()
self._server_tools.clear()
self._task_locals.clear()
logger.info("MCP client successfully closed")
except Exception as e:
logger.error(f"Error closing MCP client: {str(e)}")
raise
async def __aenter__(self) -> "MCPClient":
"""Context manager entry point."""
return self
async def __aexit__(
self,
exc_type: Optional[Type[BaseException]],
exc_val: Optional[BaseException],
exc_tb: Optional[TracebackType]
) -> None:
"""Context manager exit - close all connections."""
await self.close()
def create_sync_mcp_client(*args, **kwargs) -> MCPClient:
"""
Create an MCPClient with synchronous wrapper methods for each async method.
This allows the client to be used in synchronous code.
Args:
*args: Positional arguments for MCPClient constructor
**kwargs: Keyword arguments for MCPClient constructor
Returns:
An MCPClient with additional sync_* methods
"""
client = MCPClient(*args, **kwargs)
# Add sync versions of async methods
def create_sync_wrapper(async_func):
def sync_wrapper(*args, **kwargs):
try:
asyncio.get_running_loop()
except RuntimeError:
pass  # No running event loop; safe to call asyncio.run()
else:
raise RuntimeError(
f"Cannot call {async_func.__name__} synchronously in an async context. "
f"Use {async_func.__name__} directly instead."
)
return asyncio.run(async_func(*args, **kwargs))
# Copy metadata
sync_wrapper.__name__ = f"sync_{async_func.__name__}"
sync_wrapper.__doc__ = f"Synchronous version of {async_func.__name__}.\n\n{async_func.__doc__}"
return sync_wrapper
# Add sync wrappers for all async methods
client.sync_connect_stdio = create_sync_wrapper(client.connect_stdio)
client.sync_connect_sse = create_sync_wrapper(client.connect_sse)
client.sync_close = create_sync_wrapper(client.close)
return client
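End to end, the client is meant to be used as an async context manager so close() always runs. A sketch against the SSE cookbook server shown earlier in this commit (assumes it is listening on port 8000):

import asyncio
from dapr_agents.tool.mcp import MCPClient

async def main():
    async with MCPClient(allowed_tools={"get_weather"}) as client:
        await client.connect_sse("local", url="http://localhost:8000/sse")
        for tool in client.get_server_tools("local"):
            print(tool.name, "->", tool.description)

asyncio.run(main())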

View File

@ -0,0 +1,74 @@
from typing import Optional, Any, List, Dict
import logging
from mcp import ClientSession
from mcp.types import PromptMessage
from dapr_agents.types import UserMessage, AssistantMessage, BaseMessage
logger = logging.getLogger(__name__)
def convert_prompt_message(message: PromptMessage) -> BaseMessage:
"""
Convert an MCP PromptMessage to a compatible internal BaseMessage.
Args:
message: The MCP PromptMessage instance
Returns:
A compatible BaseMessage subclass (UserMessage or AssistantMessage)
Raises:
ValueError: If the message contains unsupported content type or role
"""
# Verify text content type is supported
if message.content.type != "text":
error_msg = f"Unsupported content type: {message.content.type}"
logger.error(error_msg)
raise ValueError(error_msg)
# Convert based on role
if message.role == "user":
return UserMessage(content=message.content.text)
elif message.role == "assistant":
return AssistantMessage(content=message.content.text)
else:
# Fall back to generic message with role preserved
logger.warning(f"Converting message with non-standard role: {message.role}")
return BaseMessage(content=message.content.text, role=message.role)
async def load_prompt(
session: ClientSession,
prompt_name: str,
arguments: Optional[Dict[str, Any]] = None
) -> List[BaseMessage]:
"""
Fetch and convert a prompt from the MCP server to internal message format.
Args:
session: An initialized MCP client session
prompt_name: The registered prompt name
arguments: Optional dictionary of arguments to format the prompt
Returns:
A list of internal BaseMessage-compatible messages
Raises:
Exception: If prompt retrieval fails
"""
logger.info(f"Loading prompt '{prompt_name}' from MCP server")
try:
# Get prompt from server
response = await session.get_prompt(prompt_name, arguments or {})
# Convert all messages
converted_messages = [convert_prompt_message(m) for m in response.messages]
logger.info(f"Loaded prompt '{prompt_name}' with {len(converted_messages)} messages")
return converted_messages
except Exception as e:
logger.error(f"Failed to load prompt '{prompt_name}': {str(e)}")
raise
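A hedged sketch of pulling a server-side prompt into agent messages, reusing an ephemeral session from the client above (the prompt name "summarize" and its argument are hypothetical):

from dapr_agents.tool.mcp.prompt import load_prompt

async def show_prompt(client):  # client: a connected MCPClient
    async with client.create_session("local") as session:
        messages = await load_prompt(session, "summarize", {"topic": "weather"})
        for message in messages:
            print(type(message).__name__, message.content)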

View File

@ -0,0 +1,104 @@
from typing import Any, Dict, List, Optional, Type, Union, get_args, get_origin
import logging
from pydantic import BaseModel, Field, create_model
logger = logging.getLogger(__name__)
# Mapping from JSON Schema types to Python types
TYPE_MAPPING = {
"string": str,
"number": float,
"integer": int,
"boolean": bool,
"object": dict,
"array": list,
"null": type(None),
}
def create_pydantic_model_from_schema(
schema: Dict[str, Any],
model_name: str
) -> Type[BaseModel]:
"""
Create a Pydantic model from a JSON schema definition.
This function converts a JSON Schema object (commonly used in MCP tool definitions)
to a Pydantic model that can be used for validation in the Dapr agent framework.
Args:
schema: JSON Schema dictionary containing type information
model_name: Name for the generated model class
Returns:
A dynamically created Pydantic model class
Raises:
ValueError: If the schema is invalid or cannot be converted
"""
logger.debug(f"Creating Pydantic model '{model_name}' from schema")
try:
properties = schema.get("properties", {})
required = set(schema.get("required", []))
fields = {}
# Process each property in the schema
for field_name, field_props in properties.items():
# Get field type information
json_type = field_props.get("type", "string")
# Handle complex type definitions (arrays, unions, etc.)
if isinstance(json_type, list):
# Process union types (e.g., ["string", "null"])
has_null = "null" in json_type
non_null_types = [t for t in json_type if t != "null"]
if not non_null_types:
# Only null type specified
field_type = Optional[str]
else:
# Use the first non-null type
# TODO: Proper union type handling would be better but more complex
primary_type = non_null_types[0]
field_type = TYPE_MAPPING.get(primary_type, str)
# Make optional if null is included
if has_null:
field_type = Optional[field_type]
else:
# Simple type
field_type = TYPE_MAPPING.get(json_type, str)
# Handle arrays with item type information
if json_type == "array" or (isinstance(json_type, list) and "array" in json_type):
# Get the items type if specified
if "items" in field_props:
items_type = field_props["items"].get("type", "string")
if isinstance(items_type, str):
item_py_type = TYPE_MAPPING.get(items_type, str)
field_type = List[item_py_type]
# Set default value based on required status
if field_name in required:
default = ... # Required field
else:
default = None
# Make optional if the annotation does not already allow None
if not (get_origin(field_type) is Union and type(None) in get_args(field_type)):
field_type = Optional[field_type]
# Add field with description and default value
field_description = field_props.get("description", "")
fields[field_name] = (
field_type,
Field(default, description=field_description)
)
# Create and return the model class
return create_model(model_name, **fields)
except Exception as e:
logger.error(f"Failed to create model from schema: {str(e)}")
raise ValueError(f"Invalid schema: {str(e)}")

View File

@ -0,0 +1,88 @@
from contextlib import AsyncExitStack
from typing import Optional, Dict
import logging
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.client.sse import sse_client
logger = logging.getLogger(__name__)
async def connect_stdio(
command: str,
args: list[str],
env: Optional[Dict[str, str]],
stack: AsyncExitStack
) -> ClientSession:
"""
Connect to an MCP server using stdio transport.
Args:
command: The executable to run
args: Command line arguments
env: Optional environment variables
stack: AsyncExitStack for resource management
Returns:
An initialized MCP client session
Raises:
Exception: If connection fails
"""
logger.debug(f"Establishing stdio connection: {command} {args}")
# Create server parameters
params = StdioServerParameters(
command=command,
args=args,
env=env
)
# Establish connection via stdio
try:
read_stream, write_stream = await stack.enter_async_context(stdio_client(params))
session = await stack.enter_async_context(ClientSession(read_stream, write_stream))
logger.debug("Stdio connection established successfully")
return session
except Exception as e:
logger.error(f"Failed to establish stdio connection: {str(e)}")
raise
async def connect_sse(
url: str,
headers: Optional[Dict[str, str]],
timeout: float,
read_timeout: float,
stack: AsyncExitStack
) -> ClientSession:
"""
Connect to an MCP server using Server-Sent Events (SSE) transport.
Args:
url: The SSE endpoint URL
headers: Optional HTTP headers
timeout: Connection timeout in seconds
read_timeout: Read timeout in seconds
stack: AsyncExitStack for resource management
Returns:
A connected MCP client session (run the MCP initialize handshake before use)
Raises:
Exception: If connection fails
"""
logger.debug(f"Establishing SSE connection to: {url}")
# Establish connection via SSE
try:
read_stream, write_stream = await stack.enter_async_context(
sse_client(url, headers, timeout, read_timeout)
)
session = await stack.enter_async_context(ClientSession(read_stream, write_stream))
logger.debug("SSE connection established successfully")
return session
except Exception as e:
logger.error(f"Failed to establish SSE connection: {str(e)}")
raise
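A minimal end-to-end sketch of the stdio helper. The import path is a guess, and since the function as shown does not call `session.initialize()` itself, the sketch performs the MCP handshake explicitly:

```python
import asyncio
from contextlib import AsyncExitStack

# Hypothetical import path for the helper defined above.
# from dapr_agents.tool.mcp.transport import connect_stdio

async def main():
    async with AsyncExitStack() as stack:
        session = await connect_stdio(
            command="python",
            args=["./mcp_server.py"],  # any local MCP server script
            env=None,
            stack=stack,
        )
        # Complete the MCP handshake before issuing requests.
        await session.initialize()
        tools = await session.list_tools()
        print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The `AsyncExitStack` keeps the transport and the session open together and tears both down when the stack exits, which is why it is threaded through each connect helper.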

View File

@ -285,8 +285,7 @@ class AssistantAgent(AgentWorkflowBase):
function_args_as_dict = json.loads(function_args) if function_args else {}
# Execute tool function
result = self.tool_executor.execute(function_name, **function_args_as_dict)
result = await self.tool_executor.run_tool(function_name, **function_args_as_dict)
# Construct tool execution message payload
workflow_tool_message = {
"tool_call_id": tool_call.get("id"),

View File

@ -34,17 +34,12 @@ def message_router(
"""
def decorator(f: Callable[..., Any]) -> Callable[..., Any]:
logger.debug(f"Inspecting function '{f.__name__}' for message routing.")
is_workflow = hasattr(f, "_is_workflow")
workflow_name = getattr(f, "_workflow_name", None)
type_hints = get_type_hints(f)
raw_hint = type_hints.get("message", None)
logger.debug(f"Found type hint on 'message': {raw_hint}")
message_models = extract_message_models(raw_hint)
if not message_models:
@ -53,8 +48,8 @@ def message_router(
for model in message_models:
if not is_valid_routable_model(model):
raise TypeError(f"Handler '{f.__name__}' has unsupported message type: {model}")
logger.debug(f"Extracted message models: {[m.__name__ for m in message_models]}")
logger.debug(f"@message_router: '{f.__name__}' => models {[m.__name__ for m in message_models]}")
# Attach metadata for later registration
f._is_message_handler = True
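For context, a sketch of a handler this decorator would route. Routing is driven by the type hint on the `message` parameter; the handler and model names are illustrative, and the example assumes the decorator's routing arguments can all be defaulted:

```python
from pydantic import BaseModel

class TaskUpdate(BaseModel):
    content: str

@message_router()
async def handle_update(message: TaskUpdate):
    # Registered because extract_message_models() resolves the 'message'
    # hint to a routable Pydantic model.
    print(message.content)
```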

View File

@ -6,7 +6,7 @@ from dapr_agents.types import DaprWorkflowContext
from dapr_agents.workflow.decorators import task, workflow
from dapr_agents.workflow.messaging.decorator import message_router
from dapr_agents.workflow.orchestrators.base import OrchestratorWorkflowBase
from dapr_agents.workflow.orchestrators.llm.schemas import BroadcastMessage, TriggerAction, NextStep, AgentTaskResponse, ProgressCheckOutput, PLAN_SCHEMA, NEXT_STEP_SCHEMA, PROGRESS_CHECK_SCHEMA
from dapr_agents.workflow.orchestrators.llm.schemas import BroadcastMessage, TriggerAction, NextStep, AgentTaskResponse, ProgressCheckOutput, schemas
from dapr_agents.workflow.orchestrators.llm.prompts import TASK_INITIAL_PROMPT, TASK_PLANNING_PROMPT, NEXT_STEP_PROMPT, PROGRESS_CHECK_PROMPT, SUMMARY_GENERATION_PROMPT
from dapr_agents.workflow.orchestrators.llm.state import LLMWorkflowState, LLMWorkflowEntry, LLMWorkflowMessage, PlanStep, TaskResult
from dapr_agents.workflow.orchestrators.llm.utils import update_step_statuses, restructure_plan, find_step_in_plan
@ -75,7 +75,7 @@ class LLMOrchestrator(OrchestratorWorkflowBase):
logger.info(f"Initial message from User -> {self.name}")
# Generate the plan using a language model
plan = yield ctx.call_activity(self.generate_plan, input={"task": task, "agents": agents, "plan_schema": PLAN_SCHEMA})
plan = yield ctx.call_activity(self.generate_plan, input={"task": task, "agents": agents, "plan_schema": schemas.plan})
# Prepare initial message with task, agents and plan context
initial_message = yield ctx.call_activity(self.prepare_initial_message, input={"instance_id": instance_id, "task": task, "agents": agents, "plan": plan})
@ -84,7 +84,7 @@ class LLMOrchestrator(OrchestratorWorkflowBase):
yield ctx.call_activity(self.broadcast_message_to_agents, input={"instance_id": instance_id, "task": initial_message})
# Step 4: Identify agent and instruction for the next step
next_step = yield ctx.call_activity(self.generate_next_step, input={"task": task, "agents": agents, "plan": plan, "next_step_schema": NEXT_STEP_SCHEMA})
next_step = yield ctx.call_activity(self.generate_next_step, input={"task": task, "agents": agents, "plan": plan, "next_step_schema": schemas.next_step})
# Extract Additional Properties from NextStep
next_agent = next_step["next_agent"]
@ -122,7 +122,7 @@ class LLMOrchestrator(OrchestratorWorkflowBase):
yield ctx.call_activity(self.update_task_history, input={"instance_id": instance_id, "agent": next_agent, "step": step_id, "substep": substep_id, "results": task_results})
# Step 10: Check progress
progress = yield ctx.call_activity(self.check_progress, input={ "task": task, "plan": plan, "step": step_id, "substep": substep_id, "results": task_results["content"], "progress_check_schema": PROGRESS_CHECK_SCHEMA})
progress = yield ctx.call_activity(self.check_progress, input={ "task": task, "plan": plan, "step": step_id, "substep": substep_id, "results": task_results["content"], "progress_check_schema": schemas.progress_check})
if not ctx.is_replaying:
logger.info(f"Tracking Progress: {progress}")

View File

@ -1,9 +1,11 @@
from functools import cached_property
import json
from typing import List, Optional, Literal
from pydantic import BaseModel, Field
from dapr_agents.workflow.orchestrators.llm.state import PlanStep
from dapr_agents.types.message import BaseMessage
from dapr_agents.llm.utils import StructureHandler
from pydantic import BaseModel, Field
from typing import List, Optional, Literal
import json
class BroadcastMessage(BaseMessage):
"""
@ -50,6 +52,24 @@ class ProgressCheckOutput(BaseModel):
plan_restructure: Optional[List[PlanStep]] = Field(None, description="A list of restructured steps. Only one step should be modified at a time.")
# Schemas used in Prompts
PLAN_SCHEMA = json.dumps(StructureHandler.enforce_strict_json_schema(TaskPlan.model_json_schema())["properties"]["plan"])
PROGRESS_CHECK_SCHEMA = json.dumps(StructureHandler.enforce_strict_json_schema(ProgressCheckOutput.model_json_schema()))
NEXT_STEP_SCHEMA = json.dumps(NextStep.model_json_schema())
class Schemas:
"""Lazily evaluated JSON schemas used in prompt calls."""
@cached_property
def plan(self) -> str:
return json.dumps(
StructureHandler.enforce_strict_json_schema(TaskPlan.model_json_schema())["properties"]["plan"]
)
@cached_property
def progress_check(self) -> str:
return json.dumps(
StructureHandler.enforce_strict_json_schema(ProgressCheckOutput.model_json_schema())
)
@cached_property
def next_step(self) -> str:
return json.dumps(NextStep.model_json_schema())
schemas = Schemas()
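The `cached_property` pattern above moves schema construction from import time to first access, so importing the orchestrator module no longer pays the JSON-schema generation cost up front. A stripped-down illustration of the same behavior:

```python
from functools import cached_property
import json
from pydantic import BaseModel

class Step(BaseModel):
    name: str

class LazySchemas:
    @cached_property
    def step(self) -> str:
        print("building step schema")  # runs only on first access
        return json.dumps(Step.model_json_schema())

lazy = LazySchemas()
_ = lazy.step  # prints "building step schema" and caches the result on the instance
_ = lazy.step  # served from the instance cache; nothing is rebuilt
```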

View File

@ -185,7 +185,7 @@ class WorkflowTask(BaseModel):
"""
logger.info("Invoking Task with AI Agent...")
result = self.agent.run(task=description)
result = await self.agent.run(description)
logger.debug(f"Agent result type: {type(result)}, value: {result}")
return self._convert_result(result)

View File

@ -70,6 +70,7 @@ tools = [get_weather, jump]
2. Then, create the agent in `weather_agent.py`:
```python
import asyncio
from weather_tools import tools
from dapr_agents import Agent
from dotenv import load_dotenv
@ -77,14 +78,22 @@ from dotenv import load_dotenv
load_dotenv()
AIAgent = Agent(
name = "Stevie",
role = "Weather Assistant",
goal = "Assist Humans with weather related tasks.",
instructions = ["Get accurate weather information", "From time to time, you can also jump after answering the weather question."],
name="Stevie",
role="Weather Assistant",
goal="Assist Humans with weather related tasks.",
instructions=[
"Get accurate weather information",
"From time to time, you can also jump after answering the weather question."
],
tools=tools
)
AIAgent.run("What is the weather in Virginia, New York and Washington DC?")
# Wrap your async call
async def main():
await AIAgent.run("What is the weather in Virginia, New York and Washington DC?")
if __name__ == "__main__":
asyncio.run(main())
```
3. Run the weather agent:
@ -150,7 +159,7 @@ print("Chat history after first interaction:")
print(AIAgent.chat_history)
# Second interaction (agent will remember the first one)
AIAgent.run("How about in Seattle?")
await AIAgent.run("How about in Seattle?")
# View updated history
print("Chat history after second interaction:")

View File

@ -1,3 +1,4 @@
import asyncio
from weather_tools import tools
from dapr_agents import Agent
from dotenv import load_dotenv
@ -5,11 +6,19 @@ from dotenv import load_dotenv
load_dotenv()
AIAgent = Agent(
name = "Stevie",
role = "Weather Assistant",
goal = "Assist Humans with weather related tasks.",
instructions = ["Get accurate weather information", "From time to time, you can also jump after answering the weather question."],
name="Stevie",
role="Weather Assistant",
goal="Assist Humans with weather related tasks.",
instructions=[
"Get accurate weather information",
"From time to time, you can also jump after answering the weather question."
],
tools=tools
)
AIAgent.run("What is the weather in Virginia, New York and Washington DC?")
# Wrap your async call
async def main():
await AIAgent.run("What is the weather in Virginia, New York and Washington DC?")
if __name__ == "__main__":
asyncio.run(main())

View File

@ -7,7 +7,8 @@ import logging
async def main():
try:
elf_service = AssistantAgent(
name="Legolas", role="Elf",
name="Legolas",
role="Elf",
goal="Act as a scout, marksman, and protector, using keen senses and deadly accuracy to ensure the success of the journey.",
instructions=[
"Speak like Legolas, with grace, wisdom, and keen observation.",

View File

@ -7,7 +7,8 @@ import logging
async def main():
try:
hobbit_service = AssistantAgent(
name="Frodo", role="Hobbit",
name="Frodo",
role="Hobbit",
goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
instructions=[
"Speak like Frodo, with humility, determination, and a growing sense of resolve.",