{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# OpenAI Tool Calling Agent - Dummy Weather Example\n", "\n", "* Collaborator: Roberto Rodriguez @Cyb3rWard0g" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import Environment Variables" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from dotenv import load_dotenv\n", "load_dotenv() # take environment variables from .env." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Enable Logging" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "import logging\n", "\n", "logging.basicConfig(level=logging.INFO)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define Tools" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from dapr_agents import tool\n", "from pydantic import BaseModel, Field\n", "\n", "class GetWeatherSchema(BaseModel):\n", " location: str = Field(description=\"location to get weather for\")\n", "\n", "@tool(args_model=GetWeatherSchema)\n", "def get_weather(location: str) -> str:\n", " \"\"\"Get weather information for a specific location.\"\"\"\n", " import random\n", " temperature = random.randint(60, 80)\n", " return f\"{location}: {temperature}F.\"\n", "\n", "class JumpSchema(BaseModel):\n", " distance: str = Field(description=\"Distance for agent to jump\")\n", "\n", "@tool(args_model=JumpSchema)\n", "def jump(distance: str) -> str:\n", " \"\"\"Jump a specific distance.\"\"\"\n", " return f\"I jumped the following distance {distance}\"\n", "\n", "tools = [get_weather,jump]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initialize Agent" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n", "INFO:dapr_agents.tool.executor:Tool registered: GetWeather\n", "INFO:dapr_agents.tool.executor:Tool registered: Jump\n", "INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 registered tools.\n", "INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n", "INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n", "INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n" ] } ], "source": [ "from dapr_agents import ReActAgent\n", "\n", "AIAgent = ReActAgent(\n", " name=\"Rob\",\n", " role= \"Weather Assistant\",\n", " tools=tools\n", ")" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ChatPromptTemplate(input_variables=['chat_history'], pre_filled_variables={'name': 'Rob', 'role': 'Weather Assistant', 'goal': 'Help humans'}, messages=[('system', '# Today\\'s date is: March 04, 2025\\n\\n## Name\\nYour name is {{name}}.\\n\\n## Role\\nYour role is {{role}}.\\n\\n## Goal\\n{{goal}}.\\n\\n## Tools\\nYou have access ONLY to the following tools:\\nGetWeather: Get weather information for a specific location.. Args schema: {\\'location\\': {\\'description\\': \\'location to get weather for\\', \\'type\\': \\'string\\'}}\\nJump: Jump a specific distance.. 
Args schema: {\\'distance\\': {\\'description\\': \\'Distance for agent to jump\\', \\'type\\': \\'string\\'}}\\n\\nIf you think about using tool, it must use the correct tool JSON blob format as shown below:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\n\\n## ReAct Format\\nThought: Reflect on the current state of the conversation or task. If additional information is needed, determine if using a tool is necessary. When a tool is required, briefly explain why it is needed for the specific step at hand, and immediately follow this with an `Action:` statement to address that specific requirement. Avoid combining multiple tool requests in a single `Thought`. If no tools are needed, proceed directly to an `Answer:` statement.\\nAction:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\nObservation: Describe the result of the action taken.\\n... (repeat Thought/Action/Observation as needed, but **ALWAYS proceed to a final `Answer:` statement when you have enough information**)\\nThought: I now have sufficient information to answer the initial question.\\nAnswer: ALWAYS proceed to a final `Answer:` statement once enough information is gathered or if the tools do not provide the necessary data.\\n\\n### Providing a Final Answer\\nOnce you have enough information to answer the question OR if tools cannot provide the necessary data, respond using one of the following formats:\\n\\n1. **Direct Answer without Tools**:\\nThought: I can answer directly without using any tools. Answer: Direct answer based on previous interactions or current knowledge.\\n\\n2. **When All Needed Information is Gathered**:\\nThought: I now have sufficient information to answer the question. Answer: Complete final answer here.\\n\\n3. **If Tools Cannot Provide the Needed Information**:\\nThought: The available tools do not provide the necessary information. Answer: Explanation of limitation and relevant information if possible.\\n\\n### Key Guidelines\\n- Always Conclude with an `Answer:` statement.\\n- Ensure every response ends with an `Answer:` statement that summarizes the most recent findings or relevant information, avoiding incomplete thoughts.\\n- Direct Final Answer for Past or Known Information: If the user inquires about past interactions, respond directly with an Answer: based on the information in chat history.\\n- Avoid Repetitive Thought Statements: If the answer is ready, skip repetitive Thought steps and proceed directly to Answer.\\n- Minimize Redundant Steps: Use minimal Thought/Action/Observation cycles to arrive at a final Answer efficiently.\\n- Reference Past Information When Relevant: Use chat history accurately when answering questions about previous responses to avoid redundancy.\\n- Progressively Move Towards Finality: Reflect on the current step and avoid re-evaluating the entire user request each time. 
Aim to advance towards the final Answer in each cycle.\\n\\n## Chat History\\nThe chat history is provided to avoid repeating information and to ensure accurate references when summarizing past interactions.'), MessagePlaceHolder(variable_name=chat_history)], template_format='jinja2')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.prompt_template" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'GetWeather'" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.tools[0].name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run Agent" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dapr_agents.agent.base:Pre-filled prompt template with variables: dict_keys(['chat_history'])\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;242;182;128muser:\u001b[0m\n", "\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mHi my name is Roberto\u001b[0m\u001b[0m\n", "\u001b[0m\u001b[0m\n", "\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n", "\u001b[0m\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:No action specified; continuing with further reasoning.\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: Hello Roberto! How can I assist you today with the weather?\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Agent provided a direct final answer.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: Answer: Hello Roberto! How can I assist you today with the weather?\u001b[0m\u001b[0m\n", "\u001b[0m\u001b[0m\n", "\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n", "\u001b[0m\u001b[0m\u001b[0m\n", "\u001b[38;2;147;191;183massistant:\u001b[0m\n", "\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mHello Roberto! How can I assist you today with the weather?\u001b[0m\u001b[0m\n" ] }, { "data": { "text/plain": [ "'Hello Roberto! How can I assist you today with the weather?'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.run(\"Hi my name is Roberto\")" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'role': 'user', 'content': 'Hi my name is Roberto'},\n", " {'content': 'Hello Roberto! 
How can I assist you today with the weather?',\n", " 'role': 'assistant'}]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.chat_history" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dapr_agents.agent.base:Pre-filled prompt template with variables: dict_keys(['chat_history'])\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;242;182;128muser:\u001b[0m\n", "\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in Virgina, New York and Washington DC?\u001b[0m\u001b[0m\n", "\u001b[0m\u001b[0m\n", "\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n", "\u001b[0m\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'Virginia'}\n", "INFO:dapr_agents.tool.executor:Attempting to execute tool: GetWeather\n", "INFO:dapr_agents.tool.executor:Tool 'GetWeather' executed successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Thought:I will need to gather the current weather information for both Virginia and Washington, D.C. by using the GetWeather tool.\n", "Action:{'name': 'GetWeather', 'arguments': {'location': 'Virginia'}}\n", "Observation:Virginia: 74F.\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: I will need to gather the current weather information for both Virginia and Washington, D.C. 
by using the GetWeather tool.\u001b[0m\u001b[0m\n", "\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"Virginia\"}}\u001b[0m\u001b[0m\n", "\u001b[38;2;146;94;130mObservation: Virginia: 74F.\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'New York'}\n", "INFO:dapr_agents.tool.executor:Attempting to execute tool: GetWeather\n", "INFO:dapr_agents.tool.executor:Tool 'GetWeather' executed successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Thought:\n", "Action:{'name': 'GetWeather', 'arguments': {'location': 'New York'}}\n", "Observation:New York: 65F.\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 3/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: \u001b[0m\u001b[0m\n", "\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"New York\"}}\u001b[0m\u001b[0m\n", "\u001b[38;2;146;94;130mObservation: New York: 65F.\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'Washington DC'}\n", "INFO:dapr_agents.tool.executor:Attempting to execute tool: GetWeather\n", "INFO:dapr_agents.tool.executor:Tool 'GetWeather' executed successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Thought:\n", "Action:{'name': 'GetWeather', 'arguments': {'location': 'Washington DC'}}\n", "Observation:Washington DC: 66F.\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 4/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: \u001b[0m\u001b[0m\n", "\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"Washington DC\"}}\u001b[0m\u001b[0m\n", "\u001b[38;2;146;94;130mObservation: Washington DC: 66F.\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Agent provided a direct final answer.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: I now have sufficient information to answer the question. 
\u001b[0m\n", "\u001b[38;2;217;95;118m\u001b[0m\n", "\u001b[38;2;217;95;118mAnswer: The current weather is as follows:\u001b[0m\n", "\u001b[38;2;217;95;118m- Virginia: 74°F\u001b[0m\n", "\u001b[38;2;217;95;118m- New York: 65°F\u001b[0m\n", "\u001b[38;2;217;95;118m- Washington, D.C.: 66°F\u001b[0m\u001b[0m\n", "\u001b[0m\u001b[0m\n", "\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n", "\u001b[0m\u001b[0m\u001b[0m\n", "\u001b[38;2;147;191;183massistant:\u001b[0m\n", "\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current weather is as follows:\u001b[0m\n", "\u001b[38;2;147;191;183m- Virginia: 74°F\u001b[0m\n", "\u001b[38;2;147;191;183m- New York: 65°F\u001b[0m\n", "\u001b[38;2;147;191;183m- Washington, D.C.: 66°F\u001b[0m\u001b[0m\n" ] }, { "data": { "text/plain": [ "'The current weather is as follows:\\n- Virginia: 74°F\\n- New York: 65°F\\n- Washington, D.C.: 66°F'" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.run(\"What is the weather in Virgina, New York and Washington DC?\")" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:dapr_agents.agent.base:Pre-filled prompt template with variables: dict_keys(['chat_history'])\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;242;182;128muser:\u001b[0m\n", "\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat places did you already help me with the weather?\u001b[0m\u001b[0m\n", "\u001b[0m\u001b[0m\n", "\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n", "\u001b[0m\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:No action specified; continuing with further reasoning.\n", "INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n", "INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: You asked about the weather in Virginia, New York, and Washington, D.C., and I provided you with the current temperatures for those locations.\u001b[0m\u001b[0m\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n", "INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n", "INFO:dapr_agents.agent.patterns.react.base:Agent provided a direct final answer.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[38;2;217;95;118mThought: Answer: I helped you with the weather for Virginia, New York, and Washington, D.C.\u001b[0m\u001b[0m\n", "\u001b[0m\u001b[0m\n", "\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n", "\u001b[0m\u001b[0m\u001b[0m\n", "\u001b[38;2;147;191;183massistant:\u001b[0m\n", "\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mI helped you with the weather for Virginia, New York, and Washington, D.C.\u001b[0m\u001b[0m\n" ] }, { "data": { "text/plain": [ "'I helped 
you with the weather for Virginia, New York, and Washington, D.C.'" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.run(\"What places did you already help me with the weather?\")" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'role': 'user', 'content': 'Hi my name is Roberto'},\n", " {'content': 'Hello Roberto! How can I assist you today with the weather?',\n", " 'role': 'assistant'},\n", " {'role': 'user',\n", " 'content': 'What is the weather in Virgina, New York and Washington DC?'},\n", " {'content': 'The current weather is as follows:\\n- Virginia: 74°F\\n- New York: 65°F\\n- Washington, D.C.: 66°F',\n", " 'role': 'assistant'},\n", " {'role': 'user',\n", " 'content': 'What places did you already help me with the weather?'},\n", " {'content': 'I helped you with the weather for Virginia, New York, and Washington, D.C.',\n", " 'role': 'assistant'}]" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIAgent.chat_history" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.1" } }, "nbformat": 4, "nbformat_minor": 2 }