dapr-agents/cookbook/agents/basic_oai_agent_react.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# OpenAI Tool Calling Agent - Dummy Weather Example\n",
"\n",
"* Collaborator: Roberto Rodriguez @Cyb3rWard0g"
]
},
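{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before running this notebook, make sure the `dapr-agents` and `python-dotenv` packages are installed (for example, `pip install dapr-agents python-dotenv`) and that an OpenAI API key is available in a local `.env` file as `OPENAI_API_KEY`."
]
},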
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # take environment variables from .env."
]
},
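{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, the cell below confirms the key was actually loaded. This is a minimal sketch that assumes the key is stored under `OPENAI_API_KEY`, the variable the OpenAI client reads by default."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Fail fast if the key is missing rather than hitting an auth error later inside the agent.\n",
"assert os.getenv(\"OPENAI_API_KEY\"), \"OPENAI_API_KEY not found; add it to your .env file.\""
]
},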
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define Tools"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import tool\n",
"from pydantic import BaseModel, Field\n",
"\n",
"class GetWeatherSchema(BaseModel):\n",
" location: str = Field(description=\"location to get weather for\")\n",
"\n",
"@tool(args_model=GetWeatherSchema)\n",
"def get_weather(location: str) -> str:\n",
" \"\"\"Get weather information for a specific location.\"\"\"\n",
" import random\n",
" temperature = random.randint(60, 80)\n",
" return f\"{location}: {temperature}F.\"\n",
"\n",
"class JumpSchema(BaseModel):\n",
" distance: str = Field(description=\"Distance for agent to jump\")\n",
"\n",
"@tool(args_model=JumpSchema)\n",
"def jump(distance: str) -> str:\n",
" \"\"\"Jump a specific distance.\"\"\"\n",
" return f\"I jumped the following distance {distance}\"\n",
"\n",
"tools = [get_weather,jump]"
]
},
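{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Pydantic models above double as each tool's argument schema, so they can be exercised on their own before handing the tools to an agent. A minimal sketch with plain Pydantic (the location value is purely illustrative, and `model_dump` assumes Pydantic v2):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Validate example arguments against the schema the agent will use for GetWeather.\n",
"GetWeatherSchema(location=\"Seattle\").model_dump()"
]
},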
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: GetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: Jump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
]
}
],
"source": [
"from dapr_agents import ReActAgent\n",
"\n",
"AIAgent = ReActAgent(\n",
" name=\"Rob\",\n",
" role= \"Weather Assistant\",\n",
" tools=tools\n",
")"
]
},
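{
"cell_type": "markdown",
"metadata": {},
"source": [
"Only `name`, `role`, and `tools` are set here, so the agent keeps the default goal (`Help humans`), which appears in the prompt template inspected below. A minimal sketch of setting the goal explicitly, assuming `goal` is accepted as a keyword argument alongside `name` and `role` (as the pre-filled attributes in the log above suggest):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical variant with an explicit goal; the rest of this notebook keeps using AIAgent.\n",
"weather_agent = ReActAgent(\n",
"    name=\"Rob\",\n",
"    role=\"Weather Assistant\",\n",
"    goal=\"Answer weather questions using the available tools\",\n",
"    tools=tools\n",
")"
]
},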
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['chat_history'], pre_filled_variables={'name': 'Rob', 'role': 'Weather Assistant', 'goal': 'Help humans'}, messages=[('system', '# Today\\'s date is: April 05, 2025\\n\\n## Name\\nYour name is {{name}}.\\n\\n## Role\\nYour role is {{role}}.\\n\\n## Goal\\n{{goal}}.\\n\\n## Tools\\nYou have access ONLY to the following tools:\\nGetWeather: Get weather information for a specific location.. Args schema: {\\'location\\': {\\'description\\': \\'location to get weather for\\', \\'type\\': \\'string\\'}}\\nJump: Jump a specific distance.. Args schema: {\\'distance\\': {\\'description\\': \\'Distance for agent to jump\\', \\'type\\': \\'string\\'}}\\n\\nIf you think about using tool, it must use the correct tool JSON blob format as shown below:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\n\\n## ReAct Format\\nThought: Reflect on the current state of the conversation or task. If additional information is needed, determine if using a tool is necessary. When a tool is required, briefly explain why it is needed for the specific step at hand, and immediately follow this with an `Action:` statement to address that specific requirement. Avoid combining multiple tool requests in a single `Thought`. If no tools are needed, proceed directly to an `Answer:` statement.\\nAction:\\n```\\n{\\n \"name\": $TOOL_NAME,\\n \"arguments\": $INPUT\\n}\\n```\\nObservation: Describe the result of the action taken.\\n... (repeat Thought/Action/Observation as needed, but **ALWAYS proceed to a final `Answer:` statement when you have enough information**)\\nThought: I now have sufficient information to answer the initial question.\\nAnswer: ALWAYS proceed to a final `Answer:` statement once enough information is gathered or if the tools do not provide the necessary data.\\n\\n### Providing a Final Answer\\nOnce you have enough information to answer the question OR if tools cannot provide the necessary data, respond using one of the following formats:\\n\\n1. **Direct Answer without Tools**:\\nThought: I can answer directly without using any tools. Answer: Direct answer based on previous interactions or current knowledge.\\n\\n2. **When All Needed Information is Gathered**:\\nThought: I now have sufficient information to answer the question. Answer: Complete final answer here.\\n\\n3. **If Tools Cannot Provide the Needed Information**:\\nThought: The available tools do not provide the necessary information. Answer: Explanation of limitation and relevant information if possible.\\n\\n### Key Guidelines\\n- Always Conclude with an `Answer:` statement.\\n- Ensure every response ends with an `Answer:` statement that summarizes the most recent findings or relevant information, avoiding incomplete thoughts.\\n- Direct Final Answer for Past or Known Information: If the user inquires about past interactions, respond directly with an Answer: based on the information in chat history.\\n- Avoid Repetitive Thought Statements: If the answer is ready, skip repetitive Thought steps and proceed directly to Answer.\\n- Minimize Redundant Steps: Use minimal Thought/Action/Observation cycles to arrive at a final Answer efficiently.\\n- Reference Past Information When Relevant: Use chat history accurately when answering questions about previous responses to avoid redundancy.\\n- Progressively Move Towards Finality: Reflect on the current step and avoid re-evaluating the entire user request each time. 
Aim to advance towards the final Answer in each cycle.\\n\\n## Chat History\\nThe chat history is provided to avoid repeating information and to ensure accurate references when summarizing past interactions.'), MessagePlaceHolder(variable_name=chat_history)], template_format='jinja2')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'GetWeather'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.tools[0].name"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mHi my name is Roberto\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:No action specified; continuing with further reasoning.\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: Hello Roberto! How can I assist you today?\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Agent provided a direct final answer.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: Answer: Hello Roberto! How can I assist you today?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mHello Roberto! How can I assist you today?\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello Roberto! How can I assist you today?'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await AIAgent.run(\"Hi my name is Roberto\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'user', 'content': 'Hi my name is Roberto'},\n",
" {'content': 'Hello Roberto! How can I assist you today?',\n",
" 'role': 'assistant'}]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.chat_history"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.react.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in Virgina, New York and Washington DC?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'Virginia'}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: I need to get the current weather information for Virginia, New York, and Washington DC. I will fetch the data for each location separately. Let's start with Virginia.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"Virginia\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: Virginia: 77F.\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'New York'}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 3/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: I now have the weather information for Virginia. Next, I will get the weather information for New York.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"New York\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: New York: 68F.\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Executing GetWeather with arguments {'location': 'Washington DC'}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agent.patterns.react.base:Iteration 4/10 started.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: I have the weather information for Virginia and New York. Next, I will get the weather information for Washington DC.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"GetWeather\", \"arguments\": {\"location\": \"Washington DC\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: Washington DC: 69F.\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.react.base:Agent provided a direct final answer.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118mThought: I now have the weather information for all requested locations. \u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\n",
"\u001b[38;2;217;95;118mAnswer: The current weather is as follows:\u001b[0m\n",
"\u001b[38;2;217;95;118m- Virginia: 77°F\u001b[0m\n",
"\u001b[38;2;217;95;118m- New York: 68°F\u001b[0m\n",
"\u001b[38;2;217;95;118m- Washington DC: 69°F\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current weather is as follows:\u001b[0m\n",
"\u001b[38;2;147;191;183m- Virginia: 77°F\u001b[0m\n",
"\u001b[38;2;147;191;183m- New York: 68°F\u001b[0m\n",
"\u001b[38;2;147;191;183m- Washington DC: 69°F\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current weather is as follows:\\n- Virginia: 77°F\\n- New York: 68°F\\n- Washington DC: 69°F'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await AIAgent.run(\"What is the weather in Virgina, New York and Washington DC?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}