Agent Tool Call with Dapr Agents
This quickstart demonstrates how to create an AI agent with custom tools using Dapr Agents. You'll learn how to build a weather assistant that can fetch information and perform actions using defined tools through LLM-powered function calls.
Prerequisites
- Python 3.10 (recommended)
- pip package manager
- OpenAI API key
Environment Setup
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
Configuration
Create a .env file in the project root:
OPENAI_API_KEY=your_api_key_here
Replace your_api_key_here with your actual OpenAI API key.
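If you want to confirm the key is picked up before running the examples, a minimal optional check with python-dotenv (the same library the agent script uses) could look like the sketch below; the file name check_env.py is just a suggestion and not part of the quickstart files.
# check_env.py (optional, hypothetical helper script)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

if os.getenv("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is set.")
else:
    print("OPENAI_API_KEY is missing - check your .env file.")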
Examples
Tool Creation and Agent Execution
This example shows how to create tools and an agent that can use them:
- First, create the tools in weather_tools.py:
from dapr_agents import tool
from pydantic import BaseModel, Field

class GetWeatherSchema(BaseModel):
    location: str = Field(description="location to get weather for")

@tool(args_model=GetWeatherSchema)
def get_weather(location: str) -> str:
    """Get weather information based on location."""
    import random

    temperature = random.randint(60, 80)
    return f"{location}: {temperature}F."

class JumpSchema(BaseModel):
    distance: str = Field(description="Distance for agent to jump")

@tool(args_model=JumpSchema)
def jump(distance: str) -> str:
    """Jump a specific distance."""
    return f"I jumped the following distance {distance}"

tools = [get_weather, jump]
- Then, create the agent in weather_agent.py:
import asyncio
from weather_tools import tools
from dapr_agents import Agent
from dotenv import load_dotenv

load_dotenv()

AIAgent = Agent(
    name="Stevie",
    role="Weather Assistant",
    goal="Assist Humans with weather related tasks.",
    instructions=[
        "Get accurate weather information",
        "From time to time, you can also jump after answering the weather question."
    ],
    tools=tools
)

# Wrap your async call
async def main():
    await AIAgent.run("What is the weather in Virginia, New York and Washington DC?")

if __name__ == "__main__":
    asyncio.run(main())
- Run the weather agent:
python weather_agent.py
Expected output: The agent will identify the locations and use the get_weather tool to fetch weather information for each city.
Key Concepts
Tool Definition
- The @tool decorator registers functions as tools with the agent
- Each tool has a docstring that helps the LLM understand its purpose
- Pydantic models provide type safety for tool arguments (see the sketch below)
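To see that type safety in action, you can validate arguments against a schema directly with Pydantic. This small sketch only exercises the GetWeatherSchema model from weather_tools.py; it does not go through the agent or the LLM.
from pydantic import ValidationError
from weather_tools import GetWeatherSchema

# Well-formed arguments pass validation
args = GetWeatherSchema(location="Virginia")
print(args.location)

# Missing or wrongly typed arguments raise a ValidationError
try:
    GetWeatherSchema()  # no location provided
except ValidationError as exc:
    print(exc)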
Agent Setup
- The Agent class sets up a tool-calling agent by default
- The role, goal, and instructions guide the agent's behavior
- Tools are provided as a list for the agent to use
- Agent Memory keeps the conversation history that the agent can reference
Execution Flow
- The agent receives a user query
- The LLM determines which tool(s) to use based on the query
- The agent executes the tool with appropriate arguments
- The results are returned to the LLM to formulate a response
- The final answer is provided to the user
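To make the flow above concrete, here is a simplified conceptual sketch in plain Python. It is only an illustration of the loop; the helper names plan_tool_calls and compose_answer are hypothetical and are not part of the Dapr Agents API.
# Conceptual illustration of the tool-calling loop (not Dapr Agents internals)
def handle_query(query, llm, tools_by_name):
    # 1-2. The LLM sees the query plus the tool descriptions and picks tools
    tool_calls = llm.plan_tool_calls(query, tools_by_name)  # hypothetical helper
    # 3. The agent executes each chosen tool with the LLM-provided arguments
    results = [tools_by_name[call.name](**call.arguments) for call in tool_calls]
    # 4. The results go back to the LLM to formulate a response
    answer = llm.compose_answer(query, results)  # hypothetical helper
    # 5. The final answer is returned to the user
    return answer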
Working with Agent Memory
You can also access and manage the agent's conversation history. Add this code fragment to the end of the main() function in weather_agent.py and run it again.
# View the history after first interaction
print("Chat history after first interaction:")
print(AIAgent.chat_history)
# Second interaction (agent will remember the first one)
await AIAgent.run("How about in Seattle?")
# View updated history
print("Chat history after second interaction:")
print(AIAgent.chat_history)
# Reset memory
AIAgent.reset_memory()
print("Chat history after reset:")
print(AIAgent.chat_history) # Should be empty now
This shows how the agent's interaction history grows across turns and how it can be reset.
Persistent Agent Memory
Dapr Agents lets agents retain long-term memory by automatically managing the conversation history as state. This state can be saved to any of the many Dapr-supported state stores.
To configure persistent agent memory, follow these steps:
- Set up the state store configuration. Here's an example of working with local Redis:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: historystore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
Save the file in a ./components directory.
- Enable Dapr memory in code
import asyncio
from weather_tools import tools
from dapr_agents import Agent
from dotenv import load_dotenv
from dapr_agents.memory import ConversationDaprStateMemory

load_dotenv()

AIAgent = Agent(
    name="Stevie",
    role="Weather Assistant",
    goal="Assist Humans with weather related tasks.",
    instructions=[
        "Get accurate weather information",
        "From time to time, you can also jump after answering the weather question."
    ],
    memory=ConversationDaprStateMemory(store_name="historystore", session_id="some-id"),
    tools=tools
)

# Wrap your async call
async def main():
    await AIAgent.run("What is the weather in Virginia, New York and Washington DC?")

if __name__ == "__main__":
    asyncio.run(main())
- Run the agent with Dapr
dapr run --app-id weatheragent --resources-path ./components -- python weather_agent.py
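Because the history is now persisted in the state store under the given session_id, it survives restarts. As a quick check, you can ask a follow-up question in a second dapr run invocation; this sketch assumes the same ConversationDaprStateMemory configuration shown above, so the earlier conversation for session "some-id" is reloaded for the agent.
# Second run of weather_agent.py with the same session_id ("some-id"):
# the persisted history lets the agent relate this question to the
# cities asked about in the previous run.
async def main():
    await AIAgent.run("Which of the cities I asked about earlier was the warmest?")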
Available Agent Types
Dapr Agents provides several agent implementations, each designed for different use cases:
1. Standard Agent (ToolCallAgent)
The default agent type, designed for tool execution and straightforward interactions. It receives your input, determines which tools to use, executes them directly, and provides the final answer. The reasoning process is mostly hidden; the agent focuses on delivering concise responses.
2. ReActAgent
Implements the ReAct (Reason + Act) framework for more complex problem-solving with explicit thought processes. When you interact with it, you'll see "Thought", "Action", and "Observation" steps as it works through your request, providing transparency into how it reaches conclusions.
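A minimal sketch of creating a ReAct-style agent with the same weather tools might look like the following. This assumes the pattern parameter accepts "react", by analogy with the "openapireact" pattern shown in the next example; check the Dapr Agents documentation for the exact value.
from dapr_agents import Agent
from weather_tools import tools

# Assumption: pattern="react" selects the ReAct loop, analogous to
# the "openapireact" pattern used in the OpenAPIReActAgent example below.
react_agent = Agent(
    name="Stevie",
    role="Weather Assistant",
    goal="Assist Humans with weather related tasks.",
    pattern="react",
    tools=tools
)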
3. OpenAPIReActAgent
There is one more agent type that we didn't run in this quickstart. The OpenAPIReActAgent is a specialized agent for working with OpenAPI specifications and API integrations. When you ask about working with an API, it will methodically identify relevant endpoints, construct proper requests with parameters, handle authentication, and execute API calls on your behalf.
from dapr_agents import Agent
from dapr_agents.tool.utils.openapi import OpenAPISpecParser
from dapr_agents.storage import VectorStore

# This agent type requires additional components
openapi_agent = Agent(
    name="APIExpert",
    role="API Expert",
    pattern="openapireact",  # Specify OpenAPIReAct pattern
    spec_parser=OpenAPISpecParser(),
    api_vector_store=VectorStore(),
    auth_header={"Authorization": "Bearer token"}
)
Troubleshooting
- OpenAI API Key: Ensure your key is correctly set in the .env file
- Tool Execution Errors: Check tool function implementations for exceptions
- Module Import Errors: Verify that requirements are installed correctly
Next Steps
After completing this quickstart, move on to the Agentic Workflow quickstart to learn how to orchestrate multi-step processes combining deterministic tasks with LLM-powered reasoning.