Hello World with Dapr Agents
This quickstart provides a hands-on introduction to Dapr Agents through simple examples. You'll learn the fundamentals of working with LLMs, creating basic agents, implementing the ReAct pattern, and setting up simple workflows - all in fewer than 20 lines of code per example.
Prerequisites
- Python 3.10 (recommended)
- pip package manager
- OpenAI API key
Environment Setup
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
Configuration
Create a .env file in the project root:
OPENAI_API_KEY=your_api_key_here
Replace your_api_key_here with your actual OpenAI API key.
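If you want to confirm the key is being picked up before running the examples, a minimal check using python-dotenv (the same loader the examples use) looks like the sketch below; the error message is just for illustration:
import os
from dotenv import load_dotenv

# Read variables from the .env file into the process environment
load_dotenv()

# Fail early with a clear message if the key is missing
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; check your .env file")
print("OPENAI_API_KEY loaded")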
Examples
1. Basic LLM Usage
Run the basic LLM example to see how to interact with OpenAI's language models:
python 01_ask_llm.py
This example demonstrates the simplest way to use Dapr Agents' OpenAIChatClient:
from dapr_agents import OpenAIChatClient
from dotenv import load_dotenv
load_dotenv()
llm = OpenAIChatClient()
response = llm.generate("Tell me a joke")
print("Got response:", response.get_content())
Expected output: The LLM will respond with a joke.
2. Simple Agent with Tools
Run the agent example to see how to create an agent with custom tools:
python 02_build_agent.py
This example shows how to create a basic agent with a custom tool:
import asyncio
from dapr_agents import tool, Agent
from dotenv import load_dotenv
load_dotenv()
@tool
def my_weather_func() -> str:
    """Get current weather."""
    return "It's 72°F and sunny"

async def main():
    weather_agent = Agent(
        name="WeatherAgent",
        role="Weather Assistant",
        instructions=["Help users with weather information"],
        tools=[my_weather_func]
    )

    response = await weather_agent.run("What's the weather?")
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
Expected output: The agent will use the weather tool to provide the current weather.
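Tools can also take arguments. As a sketch of how that looks (the forecast_for_city tool and its data below are made up for illustration and are not part of the quickstart files), the same agent could be given a second, parameterized tool:
@tool
def forecast_for_city(city: str) -> str:
    """Get a simple forecast for a specific city (illustrative data only)."""
    forecasts = {"london": "Light rain, 15°C", "paris": "Clear skies, 22°C"}
    return forecasts.get(city.lower(), "No forecast available")

weather_agent = Agent(
    name="WeatherAgent",
    role="Weather Assistant",
    instructions=["Help users with weather information"],
    tools=[my_weather_func, forecast_for_city]
)
The docstring doubles as the tool description the LLM sees, so keep it short and descriptive.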
3. ReAct Pattern Implementation
Run the ReAct pattern example to see how to create an agent that can reason and act:
python 03_reason_act.py
import asyncio
from dapr_agents import tool, ReActAgent
from dotenv import load_dotenv
load_dotenv()
@tool
def search_weather(city: str) -> str:
    """Get weather information for a city."""
    weather_data = {"london": "rainy", "paris": "sunny"}
    return weather_data.get(city.lower(), "Unknown")

@tool
def get_activities(weather: str) -> str:
    """Get activity recommendations."""
    activities = {"rainy": "Visit museums", "sunny": "Go hiking"}
    return activities.get(weather.lower(), "Stay comfortable")

async def main():
    react_agent = ReActAgent(
        name="TravelAgent",
        role="Travel Assistant",
        instructions=["Check weather, then suggest activities"],
        tools=[search_weather, get_activities]
    )

    result = await react_agent.run("What should I do in London today?")
    if result:
        print("Result:", result)

if __name__ == "__main__":
    asyncio.run(main())
Expected output: The agent will first check the weather in London, find it's rainy, and then recommend visiting museums.
4. Simple Workflow
Make sure Dapr is initialized on your system:
dapr init
Run the workflow example to see how to create a multi-step LLM process:
dapr run --app-id dapr-agent-wf -- python 04_chain_tasks.py
This example demonstrates how to create a workflow with multiple tasks:
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
from dotenv import load_dotenv
load_dotenv()
@workflow(name='analyze_topic')
def analyze_topic(ctx: DaprWorkflowContext, topic: str):
    # Each step is durable and can be retried
    outline = yield ctx.call_activity(create_outline, input=topic)
    blog_post = yield ctx.call_activity(write_blog, input=outline)
    return blog_post

@task(description="Create a detailed outline about {topic}")
def create_outline(topic: str) -> str:
    # No body needed: the LLM produces the result from the task description
    pass

@task(description="Write a comprehensive blog post following this outline: {outline}")
def write_blog(outline: str) -> str:
    pass

if __name__ == '__main__':
    wfapp = WorkflowApp()

    results = wfapp.run_and_monitor_workflow_sync(
        analyze_topic,
        input="AI Agents"
    )
    print(f"Result: {results}")
Expected output: The workflow will create an outline about AI Agents and then generate a blog post based on that outline.
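The chain can be extended with more steps using the same constructs. As a rough sketch (the summarize_post task and the second workflow below are hypothetical additions, not part of the quickstart), a third activity could follow the blog post:
@task(description="Summarize the following blog post in three sentences: {blog_post}")
def summarize_post(blog_post: str) -> str:
    pass

@workflow(name='analyze_topic_with_summary')
def analyze_topic_with_summary(ctx: DaprWorkflowContext, topic: str):
    outline = yield ctx.call_activity(create_outline, input=topic)
    blog_post = yield ctx.call_activity(write_blog, input=outline)
    # The new step chains off the previous result, just like the earlier ones
    summary = yield ctx.call_activity(summarize_post, input=blog_post)
    return summary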
Key Concepts
- OpenAIChatClient: The interface for interacting with OpenAI's LLMs
- Agent: A class that combines an LLM with tools and instructions
- @tool decorator: A way to create tools that agents can use
- ReActAgent: An agent that follows the Reasoning + Action pattern
- WorkflowApp: A Dapr-powered way to create stateful, multi-step processes
Dapr Integration
These examples don't directly expose Dapr building blocks, but they are built on Dapr Agents, which behind the scenes leverages the full capabilities of the Dapr runtime:
- Resilience: Built-in retry policies, circuit breaking, and timeout handling for interactions with external systems
- Orchestration: Stateful, durable workflows that can survive process restarts and continue execution from where they left off
- Interoperability: Pluggable component architecture that works with various backends and cloud services without changing application code
- Scalability: Distribute agents across infrastructure, from local development to multi-node Kubernetes clusters
- Event-Driven: Pub/Sub messaging for event-driven agent collaboration and coordination
- Observability: Integrated distributed tracing, metrics collection, and logging for visibility into agent operations
- Security: Protection through scoping, encryption, secret management, and authentication/authorization controls
In the later quickstarts, you'll see explicit Dapr integration through state stores, pub/sub, and workflow services.
Troubleshooting
- API Key Issues: If you see an authentication error, verify your OpenAI API key in the .env file
- Python Version: If you encounter compatibility issues, make sure you're using Python 3.10+
- Environment Activation: Ensure your virtual environment is activated before running examples
- Import Errors: If you see module not found errors, verify that pip install -r requirements.txt completed successfully
Next Steps
After completing these examples, move on to the LLM Call quickstart to learn more about structured outputs from LLMs.