Initial import of Dapr Agents docs

Signed-off-by: Bilgin Ibryam <bibryam@gmail.com>
Bilgin Ibryam 2025-07-17 11:05:21 +01:00
parent 01e8b9b6a7
commit d853078d7b
56 changed files with 1154 additions and 1 deletions

@@ -0,0 +1,115 @@
---
type: docs
title: "Contributing to Dapr agents"
linkTitle: "Dapr agents"
weight: 85
description: Guidelines for contributing to Dapr agents
---
When contributing to Dapr Agents, follow the rules and best practices outlined below.
## Examples
The examples directory contains code samples for users to run to try out specific functionality of the various Dapr agents packages and extensions. When writing new and updated samples keep in mind:
- All examples should be runnable on Windows, Linux, and macOS. While Python code is consistent across operating systems, any pre/post example commands should provide options through [codetabs]({{< ref "contributing-docs.md#tabbed-content" >}})
- Contain steps to download/install any required prerequisites. Someone starting with a fresh OS install should be able to follow the example and complete it without errors. Links to external download pages are fine.
## Dependencies
This project uses modern Python packaging with `pyproject.toml`. Dependencies are managed as follows:
- Main dependencies are in `[project.dependencies]`
- Test dependencies are in `[project.optional-dependencies.test]`
- Development dependencies are in `[project.optional-dependencies.dev]`
### Generating Requirements Files
If you need to generate requirements files (e.g., for deployment or specific environments):
```bash
# Generate requirements.txt
pip-compile pyproject.toml
# Generate dev-requirements.txt (includes the dev extra)
pip-compile pyproject.toml --extra dev --output-file dev-requirements.txt
```
### Installing Dependencies
```bash
# Install main package with test dependencies
pip install -e ".[test]"
# Install main package with development dependencies
pip install -e ".[dev]"
# Install main package with all optional dependencies
pip install -e ".[test,dev]"
```
## Testing
The project uses pytest for testing. To run tests:
```bash
# Run all tests
tox -e pytest
# Run a specific test file (arguments after -- are passed through to pytest)
tox -e pytest -- tests/test_random_orchestrator.py
# Run tests with coverage
tox -e pytest -- --cov=dapr_agents
```
## Code Quality
The project uses several tools to maintain code quality:
```bash
# Run linting
tox -e flake8
# Run code formatting
tox -e ruff
# Run type checking
tox -e type
```
## Development Workflow
1. Install development dependencies:
```bash
pip install -e ".[dev]"
```
2. Run tests before making changes:
```bash
tox -e pytest
```
3. Make your changes
4. Run code quality checks:
```bash
tox -e flake8
tox -e ruff
tox -e type
```
5. Run tests again:
```bash
tox -e pytest
```
6. Submit your changes
## GitHub Dapr Bot Commands
Check out the [daprbot documentation]({{< ref "daprbot.md" >}}) for GitHub commands you can run in this repo for common tasks. For example, you can comment `/assign` on an issue to assign it to a user or group of users.

@@ -1 +1,12 @@
---
type: docs
title: "Dapr Agents"
linkTitle: "Dapr Agents"
weight: 25
description: "A framework for building production-grade resilient AI agent systems at scale"
---
### What is Dapr Agents?
Dapr Agents is a framework for building LLM-powered autonomous agentic applications using Dapr's distributed systems capabilities. It provides tools for creating agents that can execute tasks, make decisions, and collaborate through workflows, while leveraging Dapr's state management, messaging, and observability features for reliable execution at scale.

@@ -0,0 +1,259 @@
---
type: docs
title: "Core Concepts"
linkTitle: "Core Concepts"
weight: 30
description: "Learn about the core concepts and principles of Dapr Agents"
---
# Core Concepts
## Core Principles
![Agent Overview](/images/dapr-agents/concepts-agents-overview.png)
### 1. Agent-Centric Design
Dapr Agents is designed to place agents, powered by LLMs, at the core of task execution and workflow orchestration. This principle emphasizes:
* **LLM-Powered Agents**: Dapr Agents enables the creation of agents that leverage LLMs for reasoning, dynamic decision-making, and natural language interactions.
* **Adaptive Task Handling**: Agents in Dapr Agents are equipped with flexible patterns like tool calling and reasoning loops (e.g., ReAct), allowing them to autonomously tackle complex and evolving tasks.
* **Seamless Integration**: Dapr Agents' framework allows agents to act as modular, reusable building blocks that integrate seamlessly into workflows, whether they operate independently or collaboratively.
While Dapr Agents centers around agents, it also recognizes the versatility of using LLMs directly in deterministic workflows or simpler task sequences. In scenarios where the agent's built-in task-handling patterns, like `tool calling` or `ReAct` loops, are unnecessary, LLMs can act as core components for reasoning and decision-making. This flexibility ensures users can adapt Dapr Agents to suit diverse needs without being confined to a single approach.
{{% alert title="Note" color="info" %}}
Agents are not standalone; they are building blocks in larger, orchestrated workflows.
{{% /alert %}}
### 2. Decoupled Infrastructure Design
Dapr Agents ensures a clean separation between agents and the underlying infrastructure, emphasizing simplicity, scalability, and adaptability:
* **Agent Simplicity**: Agents focus purely on reasoning and task execution, while Pub/Sub messaging, routing, and validation are managed externally by modular infrastructure components.
* **Scalable and Adaptable Systems**: By offloading non-agent-specific responsibilities, Dapr Agents allows agents to scale independently and adapt seamlessly to new use cases or integrations.
{{% alert title="Note" color="info" %}}
Decoupling infrastructure keeps agents focused on tasks while enabling seamless scalability and integration across systems.
{{% /alert %}}
![Decoupled Principles](/images/dapr-agents/home_concepts_principles_decoupled.png)
### 3. Modular Component Model
Dapr Agents utilizes [Dapr's pluggable component framework](https://docs.dapr.io/concepts/components-concept/) and building blocks to simplify development and enhance flexibility:
* **Building Blocks for Core Functionality**: Dapr provides API building blocks, such as Pub/Sub messaging, state management, service invocation, and more, to address common microservice challenges and promote best practices.
* **Interchangeable Components**: Each building block operates on swappable components (e.g., Redis, Kafka, Azure CosmosDB), allowing you to replace implementations without changing application code.
* **Seamless Transitions**: Develop locally with default configurations and deploy effortlessly to cloud environments by simply updating component definitions.
* **Scalable Foundations**: Build resilient and adaptable architectures using Dapr's modular, production-ready building blocks.
{{% alert title="Note" color="info" %}}
Developers can easily switch between different components (e.g., Redis to DynamoDB) based on their deployment environment, ensuring portability and adaptability.
{{% /alert %}}
![Modular Principles](/images/dapr-agents/home_concepts_principles_modular.png)
### 4. Actor-Based Model for Agents
Dapr Agents leverages [Dapr's Virtual Actor model](https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/) to enable agents to function efficiently and flexibly within distributed environments. Each agent in Dapr Agents is instantiated as an instance of a class, wrapped and managed by a virtual actor. This design offers:
* **Stateful Agents**: Virtual actors allow agents to store and recall information across tasks, maintaining context and continuity for workflows.
* **Dynamic Lifecycle Management**: Virtual actors are automatically instantiated when invoked and deactivated when idle. This eliminates the need for explicit creation or cleanup, ensuring resource efficiency and simplicity.
* **Location Transparency**: Agents can be accessed and operate seamlessly, regardless of where they are located in the system. The underlying runtime handles their mobility, enabling fault-tolerance and dynamic load balancing.
* **Scalable Execution**: Agents process one task at a time, avoiding concurrency issues, and scale dynamically across nodes to meet workload demands.
This model ensures agents remain focused on their core logic, while the infrastructure abstracts complexities like state management, fault recovery, and resource optimization.
{{% alert title="Note" color="info" %}}
Dapr Agents' use of virtual actors makes agents always addressable and highly scalable, enabling them to operate reliably and efficiently in distributed, high-demand environments.
{{% /alert %}}
### 5. Message-Driven Communication
Dapr Agents emphasizes the use of Pub/Sub messaging for event-driven communication between agents. This principle ensures:
* **Decoupled Architecture**: Asynchronous communication for scalability and modularity.
* **Real-Time Adaptability**: Agents react dynamically to events for faster, more flexible task execution.
* **Seamless Collaboration**: Agents share updates, distribute tasks, and respond to events in a highly coordinated way.
{{% alert title="Note" color="info" %}}
Pub/Sub messaging serves as the backbone for Dapr Agents' event-driven workflows, enabling agents to communicate and collaborate in real time.
{{% /alert %}}
![Message Principles](/images/dapr-agents/home_concepts_principles_message.png)
### 6. Workflow-Oriented Design
Dapr Agents embraces workflows as a foundational concept, integrating [Dapr Workflows](https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-overview/) to support both deterministic and event-driven task orchestration. This dual approach enables robust and adaptive systems:
* **Deterministic Workflows**: Dapr Agents uses Dapr Workflows for stateful, predictable task sequences. These workflows ensure reliable execution, fault tolerance, and state persistence, making them ideal for structured, multi-step processes that require clear, repeatable logic.
* **Event-Driven Workflows**: By combining Dapr Workflows with Pub/Sub messaging, Dapr Agents supports workflows that adapt to real-time events. This facilitates decentralized, asynchronous collaboration between agents, allowing workflows to dynamically adjust to changing scenarios.
By integrating these paradigms, Dapr Agents enables workflows that combine the reliability of deterministic execution with the adaptability of event-driven processes, ensuring flexibility and resilience in a wide range of applications.
{{% alert title="Note" color="info" %}}
Dapr Agents workflows blend structured, predictable logic with the dynamic responsiveness of event-driven systems, empowering both centralized and decentralized workflows.
{{% /alert %}}
![Workflow Principles](/images/dapr-agents/home_concepts_principles_workflows.png)
## Agents
Agents in `Dapr Agents` are autonomous systems powered by Large Language Models (LLMs), designed to execute tasks, reason through problems, and collaborate within workflows. Acting as intelligent building blocks, agents seamlessly combine LLM-driven reasoning with tool integration, memory, and collaboration features to enable scalable, production-grade agentic systems.
![Concepts Agents](/images/dapr-agents/concepts-agents.png)
### Core Features
#### 1. LLM Integration
Dapr Agents provides a unified interface to connect with LLM inference APIs. This abstraction allows developers to seamlessly integrate their agents with cutting-edge language models for reasoning and decision-making.
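For example, a minimal completion call with the `OpenAIChatClient` (following the quickstarts; assumes `OPENAI_API_KEY` is set in your environment) looks roughly like this:
```python
from dotenv import load_dotenv
from dapr_agents import OpenAIChatClient

load_dotenv()  # loads OPENAI_API_KEY from a local .env file

llm = OpenAIChatClient()
response = llm.generate("Name a famous dog!")
print(response.get_content())  # accessor name follows the quickstarts; may vary by version
```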
#### 2. Structured Outputs
Agents in Dapr Agents leverage structured output capabilities, such as [OpenAI's Function Calling](https://platform.openai.com/docs/guides/function-calling), to generate predictable and reliable results. These outputs follow [JSON Schema Draft 2020-12](https://json-schema.org/draft/2020-12/release-notes.html) and [OpenAPI Specification v3.1.0](https://github.com/OAI/OpenAPI-Specification) standards, enabling easy interoperability and tool integration.
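As a sketch of what this looks like in practice (adapted from the OpenAI client quickstart; the `UserMessage` import path is an assumption if it differs in your version):
```python
from pydantic import BaseModel
from dotenv import load_dotenv
from dapr_agents import OpenAIChatClient
from dapr_agents.types import UserMessage

load_dotenv()

class Dog(BaseModel):
    name: str
    breed: str
    reason: str

llm = OpenAIChatClient()
# response_format coerces the LLM output into a validated Pydantic object
response: Dog = llm.generate(
    messages=[UserMessage("One famous dog in history.")],
    response_format=Dog,
)
print(response)
```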
#### 3. Tool Selection
Agents dynamically select the appropriate tool for a given task, using LLMs to analyze requirements and choose the best action. This is supported directly through LLM parametric knowledge and enhanced by [Function Calling](https://platform.openai.com/docs/guides/function-calling), ensuring tools are invoked efficiently and accurately.
#### 4. MCP Support
Dapr Agents includes built-in support for the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/), enabling agents to dynamically discover and invoke external tools through a standardized interface. Using the provided `MCPClient`, agents can connect to MCP servers via two transport options: `stdio` for local development and `sse` for remote or distributed environments.
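A rough sketch of discovering MCP tools over `stdio` (the import path and method names are assumptions based on the MCP quickstart; check the current API):
```python
import asyncio
from dapr_agents.tool.mcp import MCPClient  # import path assumed

async def load_mcp_tools():
    client = MCPClient()
    # Connect to a local MCP server process over stdio; command/args are placeholders
    await client.connect_stdio(
        server_name="local",
        command="python",
        args=["tools_server.py"],
    )
    # Discovered tools can then be passed to an agent's `tools` list
    return client.get_all_tools()

tools = asyncio.run(load_mcp_tools())
```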
#### 5. Memory
Agents retain context across interactions, enhancing their ability to provide coherent and adaptive responses. Memory options range from simple in-memory lists for managing chat history to vector databases for semantic search and retrieval.
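For instance, swapping the default in-memory chat history for a Dapr state-store-backed memory might look like this (class names are taken from the `dapr_agents.memory` module as used in the quickstarts; treat them as assumptions if your version differs):
```python
from dapr_agents import Agent
from dapr_agents.memory import ConversationDaprStateMemory  # module path assumed

agent = Agent(
    name="Assistant",
    role="Helpful assistant",
    # Persists chat history in a Dapr state store component named "conversationstore"
    memory=ConversationDaprStateMemory(store_name="conversationstore", session_id="session-1"),
)
```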
#### 6. Prompt Flexibility
Dapr Agents supports flexible prompt templates to shape agent behavior and reasoning. Users can define placeholders within prompts, enabling dynamic input of context for inference calls.
#### 7. Agent Services
Agents are exposed as independent services using [FastAPI and Dapr applications](https://docs.dapr.io/developing-applications/sdks/python/python-sdk-extensions/python-fastapi/). This modular approach separates the agent's logic from its service layer, enabling seamless reuse, deployment, and integration into multi-agent systems.
#### 8. Message-Driven Communication
Agents collaborate through [Pub/Sub messaging](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/), enabling event-driven communication and task distribution. This message-driven architecture allows agents to work asynchronously, share updates, and respond to real-time events, ensuring effective collaboration in distributed systems.
#### 9. Workflow Orchestration
Dapr Agents supports both deterministic and event-driven workflows to manage multi-agent systems via [Dapr Workflows](https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-overview/). Deterministic workflows provide clear, repeatable processes, while event-driven workflows allow for dynamic, adaptive collaboration between agents in centralized or decentralized architectures.
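A minimal sketch of LLM-powered tasks chained in a deterministic workflow (adapted from the agentic workflow quickstart; decorator and helper names may differ slightly across versions):
```python
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext

@workflow(name="lotr_workflow")
def lotr_workflow(ctx: DaprWorkflowContext):
    # Each step is checkpointed by the workflow engine and survives restarts
    character = yield ctx.call_activity(get_character)
    line = yield ctx.call_activity(get_line, input={"character": character})
    return line

@task(description="Pick a random character from The Lord of the Rings and respond with the name only.")
def get_character() -> str:
    pass  # the body is fulfilled by the LLM at runtime

@task(description="What is a famous line by {character}?")
def get_line(character: str) -> str:
    pass

if __name__ == "__main__":
    wfapp = WorkflowApp()
    print(wfapp.run_and_monitor_workflow_sync(lotr_workflow))
```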
### Agent Types
Dapr Agents provides two agent types, each designed for different use cases:
#### Agent
The `Agent` class is a conversational agent that manages tool calls and conversations using a language model. It provides immediate, synchronous execution with built-in conversation memory and tool integration capabilities.
**Key Characteristics:**
- Synchronous execution with immediate responses
- Built-in conversation memory and tool history tracking
- Iterative conversation processing with max iteration limits
- Direct tool execution and result processing
- Graceful shutdown support with cancellation handling
**When to use:**
- Building conversational assistants that need immediate responses
- Scenarios requiring real-time tool execution and conversation flow
- When you need direct control over the conversation loop
- Quick prototyping and development of agent interactions
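A minimal example, following the tool-calling quickstart (assumes `OPENAI_API_KEY` is set in your environment):
```python
import asyncio
from dapr_agents import Agent, tool

@tool
def get_weather(city: str) -> str:
    """Return the (mock) current weather for a city."""
    return f"It's 72°F and sunny in {city}."

async def main():
    weather_agent = Agent(
        name="WeatherAgent",
        role="Weather Assistant",
        instructions=["Help users with weather questions using the available tools."],
        tools=[get_weather],
    )
    print(await weather_agent.run("What's the weather in Oslo?"))

if __name__ == "__main__":
    asyncio.run(main())
```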
#### DurableAgent
The `DurableAgent` class is a workflow-based agent that extends the standard Agent with Dapr Workflows for long-running, fault-tolerant, and durable execution. It provides persistent state management, automatic retry mechanisms, and deterministic execution across failures.
**Key Characteristics:**
- Workflow-based execution using Dapr Workflows
- Persistent workflow state management across sessions and failures
- Automatic retry and recovery mechanisms
- Deterministic execution with checkpointing
- Built-in message routing and agent communication
- Supports complex orchestration patterns and multi-agent collaboration
**When to use:**
- Multi-step workflows that span time or systems
- Tasks requiring guaranteed progress tracking and state persistence
- Scenarios where operations may pause, fail, or need recovery without data loss
- Complex agent orchestration and multi-agent collaboration
- Production systems requiring fault tolerance and scalability
### Agent Patterns
In Dapr Agents, Agent Patterns define the built-in loops that allow agents to dynamically handle tasks. These patterns enable agents to iteratively reason, act, and adapt, making them flexible and capable problem-solvers.
#### Tool Calling
Tool Calling is an essential pattern in autonomous agent design, allowing AI agents to interact dynamically with external tools based on user input. One reliable method for enabling this is through [OpenAI's Function Calling](https://platform.openai.com/docs/guides/function-calling) capability.
![Tool Call Flow](/images/dapr-agents/concepts_agents_toolcall_flow.png)
1. The user submits a query specifying a task and the available tools.
2. The LLM analyzes the query and selects the right tool for the task.
3. The LLM provides a structured JSON output containing the tool's unique ID, name, and arguments.
4. The AI agent parses the JSON, executes the tool with the provided arguments, and sends the results back as a tool message.
5. The LLM then summarizes the tool's execution results within the user's context to deliver a comprehensive final response.
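The structured output in steps 3 and 4 follows the OpenAI Chat Completions shape, roughly:
```python
# Step 3: the LLM returns a structured tool call (OpenAI-style shape)
tool_call = {
    "id": "call_abc123",                  # unique ID the agent must echo back
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "Oslo"}',  # arguments arrive as a JSON string
    },
}

# Step 4: the agent executes the tool and replies with a tool message
tool_result_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",        # links the result to the originating call
    "content": "It's 72°F and sunny in Oslo.",
}
```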
#### ReAct
The [ReAct (Reason + Act)](https://arxiv.org/pdf/2210.03629.pdf) pattern was introduced in 2022 to enhance the capabilities of LLM-based AI agents by combining reasoning with action. This approach allows agents not only to reason through complex tasks but also to interact with the environment, taking actions based on their reasoning and observing the outcomes.
![ReAct Flow](/images/dapr-agents/concepts_agents_react_flow.png)
* **Thought (Reasoning)**: The agent analyzes the situation and generates a thought or a plan based on the input.
* **Action**: The agent takes an action based on its reasoning.
* **Observation**: After the action is executed, the agent observes the results or feedback from the environment, assessing the effectiveness of its action.
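The loop can be illustrated with a self-contained toy (a scripted stand-in for the LLM; this is not the Dapr Agents API):
```python
# Toy ReAct loop: the "LLM" alternates Thought -> Action -> Observation until it answers.
def fake_llm(transcript: str) -> dict:
    # A real implementation would call a language model with the transcript
    if "Observation:" not in transcript:
        return {"thought": "I should look up the weather first.",
                "action": "get_weather", "input": "Oslo"}
    return {"thought": "I now know the answer.",
            "final_answer": "It is sunny in Oslo."}

def get_weather(city: str) -> str:
    return f"It is sunny in {city}."

def react_loop(question: str, max_steps: int = 5) -> str:
    tools = {"get_weather": get_weather}
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)                          # Thought (reasoning)
        transcript += f"Thought: {step['thought']}\n"
        if "final_answer" in step:
            return step["final_answer"]
        observation = tools[step["action"]](step["input"])   # Action + Observation
        transcript += f"Action: {step['action']}\nObservation: {observation}\n"
    return "No answer within the step budget."

print(react_loop("What's the weather in Oslo?"))
```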
## Messaging
Messaging is how agents communicate, collaborate, and adapt in workflows. It enables them to share updates, execute tasks, and respond to events seamlessly. Messaging is one of the main components of `event-driven` agentic workflows, ensuring tasks remain scalable, adaptable, and decoupled. Built entirely around the `Pub/Sub (publish/subscribe)` model, messaging leverages a message bus to facilitate communication across agents, services, and workflows.
### Key Role of Messaging in Agentic Workflows
Messaging connects agents in workflows, enabling real-time communication and coordination. It acts as the backbone of event-driven interactions, ensuring that agents work together effectively without requiring direct connections.
Through messaging, agents can:
* **Collaborate Across Tasks**: Agents exchange messages to share updates, broadcast events, or deliver task results.
* **Orchestrate Workflows**: Tasks are triggered and coordinated through published messages, enabling workflows to adjust dynamically.
* **Respond to Events**: Agents adapt to real-time changes by subscribing to relevant topics and processing events as they occur.
By using messaging, workflows remain modular and scalable, with agents focusing on their specific roles while seamlessly participating in the broader system.
### How Messaging Works
Messaging relies on the `Pub/Sub` model, which organizes communication into topics. These topics act as channels where agents can publish and subscribe to messages, enabling efficient and decoupled communication.
#### Message Bus and Topics
The message bus serves as the central system that manages topics and message delivery. Agents interact with the message bus to send and receive messages:
* **Publishing Messages**: Agents publish messages to a specific topic, making the information available to all subscribed agents.
* **Subscribing to Topics**: Agents subscribe to topics relevant to their roles, ensuring they only receive the messages they need.
* **Broadcasting Updates**: Multiple agents can subscribe to the same topic, allowing them to act on shared events or updates.
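With the Dapr Python SDK, publishing such a message is a short call against the message bus (assumes a running Dapr sidecar and a pubsub component named `messagepubsub`; the topic name is illustrative):
```python
import json
from dapr.clients import DaprClient

with DaprClient() as client:
    client.publish_event(
        pubsub_name="messagepubsub",   # matches the pubsub component name
        topic_name="agent.tasks",      # illustrative topic name
        data=json.dumps({"task": "scout ahead", "assignee": "Legolas"}),
        data_content_type="application/json",
    )
```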
#### Scalability and Adaptability
The message bus ensures that communication scales effortlessly, whether you are adding new agents, expanding workflows, or adapting to changing requirements. Agents remain loosely coupled, allowing workflows to evolve without disruptions.
### Messaging in Event-Driven Workflows
Event-driven workflows depend on messaging to enable dynamic and real-time interactions. Unlike deterministic workflows, which follow a fixed sequence of tasks, event-driven workflows respond to the messages and events flowing through the system.
* **Real-Time Triggers**: Agents can initiate tasks or workflows by publishing specific events.
* **Asynchronous Execution**: Tasks are coordinated through messages, allowing agents to operate independently and in parallel.
* **Dynamic Adaptation**: Agents adjust their behavior based on the messages they receive, ensuring workflows remain flexible and resilient.
### Why Pub/Sub Messaging for Agentic Workflows?
Pub/Sub messaging is essential for event-driven agentic workflows because it:
* **Decouples Components**: Agents publish messages without needing to know which agents will receive them, promoting modular and scalable designs.
* **Enables Real-Time Communication**: Messages are delivered as events occur, allowing agents to react instantly.
* **Fosters Collaboration**: Multiple agents can subscribe to the same topic, making it easy to share updates or divide responsibilities.
This messaging framework ensures that agents operate efficiently, workflows remain flexible, and systems can scale dynamically.

@@ -0,0 +1,133 @@
---
type: docs
title: "Getting Started"
linkTitle: "Getting Started"
weight: 20
description: "How to install and set up Dapr Agents"
---
## Install Dapr Agents
{{% alert title="Note" color="info" %}}
Make sure you have Python already installed (version `>= 3.10`).
{{% /alert %}}
### As a Python package using Pip
```bash
pip install dapr-agents
```
## Install Dapr CLI
Install the Dapr CLI to manage Dapr-related tasks like running applications with sidecars, viewing logs, and launching the Dapr dashboard. It works seamlessly with both self-hosted and Kubernetes environments. For a complete step-by-step guide, visit the official [Dapr CLI installation page](https://docs.dapr.io/getting-started/install-dapr-cli/).
Verify the CLI is installed by restarting your terminal/command prompt and running the following:
```bash
dapr -h
```
## Initialize Dapr in Local Mode
{{% alert title="Note" color="info" %}}
Make sure you have [Docker](https://docs.docker.com/get-started/get-docker/) already installed, for example via [Docker Desktop](https://www.docker.com/products/docker-desktop/).
{{% /alert %}}
Initialize Dapr locally to set up a self-hosted environment for development. This process fetches and installs the Dapr sidecar binaries, runs essential services as Docker containers, and prepares a default components folder for your application. For detailed steps, see the official [guide on initializing Dapr locally](https://docs.dapr.io/getting-started/install-dapr-selfhost/).
![Dapr Initialization](/images/dapr-agents/home_installation_init.png)
To initialize the Dapr control plane containers and create a default configuration file, run:
```bash
dapr init
```
Verify you have container instances with `daprio/dapr`, `openzipkin/zipkin`, and `redis` images running:
```bash
docker ps
```
## Enable Redis Insights
Dapr uses [Redis](https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-redis/) by default for state management and pub/sub messaging, which are fundamental to Dapr Agents' agentic workflows. These capabilities enable the following:
* **Viewing Pub/Sub Messages**: Monitor and inspect messages exchanged between agents in event-driven workflows.
* **Inspecting State Information**: Access and analyze shared state data among agents.
* **Debugging and Monitoring Events**: Track workflow events in real time to ensure smooth operations and identify issues.
To make these insights more accessible, you can leverage Redis Insight.
```bash
docker run --rm -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
```
Once running, access the Redis Insight interface at `http://localhost:5540/`
### Connection Configuration
* Port: `6379`
* Host (Linux): `172.17.0.1`
* Host (Windows/macOS): `host.docker.internal` (e.g., `host.docker.internal:6379`)
Redis Insight makes it easy to visualize and manage the data powering your agentic workflows, ensuring efficient debugging, monitoring, and optimization.
![Redis Dashboard](/images/dapr-agents/home_installation_redis_dashboard.png)
## Using custom endpoints
### Azure hosted OpenAI endpoint
To use an Azure-hosted OpenAI model, you'll need the following `.env` file:
```env
AZURE_OPENAI_API_KEY=your_custom_key
AZURE_OPENAI_ENDPOINT=your_custom_endpoint
AZURE_OPENAI_DEPLOYMENT=your_custom_model
AZURE_OPENAI_API_VERSION="azure_openai_api_version"
```
**NB!** `AZURE_OPENAI_DEPLOYMENT` refers to the _model_, e.g., `gpt-4o`, while `AZURE_OPENAI_API_VERSION` has been tested to work with `2024-08-01-preview`.
Then instantiate the agent(s) as well as the orchestrator as follows:
```python
from dapr_agents import DurableAgent, OpenAIChatClient
from dotenv import load_dotenv
import asyncio
import logging
import os
async def main():
    llm = OpenAIChatClient(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
        azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
        api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    )
    try:
        elf_service = DurableAgent(
            name="Legolas",
            role="Elf",
            goal="Act as a scout, marksman, and protector, using keen senses and deadly accuracy to ensure the success of the journey.",
            instructions=[
                "Speak like Legolas, with grace, wisdom, and keen observation.",
                "Be swift, silent, and precise, moving effortlessly across any terrain.",
                "Use superior vision and heightened senses to scout ahead and detect threats.",
                "Excel in ranged combat, delivering pinpoint arrow strikes from great distances.",
                "Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
            ],
            llm=llm,  # Explicit reference to the OpenAIChatClient defined above
            message_bus_name="messagepubsub",
            state_store_name="workflowstatestore",
            state_key="workflow_state",
            agents_registry_store_name="agentstatestore",
            agents_registry_key="agents_registry",
        )
        ...
```
The above is taken from the [multi-agent quickstart](https://github.com/dapr/dapr-agents/blob/main/quickstarts/05-multi-agent-workflow-dapr-workflows/services/elf/app.py#L1-L23).

@@ -0,0 +1,21 @@
---
type: docs
title: "Introduction"
linkTitle: "Introduction"
weight: 10
description: "Overview of Dapr Agents and its key features"
---
Dapr Agents is a developer framework designed to build production-grade resilient AI agent systems that operate at scale. Built on top of the battle-tested Dapr project, it enables software developers to create AI agents that reason, act, and collaborate using Large Language Models (LLMs), while leveraging built-in observability and stateful workflow execution to guarantee agentic workflows complete successfully, no matter how complex.
![Dapr Agents Logo](/images/dapr-agents/logo-workflows.png)
## Key Features
- **Scale and Efficiency**: Run thousands of agents efficiently on a single core. Dapr distributes single and multi-agent apps transparently across fleets of machines and handles their lifecycle.
- **Workflow Resilience**: Automatically retries agentic workflows and ensures task completion.
- **Kubernetes-Native**: Easily deploy and manage agents in Kubernetes environments.
- **Data-Driven Agents**: Directly integrate with databases, documents, and unstructured data by connecting to dozens of different data sources.
- **Multi-Agent Systems**: Secure and observable by default, enabling collaboration between agents.
- **Vendor-Neutral & Open Source**: Avoid vendor lock-in and gain flexibility across cloud and on-premises deployments.
- **Platform-Ready**: Built-in RBAC, access scopes, and declarative resources enable platform teams to integrate Dapr Agents into their systems.

@@ -0,0 +1,227 @@
---
type: docs
title: "Quickstarts"
linkTitle: "Quickstarts"
weight: 55
description: "Get started with Dapr Agents through practical examples and tutorials"
---
# Dapr Agents Quickstarts
[Quickstarts](https://github.com/dapr/dapr-agents/tree/main/quickstarts) demonstrate how to use Dapr Agents to build applications with LLM-powered autonomous agents and event-driven workflows. Each quickstart builds upon the previous one, introducing new concepts incrementally.
{{% alert title="Note" color="info" %}}
Not all quickstarts require Docker, but it is recommended to have your [local Dapr environment set up]({{< ref "/developing-applications/dapr-agents/dapr-agents-getting-started.md" >}}) with Docker for the best development experience and to follow the steps in this guide seamlessly.
{{% /alert %}}
## Available Quickstarts
| Scenario | What You'll Learn |
|----------|-------------------|
| [Hello World](https://github.com/dapr/dapr-agents/tree/main/quickstarts/01-hello-world)<br>A rapid introduction that demonstrates core Dapr Agents concepts through simple, practical examples. | - **Basic LLM Usage**: Simple text generation with OpenAI models <br> - **Creating Agents**: Building agents with custom tools in under 20 lines of code <br> - **ReAct Pattern**: Implementing reasoning and action cycles in agents <br> - **Simple Workflows**: Setting up multi-step LLM processes |
| [LLM Call with Dapr Chat Client](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_dapr)<br>Explore interaction with Language Models through Dapr Agents' `DaprChatClient`, featuring basic text generation with plain text prompts and templates. | - **Text Completion**: Generating responses to prompts <br> - **Swapping LLM providers**: Switching LLM backends without application code change <br> - **Resilience**: Setting timeout, retry and circuit-breaking <br> - **PII Obfuscation**: Automatically detect and mask sensitive user information |
| [LLM Call with OpenAI Client](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_open_ai)<br>Discover how to leverage native LLM client libraries with Dapr Agents using the OpenAI Client for chat completion, audio processing, and embeddings. | - **Text Completion**: Generating responses to prompts <br> - **Structured Outputs**: Converting LLM responses to Pydantic objects <br><br> *Note: Other quickstarts for specific clients are available for [Elevenlabs](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_elevenlabs), [Hugging Face](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_hugging_face), and [Nvidia](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_nvidia).* |
| [Agent Tool Call](https://github.com/dapr/dapr-agents/tree/main/quickstarts/03-agent-tool-call)<br>Build your first AI agent with custom tools by creating a practical weather assistant that fetches information and performs actions. | - **Tool Definition**: Creating reusable tools with the `@tool` decorator <br> - **Agent Configuration**: Setting up agents with roles, goals, and tools <br> - **Function Calling**: Enabling LLMs to execute Python functions |
| [Agentic Workflow](https://github.com/dapr/dapr-agents/tree/main/quickstarts/04-agentic-workflow)<br>Dive into stateful workflows with Dapr Agents by orchestrating sequential and parallel tasks through powerful workflow capabilities. | - **LLM-powered Tasks**: Using language models in workflows <br> - **Task Chaining**: Creating resilient multi-step processes executing in sequence <br> - **Fan-out/Fan-in**: Executing activities in parallel; then synchronizing these activities until all preceding activities have completed |
| [Multi-Agent Workflows](https://github.com/dapr/dapr-agents/tree/main/quickstarts/05-multi-agent-workflow-dapr-workflows)<br>Explore advanced event-driven workflows featuring a Lord of the Rings themed multi-agent system where autonomous agents collaborate to solve problems. | - **Multi-agent Systems**: Creating a network of specialized agents <br> - **Event-driven Architecture**: Implementing pub/sub messaging between agents <br> - **Actor Model**: Using Dapr Actors for stateful agent management <br> - **Workflow Orchestration**: Coordinating agents through different selection strategies <br><br> *Note: To see Actor-based workflow see [Multi-Agent Actors](https://github.com/dapr/dapr-agents/tree/main/quickstarts/05-multi-agent-workflow-actors).* |
| [Multi-Agent Workflow on Kubernetes](https://github.com/dapr/dapr-agents/tree/main/quickstarts/07-k8s-multi-agent-workflow)<br>Run multi-agent workflows in Kubernetes, demonstrating deployment and orchestration of event-driven agent systems in a containerized environment. | - **Kubernetes Deployment**: Running agents on Kubernetes <br> - **Container Orchestration**: Managing agent lifecycles with K8s <br> - **Service Communication**: Inter-agent communication in K8s |
| [Document Agent with Chainlit](https://github.com/dapr/dapr-agents/tree/main/quickstarts/06-document-agent-chainlit)<br>Create a conversational agent with an operational UI that can upload and learn from unstructured documents while retaining long-term memory. | - **Conversational Document Agent**: Upload and converse over unstructured documents <br> - **Cloud Agnostic Storage**: Upload files to multiple storage providers <br> - **Conversation Memory Storage**: Persists conversation history using external storage. |
| [Data Agent with MCP and Chainlit](https://github.com/dapr/dapr-agents/tree/main/quickstarts/08-data-agent-mcp-chainlit)<br>Build a conversational agent over a Postgres database using the Model Context Protocol (MCP) with a ChatGPT-like interface. | - **Database Querying**: Natural language queries to relational databases <br> - **MCP Integration**: Connecting to databases without DB-specific code <br> - **Data Analysis**: Complex data analysis through conversation |
## Agentic Workflows
{{% alert title="Note" color="info" %}}
This quickstart requires `Dapr CLI` and `Docker`. You must have your [local Dapr environment set up]({{< ref "/developing-applications/dapr-agents/dapr-agents-getting-started.md" >}}).
{{% /alert %}}
Traditional workflows follow fixed, step-by-step processes, while autonomous agents make real-time decisions based on reasoning and available data. Agentic workflows combine the best of both approaches, integrating structured execution with reasoning loops to enable more adaptive decision-making.
This allows systems to analyze information, adjust to new conditions, and refine actions dynamically rather than strictly following a predefined sequence. By incorporating planning, feedback loops, and model-driven adjustments, agentic workflows provide both scalability and predictability while still allowing for autonomous adaptation.
In `Dapr Agents`, agentic workflows leverage LLM-based tasks, reasoning loop patterns, and an event-driven system powered by pub/sub messaging and a shared message bus. Agents operate autonomously, responding to events in real time, making decisions, and collaborating dynamically. This makes the system highly adaptable—agents can communicate, share tasks, and adjust based on new information, ensuring fluid coordination across distributed environments.
{{% alert title="Tip" color="primary" %}}
We will demonstrate this concept using the [Multi-Agent Workflow Guide](https://github.com/dapr/dapr-agents/tree/main/cookbook/workflows/multi_agents/basic_lotr_agents_as_workflows) from our Cookbook, which outlines a step-by-step guide to implementing a basic agentic workflow.
{{% /alert %}}
### Agents as Services: Dapr Workflows
In `Dapr Agents`, agents can be implemented as [Dapr Workflows](https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-overview/) and exposed as microservices via [FastAPI servers](https://docs.dapr.io/developing-applications/sdks/python/python-sdk-extensions/python-fastapi/).
#### Agents as Dapr Workflows (Orchestration, Complex Execution)
[Dapr Workflows](https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-overview/) define the structured execution of agent behaviors, reasoning loops, and tool selection. Workflows allow agents to:
✅ Define complex execution sequences instead of just reacting to events.
✅ Integrate with message buses to listen and act on real-time inputs.
✅ Orchestrate multi-step reasoning, retrieval-augmented generation (RAG), and tool use.
✅ Best suited for goal-driven, structured, and iterative decision-making workflows.
🚀 Dapr Agents uses Dapr Workflows for orchestration and complex multi-agent collaboration.
**Example: An Agent as a Dapr Workflow**
```python
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
    try:
        # Define the agent
        wizard_service = DurableAgent(
            name="Gandalf",
            role="Wizard",
            goal="Guide the Fellowship with wisdom and strategy, using magic and insight to ensure the downfall of Sauron.",
            instructions=[
                "Speak like Gandalf, with wisdom, patience, and a touch of mystery.",
                "Provide strategic counsel, always considering the long-term consequences of actions.",
                "Use magic sparingly, applying it when necessary to guide or protect.",
                "Encourage allies to find strength within themselves rather than relying solely on your power.",
                "Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
            ],
            message_bus_name="messagepubsub",
            state_store_name="agenticworkflowstate",
            state_key="workflow_state",
            agents_registry_store_name="agentsregistrystore",
            agents_registry_key="agents_registry",
        )
        await wizard_service.start()
    except Exception as e:
        print(f"Error starting service: {e}")

if __name__ == "__main__":
    load_dotenv()
    logging.basicConfig(level=logging.INFO)
    asyncio.run(main())
```
Here, `Gandalf` is a `DurableAgent` implemented as a workflow, meaning it executes structured reasoning, plans actions, and integrates tools within a managed workflow execution loop.
#### How We Use Dapr Workflows for Orchestration
In Dapr Agents, the orchestrator itself is a Dapr Workflow, which:
✅ Coordinates execution of agentic workflows (LLM-driven or rule-based).
✅ Delegates tasks to agents, which are themselves implemented as workflows.
✅ Manages reasoning loops, plan adaptation, and error handling dynamically.
🚀 The default LLM orchestrator is a Dapr Workflow that interacts with agent workflows.
**Example: The Orchestrator as a Dapr Workflow**
```python
from dapr_agents import LLMOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
    try:
        agentic_orchestrator = LLMOrchestrator(
            name="Orchestrator",
            message_bus_name="messagepubsub",
            state_store_name="agenticworkflowstate",
            state_key="workflow_state",
            agents_registry_store_name="agentsregistrystore",
            agents_registry_key="agents_registry",
            max_iterations=25,
        ).as_service(port=8009)
        await agentic_orchestrator.start()
    except Exception as e:
        print(f"Error starting service: {e}")

if __name__ == "__main__":
    load_dotenv()
    logging.basicConfig(level=logging.INFO)
    asyncio.run(main())
```
This orchestrator acts as a central controller, ensuring that agentic workflows communicate effectively, execute tasks in order, and handle iterative reasoning loops.
### Structuring A Multi-Agent Project
Structuring such a project is straightforward. We organize the services into a directory that contains individual folders for each agent, along with a `components` directory for Dapr resource configurations. Each agent service includes its own `app.py` file, where the FastAPI server and the agent logic are defined.
```
dapr.yaml # Dapr main config file
components/ # Dapr resource files
├── statestore.yaml # State store configuration
├── pubsub.yaml # Pub/Sub configuration
└── ... # Other Dapr components
services/ # Directory for agent services
├── agent1/ # First agent's service
│ ├── app.py # FastAPI app for agent1
│ └── ... # Additional agent1 files
│── agent2/ # Second agent's service
│ ├── app.py # FastAPI app for agent2
│ └── ... # Additional agent2 files
└── ... # More agents
```
### Set Up an Environment Variables File
This example uses our default `LLM Orchestrator`. Therefore, you have to create an `.env` file to securely store your inference service (e.g., OpenAI) API keys and other sensitive information. For example:
```
OPENAI_API_KEY="your-api-key"
OPENAI_BASE_URL="https://api.openai.com/v1"
```
### Define Your First Agent Service
Let's start by defining a `Hobbit` service with a specific `name`, `role`, `goal` and `instructions`.
```
services/ # Directory for agent services
├── hobbit/ # Hobbit Service
│ ├── app.py # Dapr Enabled FastAPI app for Hobbit
```
Create the `app.py` script and provide the following information.
```python
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
    try:
        # Define the agent and expose it as a service
        hobbit_service = DurableAgent(
            role="Hobbit",
            name="Frodo",
            goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
            instructions=[
                "Speak like Frodo, with humility, determination, and a growing sense of resolve.",
                "Endure hardships and temptations, staying true to the mission even when faced with doubt.",
                "Seek guidance and trust allies, but bear the ultimate burden alone when necessary.",
                "Move carefully through enemy-infested lands, avoiding unnecessary risks.",
                "Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
            ],
            message_bus_name="messagepubsub",
            agents_registry_store_name="agentsregistrystore",
            agents_registry_key="agents_registry",
        ).as_service(port=8001)
        await hobbit_service.start()
    except Exception as e:
        print(f"Error starting service: {e}")

if __name__ == "__main__":
    load_dotenv()
    logging.basicConfig(level=logging.INFO)
    asyncio.run(main())
```
Now, you can define multiple services following this format, but it's essential to pay attention to key areas to ensure everything runs smoothly. Specifically, focus on correctly configuring the components (e.g., `statestore` and `pubsub` names) and incrementing the ports for each service.
**Key Considerations:**
* Ensure the `message_bus_name` matches the `pub/sub` component name in your `pubsub.yaml` file.
* Verify the `agents_registry_store_name` matches the state store component defined in your components directory (e.g., `statestore.yaml`).
* Increment the service port for each new agent service (e.g., 8001, 8002, 8003).

@@ -0,0 +1,316 @@
---
type: docs
title: "Tools"
linkTitle: "Tools"
weight: 50
description: "Various tools and integrations available in Dapr Agents"
---
# Tools
## Text Splitter
The Text Splitter module is a foundational tool in `Dapr Agents` designed to preprocess documents for use in [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) workflows and other `in-context learning` applications. Its primary purpose is to break large documents into smaller, meaningful chunks that can be embedded, indexed, and efficiently retrieved based on user queries.
By focusing on manageable chunk sizes and preserving contextual integrity through overlaps, the Text Splitter ensures documents are processed in a way that supports downstream tasks like question answering, summarization, and document retrieval.
### Why Use a Text Splitter?
When building RAG pipelines, splitting text into smaller chunks serves these key purposes:
* **Enabling Effective Indexing**: Chunks are embedded and stored in a vector database, making them retrievable based on similarity to user queries.
* **Maintaining Semantic Coherence**: Overlapping chunks help retain context across splits, ensuring the system can connect related pieces of information.
* **Handling Model Limitations**: Many models have input size limits. Splitting ensures text fits within these constraints while remaining meaningful.
This step is crucial for preparing knowledge to be embedded into a searchable format, forming the backbone of retrieval-based workflows.
### Strategies for Text Splitting
The Text Splitter supports multiple strategies to handle different types of documents effectively. These strategies balance the size of each chunk with the need to maintain context.
#### 1. Character-Based Length
* **How It Works**: Counts the number of characters in each chunk.
* **Use Case**: Simple and effective for text splitting without dependency on external tokenization tools.
Example:
```python
from dapr_agents.document.splitter.text import TextSplitter
# Character-based splitter (default)
splitter = TextSplitter(chunk_size=1024, chunk_overlap=200)
```
#### 2. Token-Based Length
* **How It Works**: Counts tokens, which are the semantic units used by language models (e.g., words or subwords).
* **Use Case**: Ensures compatibility with models like GPT, where token limits are critical.
**Example**:
```python
import tiktoken
from dapr_agents.document.splitter.text import TextSplitter
enc = tiktoken.get_encoding("cl100k_base")
def length_function(text: str) -> int:
    return len(enc.encode(text))

splitter = TextSplitter(
    chunk_size=1024,
    chunk_overlap=200,
    chunk_size_function=length_function,
)
```
The flexibility to define the chunk size function makes the Text Splitter adaptable to various scenarios.
### Chunk Overlap
To preserve context, the Text Splitter includes a chunk overlap feature. This ensures that parts of one chunk carry over into the next, helping maintain continuity when chunks are processed sequentially.
Example:
* With `chunk_size=1024` and `chunk_overlap=200`, the last `200` tokens or characters of one chunk appear at the start of the next.
* This design helps in tasks like text generation, where maintaining context across chunks is essential.
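The effect is easy to see with a toy character-based example:
```python
# Toy illustration: chunk_size=10, chunk_overlap=4 (characters).
# Consecutive chunks share their boundary text, preserving context.
text = "abcdefghijklmnopqrstuvwxyz"
chunk_size, overlap = 10, 4

chunks, start = [], 0
while start < len(text):
    chunks.append(text[start:start + chunk_size])
    if start + chunk_size >= len(text):
        break
    start += chunk_size - overlap

print(chunks)  # ['abcdefghij', 'ghijklmnop', 'mnopqrstuv', 'stuvwxyz']
```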
### How to Use the Text Splitter
Here's a practical example of using the Text Splitter to process a PDF document:
#### Step 1: Load a PDF
```python
import requests
from pathlib import Path
# Download PDF
pdf_url = "https://arxiv.org/pdf/2412.05265.pdf"
local_pdf_path = Path("arxiv_paper.pdf")
if not local_pdf_path.exists():
    response = requests.get(pdf_url)
    response.raise_for_status()
    with open(local_pdf_path, "wb") as pdf_file:
        pdf_file.write(response.content)
```
#### Step 2: Read the Document
For this example, we use Dapr Agents' `PyPDFReader`.
{{% alert title="Note" color="info" %}}
The PyPDF Reader relies on the [pypdf python library](https://pypi.org/project/pypdf/), which is not included in the Dapr Agents core module. This design choice helps maintain modularity and avoids adding unnecessary dependencies for users who may not require this functionality. To use the PyPDF Reader, ensure that you install the library separately.
{{% /alert %}}
```bash
pip install pypdf
```
Then, initialize the reader to load the PDF file.
```python
from dapr_agents.document.reader.pdf.pypdf import PyPDFReader
reader = PyPDFReader()
documents = reader.load(local_pdf_path)
```
#### Step 3: Split the Document
```python
splitter = TextSplitter(
    chunk_size=1024,
    chunk_overlap=200,
    chunk_size_function=length_function,
)
chunked_documents = splitter.split_documents(documents)
```
#### Step 4: Analyze Results
```python
print(f"Original document pages: {len(documents)}")
print(f"Total chunks: {len(chunked_documents)}")
print(f"First chunk: {chunked_documents[0]}")
```
### Key Features
* **Hierarchical Splitting**: Splits text by separators (e.g., paragraphs), then refines chunks further if needed.
* **Customizable Chunk Size**: Supports character-based and token-based length functions.
* **Overlap for Context**: Retains portions of one chunk in the next to maintain continuity.
* **Metadata Preservation**: Each chunk retains metadata like page numbers and start/end indices for easier mapping.
By understanding and leveraging the `Text Splitter`, you can preprocess large documents effectively, ensuring they are ready for embedding, indexing, and retrieval in advanced workflows like RAG pipelines.
## Arxiv Fetcher
The Arxiv Fetcher module in `Dapr Agents` provides a powerful interface to interact with the [arXiv API](https://info.arxiv.org/help/api/index.html). It is designed to help users programmatically search for, retrieve, and download scientific papers from arXiv. With advanced querying capabilities, metadata extraction, and support for downloading PDF files, the Arxiv Fetcher is ideal for researchers, developers, and teams working with academic literature.
### Why Use the Arxiv Fetcher?
The Arxiv Fetcher simplifies the process of accessing research papers, offering features like:
* **Automated Literature Search**: Query arXiv for specific topics, keywords, or authors.
* **Metadata Retrieval**: Extract structured metadata, such as titles, abstracts, authors, categories, and submission dates.
* **Precise Filtering**: Limit search results by date ranges (e.g., retrieve the latest research in a field).
* **PDF Downloading**: Fetch full-text PDFs of papers for offline use.
### How to Use the Arxiv Fetcher
#### Step 1: Install Required Modules
{{% alert title="Note" color="info" %}}
The Arxiv Fetcher relies on a [lightweight Python wrapper](https://github.com/lukasschwab/arxiv.py) for the arXiv API, which is not included in the Dapr Agents core module. This design choice helps maintain modularity and avoids adding unnecessary dependencies for users who may not require this functionality. To use the Arxiv Fetcher, ensure you install the [library](https://pypi.org/project/arxiv/) separately.
{{% /alert %}}
```bash
pip install arxiv
```
#### Step 2: Initialize the Fetcher
Set up the `ArxivFetcher` to begin interacting with the arXiv API.
```python
from dapr_agents.document import ArxivFetcher
# Initialize the fetcher
fetcher = ArxivFetcher()
```
#### Step 3: Perform Searches
**Basic Search by Query String**
Search for papers using simple keywords. The results are returned as `Document` objects, each containing:
* `text`: The abstract of the paper.
* `metadata`: Structured metadata such as title, authors, categories, and submission dates.
```python
# Search for papers related to "machine learning"
results = fetcher.search(query="machine learning", max_results=5)
# Display metadata and summaries
for doc in results:
    print(f"Title: {doc.metadata['title']}")
    print(f"Authors: {', '.join(doc.metadata['authors'])}")
    print(f"Summary: {doc.text}\n")
```
**Advanced Querying**
Refine searches using logical operators like AND, OR, and NOT or perform field-specific searches, such as by author.
Examples:
Search for papers on "agents" and "cybersecurity":
```python
results = fetcher.search(query="all:(agents AND cybersecurity)", max_results=10)
```
Exclude specific terms (e.g., "quantum" but not "computing"):
```python
results = fetcher.search(query="all:(quantum NOT computing)", max_results=10)
```
Search for papers by a specific author:
```python
results = fetcher.search(query='au:"John Doe"', max_results=10)
```
**Filter Papers by Date**
Limit search results to a specific time range, such as papers submitted in the last 24 hours.
```python
from datetime import datetime, timedelta
# Calculate the date range
last_24_hours = (datetime.now() - timedelta(days=1)).strftime("%Y%m%d")
today = datetime.now().strftime("%Y%m%d")
# Search for recent papers
recent_results = fetcher.search(
    query="all:(agents AND cybersecurity)",
    from_date=last_24_hours,
    to_date=today,
    max_results=5,
)

# Display metadata
for doc in recent_results:
    print(f"Title: {doc.metadata['title']}")
    print(f"Authors: {', '.join(doc.metadata['authors'])}")
    print(f"Published: {doc.metadata['published']}")
    print(f"Summary: {doc.text}\n")
```
#### Step 4: Download PDFs
Fetch the full-text PDFs of papers for offline use. Metadata is preserved alongside the downloaded files.
```python
import os
from pathlib import Path
# Create a directory for downloads
os.makedirs("arxiv_papers", exist_ok=True)
# Download PDFs
download_results = fetcher.search(
    query="all:(agents AND cybersecurity)",
    max_results=5,
    download=True,
    dirpath=Path("arxiv_papers"),
)

for paper in download_results:
    print(f"Downloaded Paper: {paper['title']}")
    print(f"File Path: {paper['file_path']}\n")
```
#### Step 5: Extract and Process PDF Content
Use `PyPDFReader` from `Dapr Agents` to extract content from downloaded PDFs. Each page is treated as a separate Document object with metadata.
```python
from pathlib import Path
from dapr_agents.document import PyPDFReader
reader = PyPDFReader()
docs_read = []
for paper in download_results:
    local_pdf_path = Path(paper["file_path"])
    documents = reader.load(local_pdf_path, additional_metadata=paper)
    docs_read.extend(documents)
# Verify results
print(f"Extracted {len(docs_read)} documents.")
print(f"First document text: {docs_read[0].text}")
print(f"Metadata: {docs_read[0].metadata}")
```
### Practical Applications
The Arxiv Fetcher enables various use cases for researchers and developers:
* **Literature Reviews**: Quickly retrieve and organize relevant papers on a given topic or by a specific author.
* **Trend Analysis**: Identify the latest research in a domain by filtering for recent submissions.
* **Offline Research Workflows**: Download and process PDFs for local analysis and archiving.
### Next Steps
While the Arxiv Fetcher provides robust functionality for retrieving and processing research papers, its output can be integrated into advanced workflows:
* **Building a Searchable Knowledge Base**: Combine fetched papers with tools like text splitting and vector embeddings for advanced search capabilities.
* **Retrieval-Augmented Generation (RAG)**: Use processed papers as inputs for RAG pipelines to power question-answering systems.
* **Automated Literature Surveys**: Generate summaries or insights based on the fetched and processed research.

@@ -0,0 +1,71 @@
---
type: docs
title: "Why Dapr Agents"
linkTitle: "Why Dapr Agents"
weight: 25
description: "Understanding the benefits and use cases for Dapr Agents"
---
Dapr Agents is an open-source framework for building and orchestrating LLM-based autonomous agents, designed to simplify the complexity of creating scalable agentic workflows and microservices. Inspired by the growing need for frameworks that integrate seamlessly with distributed systems, Dapr Agents enables developers to focus on designing intelligent agents without getting bogged down by infrastructure concerns.
### The Problem
Many agentic frameworks today attempt to redefine how microservices are built and orchestrated by developing their own platforms for workflows, Pub/Sub messaging, state management, and service communication. While these efforts showcase innovation, they often lead to a steep learning curve, fragmented systems, and unnecessary complexity when scaling or adapting to new environments.
Many of these frameworks require developers to adopt entirely new paradigms or recreate foundational infrastructure, rather than building on existing solutions that are proven to handle these challenges at scale. This added complexity often diverts focus from the primary goal: designing and implementing intelligent, effective agents.
### Dapr Agents' Approach
Dapr Agents takes a distinct approach by building on [Dapr](https://dapr.io/), a portable and event-driven runtime optimized for distributed systems. Dapr offers built-in APIs and patterns, such as state management, Pub/Sub messaging, service invocation, and virtual actors, that eliminate the need to recreate foundational components from scratch. By integrating seamlessly with Dapr, Dapr Agents empowers developers to focus on the intelligence and behavior of LLM-powered agents while leveraging a proven framework for scalability and reliability.
Rather than reinventing microservices, Dapr Agents enables developers to design, test, and deploy agents that seamlessly integrate as collaborative services within larger systems. Whether experimenting with a single agent or orchestrating workflows involving multiple agents, Dapr Agents simplifies the exploration and implementation of scalable agentic workflows.
## Dapr Agents Benefits
### Scalable Workflows as First-Class Citizens
Dapr Agents uses a [durable-execution workflow engine](https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-overview/) that guarantees each agent task executes to completion despite network interruptions, node crashes, and other disruptive failures. Developers do not need to understand the underlying workflow engine concepts: they simply write an agent that performs any number of tasks, and these are automatically distributed across the cluster. If any task fails, it is retried and resumes from the state where it left off.
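To make this concrete, here is a minimal sketch of a durable workflow using Dapr's Python workflow SDK (`dapr.ext.workflow`), which Dapr Agents builds on. The workflow and activity names and payloads are hypothetical, not part of the Dapr Agents API:

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.workflow(name="research_workflow")
def research_workflow(ctx: wf.DaprWorkflowContext, topic: str):
    # Each yield is a durable checkpoint: if the process crashes here,
    # the workflow replays to this point and continues without
    # re-running activities that already completed.
    papers = yield ctx.call_activity(fetch_papers, input=topic)
    summary = yield ctx.call_activity(summarize, input=papers)
    return summary

@wfr.activity(name="fetch_papers")
def fetch_papers(ctx: wf.WorkflowActivityContext, topic: str) -> list:
    return [f"Paper about {topic}"]  # placeholder for a real fetch step

@wfr.activity(name="summarize")
def summarize(ctx: wf.WorkflowActivityContext, papers: list) -> str:
    return f"Summarized {len(papers)} paper(s)"  # placeholder for an LLM call
```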
### Cost-Effective AI Adoption
Dapr Agents builds on top of Dapr's Workflow API, which under the hood represents each agent as an actor, a single unit of compute and state that is thread-safe and natively distributed, lending itself well to an agentic scale-to-zero architecture. This minimizes infrastructure costs, making AI adoption accessible to everyone. The underlying virtual actor model allows thousands of agents to run on demand on a single-core machine with double-digit-millisecond latency when scaling from zero. When unused, agents are reclaimed by the system but retain their state until the next time they are needed. With this design, there is no trade-off between performance and resource efficiency.
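As a rough sketch of this model using the Dapr Python actor SDK (the agent logic below is hypothetical, not Dapr Agents' internal implementation), each agent maps to a virtual actor that is activated on demand and keeps its state in the configured state store:

```python
from dapr.actor import Actor, ActorInterface, actormethod

class AgentActorInterface(ActorInterface):
    @actormethod(name="Chat")
    async def chat(self, message: str) -> str:
        ...

class AgentActor(Actor, AgentActorInterface):
    # One agent instance per actor ID: activated on demand, reclaimed
    # when idle, with its state persisted between activations.
    async def chat(self, message: str) -> str:
        found, history = await self._state_manager.try_get_state("history")
        history = history if found else []
        history.append(message)
        await self._state_manager.set_state("history", history)
        await self._state_manager.save_state()
        return f"Seen {len(history)} message(s) so far"
```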
### Data-Centric AI Agents
With built-in connectivity to over 50 enterprise data sources, Dapr Agents efficiently handles structured and unstructured data. From basic [PDF extraction]({{< ref "/developing-applications/dapr-agents/dapr-agents-tools.md" >}}) to large-scale database interactions, it enables seamless data-driven AI workflows with minimal code changes. Dapr's [bindings](https://docs.dapr.io/reference/components-reference/supported-bindings/) and [state stores](https://docs.dapr.io/reference/components-reference/supported-state-stores/), along with [MCP](https://modelcontextprotocol.io/) support, provide access to a large number of data sources that can be used to ingest data to an agent.
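For example, an agent could pull a document through a Dapr binding via the Python client. The binding name, operation, and metadata below are hypothetical and depend on the component you configure:

```python
from dapr.clients import DaprClient

with DaprClient() as client:
    # "papers-store" is a hypothetical S3 (or similar) binding component;
    # the "get" operation and "key" metadata follow that binding's contract.
    resp = client.invoke_binding(
        binding_name="papers-store",
        operation="get",
        binding_metadata={"key": "arxiv/example-paper.pdf"},
    )
    pdf_bytes = resp.data  # raw bytes, ready for PDF extraction
```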
### Accelerated Development
Dapr Agents provides a set of AI features that give developers a complete API surface to tackle common problems. Some of these include (see the sketch after this list):
- Multi-agent communications
- Structured outputs
- Multiple LLM providers
- Contextual memory
- Flexible prompting
- Intelligent tool selection
- [MCP integration](https://docs.anthropic.com/en/docs/agents-and-tools/mcp)
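The sketch below follows the agent-and-tool pattern from the Dapr Agents quickstarts and shows a few of these features together. The tool body and agent configuration are illustrative, and exact class and parameter names may vary between releases:

```python
from dapr_agents import Agent, tool

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"It is sunny in {city}."  # placeholder for a real lookup

weather_agent = Agent(
    name="WeatherAgent",
    role="Weather Assistant",
    instructions=["Answer weather questions using the available tools."],
    tools=[get_weather],
)

# The agent selects the right tool and formulates a reply via the configured LLM:
# result = await weather_agent.run("What's the weather in Berlin?")
```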
### Integrated Security and Reliability
By building on top of Dapr, platform and infrastructure teams can apply Dapr's [resiliency policies](https://docs.dapr.io/operations/resiliency/policies/) to the database and/or message broker of their choice used by Dapr Agents. These policies include timeouts, retries with back-off, and circuit breakers. On the security side, Dapr can scope access to a given database or message broker to one or more agentic app deployments. In addition, Dapr Agents uses mTLS to encrypt the communication layer of its underlying components.
### Built-in Messaging and State Infrastructure
- **Service-to-Service Invocation**: Facilitates direct communication between agents with built-in service discovery, error handling, and distributed tracing. Agents can leverage this for synchronous messaging in multi-agent workflows.
- **Publish and Subscribe**: Supports loosely coupled collaboration between agents through a shared message bus. This enables real-time, event-driven interactions critical for task distribution and coordination.
- **Durable Workflow**: Defines long-running, persistent workflows that combine deterministic processes with LLM-based decision-making. Dapr Agents uses this to orchestrate complex multi-step agentic workflows seamlessly.
- **State Management**: Provides a flexible key-value store for agents to retain context across interactions, ensuring continuity and adaptability during workflows.
- **Actors**: Implements the Virtual Actor pattern, allowing agents to operate as self-contained, stateful units that handle messages sequentially. This eliminates concurrency concerns and enhances scalability in agentic systems.
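Outside of Dapr Agents' higher-level abstractions, the same building blocks are reachable directly from the Dapr Python client. The store, topic, and app IDs below are hypothetical:

```python
from dapr.clients import DaprClient

with DaprClient() as client:
    # State management: persist agent context between interactions
    client.save_state(
        store_name="statestore",
        key="agent:planner:context",
        value='{"last_topic": "arxiv search"}',
    )

    # Pub/Sub: hand a task to collaborating agents over a shared bus
    client.publish_event(
        pubsub_name="pubsub",
        topic_name="agent-tasks",
        data='{"task": "summarize"}',
        data_content_type="application/json",
    )

    # Service invocation: call another agent service synchronously
    resp = client.invoke_method(
        app_id="summarizer-agent",
        method_name="summarize",
        data='{"text": "..."}',
        http_verb="POST",
    )
```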
### Vendor-Neutral and Open Source
As part of the **CNCF**, Dapr Agents is vendor-neutral, eliminating concerns about lock-in, intellectual property risks, or proprietary restrictions. Organizations gain full flexibility and control over their AI applications using open-source software they can audit and contribute to.

48 binary image files added (previews not shown).