More testing coverage (more quickstarts) (#25)

* Completes all OpenAI LLM calls quickstarts

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Completes all OpenAI LLM calls quickstarts

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* More tests for multi-agent actor workflows (adds LLM and random orchestrators)

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Adds NVIDIA tests

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Adds Dapr LLM tests

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Re-adds the diff workflows

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Fixes orchestrator names

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Multi-agent workflows - agents as Dapr workflows

Signed-off-by: Elena Kolevska <elena@kolevska.com>

not needed

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Adds a GitHub workflow to validate quickstarts

Signed-off-by: Elena Kolevska <elena@kolevska.com>

* Adds ElevenLabs LLM test

Signed-off-by: Elena Kolevska <elena@kolevska.com>

---------

Signed-off-by: Elena Kolevska <elena@kolevska.com>
Elena Kolevska, 2025-03-10 16:24:00 +00:00, committed by GitHub
parent 04ddf1acbd
commit 6471baa784
62 changed files with 1785 additions and 34 deletions

.github/workflows/e2e-tests.yaml vendored Normal file
View File

@ -0,0 +1,42 @@
#
# Copyright 2025 The Dapr Authors
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
name: E2E Tests
on:
workflow_dispatch:
jobs:
test:
name: Run tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.11
- name: Install dependencies
run: |
python -m pip install --upgrade pip
- name: Validate quickstarts
env:
OPENAI_BASE_URL: "https://api.openai.com/v1"
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
ELEVENLABS_API_KEY: ${{ secrets.ELEVENLABS_API_KEY }}
HUGGINGFACE_API_KEY: ${{ secrets.HUGGINGFACE_API_KEY }}
run: |
make validate-quickstarts
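The `validate-quickstarts` Makefile target itself is not part of this commit. A plausible sketch of what it drives, assuming the quickstart READMEs are executed with a step runner such as `mechanical-markdown` (the tool Dapr projects commonly use to run `<!-- STEP -->` annotations; the layout and CLI below are assumptions):

```python
# Illustrative sketch only -- the real Makefile target is not shown here.
# It presumably walks the quickstart READMEs and executes their STEP
# annotations, asserting on expected_stdout_lines.
import pathlib
import subprocess
import sys

failed = []
for readme in sorted(pathlib.Path("quickstarts").glob("*/README.md")):  # hypothetical layout
    print(f"Validating {readme}")
    if subprocess.run(["mm.py", str(readme)]).returncode != 0:  # mechanical-markdown CLI (assumed)
        failed.append(str(readme))

sys.exit(1 if failed else 0)
```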

View File

@ -6,7 +6,7 @@
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implieh.
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

.gitignore vendored
View File

@ -163,3 +163,4 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
.idea

View File

@ -116,6 +116,3 @@ pip install mkdocs-jupyter
```bash
mkdocs serve
```
## Acknowledgments
Dapr Agents was born out of a desire to explore and learn more about [Dapr](https://dapr.io/) and its potential for building agentic systems. I wanted to understand how to deploy agents as services, manage message communication, and connect various components effectively. Along the way, I looked to several established frameworks for ideas and guidance, which helped shape my thinking and approach:

View File

@ -255,4 +255,4 @@ In the later quickstarts, you'll see explicit Dapr integration through state sto
## Next Steps
After completing these examples, move on to the [LLM Call quickstart](../02-llm-call) to learn more about structured outputs from LLMs.
After completing these examples, move on to the [LLM Call quickstart](../02_llm_call_open_ai) to learn more about structured outputs from LLMs.

View File

@ -1,12 +0,0 @@
from dapr_agents import OpenAIChatClient
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Initialize the chat client and call
llm = OpenAIChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())

View File

@ -0,0 +1,77 @@
# Dapr LLM calls with Dapr Agents
This quickstart demonstrates how to use Dapr Agents' `DaprChatClient` to interact with language models through Dapr's conversation API. You'll learn how to make basic calls to LLMs via a pluggable Dapr conversation component.
## Prerequisites
- Python 3.10 (recommended)
- pip package manager
## Environment Setup
```bash
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
```
## Examples
### Text
**1. Run the basic text completion example:**
<!-- STEP
name: Run text completion example
expected_stdout_lines:
- "Response:"
- "Response with prompty:"
- "Response with user input:"
timeout_seconds: 30
output_match_mode: substring
-->
```bash
dapr run --app-id daprllm --resources-path components/ -- python text_completion.py
```
<!-- END_STEP -->
The script demonstrates basic usage of the DaprChatClient for text generation:
```python
import os
from dapr_agents.llm import DaprChatClient
from dapr_agents.types import UserMessage
os.environ['DAPR_LLM_COMPONENT_DEFAULT'] = 'echo'
# Basic chat completion
llm = DaprChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())
# Chat completion using a prompty file for context
llm = DaprChatClient.from_prompty('basic.prompty')
response = llm.generate(input_data={"question":"What is your name?"})
if len(response.get_content()) > 0:
print("Response with prompty: ", response.get_content())
# Chat completion with user input
llm = DaprChatClient()
response = llm.generate(messages=[UserMessage("hello")])
if len(response.get_content()) > 0 and "hello" in response.get_content().lower():
print("Response with user input: ", response.get_content())
```
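The `DAPR_LLM_COMPONENT_DEFAULT` environment variable selects which Dapr conversation component the client talks to; `echo` refers to the component defined in `components/echo.yaml`. A minimal sketch of pointing the client at a different conversation component (the component name below is hypothetical):

```python
import os
from dapr_agents.llm import DaprChatClient

# Hypothetical component name -- any conversation component registered with
# the Dapr sidecar under components/ can be selected this way.
os.environ['DAPR_LLM_COMPONENT_DEFAULT'] = 'my-llm-component'
llm = DaprChatClient()
```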

View File

@ -0,0 +1,23 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: nvidia
name: meta/llama3-8b-instruct
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -0,0 +1,7 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: echo
spec:
type: conversation.echo
version: v1

View File

@ -0,0 +1 @@
dapr-agents==0.1.dev26

View File

@ -0,0 +1,28 @@
import os
from dapr_agents.llm import DaprChatClient
from dapr_agents.types import UserMessage
os.environ['DAPR_LLM_COMPONENT_DEFAULT'] = 'echo'
# Basic chat completion
llm = DaprChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())
# Chat completion using a prompty file for context
llm = DaprChatClient.from_prompty('basic.prompty')
response = llm.generate(input_data={"question":"What is your name?"})
if len(response.get_content()) > 0:
print("Response with prompty: ", response.get_content())
# Chat completion with user input
llm = DaprChatClient()
response = llm.generate(messages=[UserMessage("hello")])
if len(response.get_content()) > 0 and "hello" in response.get_content().lower():
print("Response with user input: ", response.get_content())

View File

@ -0,0 +1,85 @@
# ElevenLabs LLM calls with Dapr Agents
This quickstart demonstrates how to use Dapr Agents' speech capabilities with ElevenLabs. You'll learn how to generate speech from text and save it as an MP3 file.
## Prerequisites
- Python 3.10 (recommended)
- pip package manager
- ElevenLabs API key
## Environment Setup
```bash
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
```
## Configuration
Create a `.env` file in the project root:
```env
ELEVENLABS_API_KEY=your_api_key_here
```
Replace `your_api_key_here` with your actual ElevenLabs API key.
## Examples
### Audio
You can use the `ElevenLabsSpeechClient` in `dapr-agents` to generate speech from text with the ElevenLabs API and save it as an MP3 file.
**1. Run the text to speech example:**
<!-- STEP
name: Run audio generation example
expected_stdout_lines:
- "Audio saved to output_speech.mp3"
- "File output_speech.mp3 has been deleted."
-->
```bash
python text_to_speech.py
```
<!-- END_STEP -->
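The script creates speech with the `ElevenLabsSpeechClient` (abridged from `text_to_speech.py` in this quickstart):

```python
from dapr_agents import ElevenLabsSpeechClient
from dotenv import load_dotenv

load_dotenv()

client = ElevenLabsSpeechClient(
    model="eleven_multilingual_v2",  # Default model
    voice="JBFqnCBsd6RMkjVDRZzb"     # 'George': a British, middle-aged male narration voice
)

# Convert text to speech; returns MP3 bytes (44.1kHz at 128kbps by default)
audio_bytes = client.create_speech(
    text="Dapr Agents is an open-source framework for researchers and developers",
    output_format="mp3_44100_128"
)

with open("output_speech.mp3", "wb") as audio_file:
    audio_file.write(audio_bytes)
```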
## Key Concepts
- **ElevenLabsSpeechClient**: The interface for interacting with ElevenLabs' text-to-speech models
- **create_speech()**: The method for converting text to audio bytes
- **output_format**: Controls the encoding of the generated audio (MP3 at 44.1kHz/128kbps by default)
## Dapr Integration
While these examples don't explicitly use Dapr's distributed capabilities, Dapr Agents provides:
- **Unified API**: Consistent interfaces for different LLM providers
- **Type Safety**: Structured data extraction and validation
- **Integration Path**: Foundation for building more complex, distributed LLM applications
In later quickstarts, you'll see how these LLM interactions integrate with Dapr's building blocks.
## Troubleshooting
1. **Authentication Errors**: If you encounter authentication failures, check your ElevenLabs API key in the `.env` file
2. **Audio Output Errors**: If speech generation fails, verify the model and voice IDs passed to the client
3. **Module Not Found**: Ensure you've activated your virtual environment and installed the requirements
## Next Steps
After completing these examples, move on to the [Agent Tool Call quickstart](../03-agent-tool-call) to learn how to build agents that can use tools to interact with external systems.

View File

@ -1,2 +1,3 @@
dapr-agents==0.1.dev26
python-dotenv
tiktoken

View File

@ -0,0 +1,41 @@
import os
from dapr_agents.types.llm import AudioSpeechRequest
from dapr_agents import ElevenLabsSpeechClient
from dotenv import load_dotenv
load_dotenv()
client = ElevenLabsSpeechClient(
model="eleven_multilingual_v2", # Default model
voice="JBFqnCBsd6RMkjVDRZzb" # 'name': 'George', 'language': 'en', 'labels': {'accent': 'British', 'description': 'warm', 'age': 'middle aged', 'gender': 'male', 'use_case': 'narration'}
)
# Define the text to convert to speech
text = "Dapr Agents is an open-source framework for researchers and developers"
# Create speech from text
audio_bytes = client.create_speech(
text=text,
output_format="mp3_44100_128" # default output format, mp3 with 44.1kHz sample rate at 128kbps.
)
# You can also automatically create the audio file by passing the file name as an argument
# client.create_speech(
# text=text,
# output_format="mp3_44100_128", # default output format, mp3 with 44.1kHz sample rate at 128kbps.,
# file_name='output_speech_auto.mp3'
# )
# Save the audio to an MP3 file
output_path = "output_speech.mp3"
with open(output_path, "wb") as audio_file:
audio_file.write(audio_bytes)
print(f"Audio saved to {output_path}")
os.remove(output_path)
print(f"File {output_path} has been deleted.")

View File

@ -0,0 +1,158 @@
# NVIDIA LLM calls with Dapr Agents
This quickstart demonstrates how to use Dapr Agents' LLM capabilities to interact with language models and generate both free-form text and structured data. You'll learn how to make basic calls to LLMs and how to extract structured information in a type-safe manner.
## Prerequisites
- Python 3.10 (recommended)
- pip package manager
- NVIDIA API key
## Environment Setup
```bash
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
```
## Configuration
Create a `.env` file in the project root:
```env
NVIDIA_API_KEY=your_api_key_here
```
Replace `your_api_key_here` with your actual NVIDIA API key.
## Examples
### Text
**1. Run the basic text completion example:**
<!-- STEP
name: Run text completion example
expected_stdout_lines:
- "Response:"
- "Response with prompty:"
- "Response with user input:"
timeout_seconds: 30
output_match_mode: substring
-->
```bash
python text_completion.py
```
<!-- END_STEP -->
The script demonstrates basic usage of Dapr Agents' NVIDIAChatClient for text generation:
```python
from dapr_agents import NVIDIAChatClient
from dapr_agents.types import UserMessage
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Basic chat completion
llm = NVIDIAChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())
# Chat completion using a prompty file for context
llm = NVIDIAChatClient.from_prompty('basic.prompty')
response = llm.generate(input_data={"question":"What is your name?"})
if len(response.get_content()) > 0:
print("Response with prompty: ", response.get_content())
# Chat completion with user input
llm = NVIDIAChatClient()
response = llm.generate(messages=[UserMessage("hello")])
if len(response.get_content()) > 0 and "hello" in response.get_content().lower():
print("Response with user input: ", response.get_content())
```
**2. Expected output:** The LLM will respond with the name of a famous dog (e.g., "Lassie", "Hachiko", etc.).
**Run the structured text completion example:**
<!-- STEP
name: Run structured completion example
expected_stdout_lines:
- '"name":'
- '"breed":'
- '"reason":'
timeout_seconds: 30
output_match_mode: substring
-->
```bash
python structured_completion.py
```
<!-- END_STEP -->
This example shows how to use Pydantic models to get structured data from LLMs:
```python
import json
from dapr_agents import NVIDIAChatClient
from dapr_agents.types import UserMessage
from pydantic import BaseModel
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Define our data model
class Dog(BaseModel):
name: str
breed: str
reason: str
# Initialize the chat client
llm = NVIDIAChatClient(
model="meta/llama-3.1-8b-instruct"
)
# Get structured response
response = llm.generate(
messages=[UserMessage("One famous dog in history.")],
response_format=Dog
)
print(json.dumps(response.model_dump(), indent=2))
```
**Expected output:** A structured Dog object with name, breed, and reason fields (e.g., `Dog(name='Hachiko', breed='Akita', reason='Known for his remarkable loyalty...')`)
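Because `response_format` is a Pydantic model class, the returned `response` behaves like any validated model instance; a minimal sketch of reading its fields:

```python
# `response` is the validated Dog instance returned above
print(response.name)    # e.g. "Hachiko"
print(response.breed)   # e.g. "Akita"
print(response.reason)  # e.g. "Known for his remarkable loyalty..."
```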
### Embeddings
You can use the `NVIDIAEmbedder` in `dapr-agents` for generating text embeddings.
**1. Embedding a single text:**
<!-- STEP
name: Run embeddings example
expected_stdout_lines:
- "Embedding (first 5 values):"
- "Text 1 embedding (first 5 values):"
- "Text 2 embedding (first 5 values):"
output_match_mode: substring
-->
```bash
python embeddings.py
```
<!-- END_STEP -->
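The script uses the `NVIDIAEmbedder` (abridged from `embeddings.py` in this quickstart):

```python
from dapr_agents.document.embedder import NVIDIAEmbedder
from dotenv import load_dotenv

load_dotenv()

# Initialize the embedder with the default embedding model
embedder = NVIDIAEmbedder(model="nvidia/nv-embedqa-e5-v5")

# Embed a single text
embedding = embedder.embed("Dapr Agents is an open-source framework for researchers and developers")
print(f"Embedding (first 5 values): {embedding[:5]}...")

# Embed multiple texts at once
embeddings = embedder.embed([
    "Dapr Agents is an open-source framework for researchers and developers",
    "It provides tools to create, orchestrate, and manage agents",
])
for i, emb in enumerate(embeddings):
    print(f"Text {i + 1} embedding (first 5 values): {emb[:5]}")
```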

View File

@ -0,0 +1,23 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: nvidia
name: meta/llama3-8b-instruct
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -0,0 +1,35 @@
from dapr_agents.document.embedder import NVIDIAEmbedder
from dotenv import load_dotenv
load_dotenv()
# Initialize the embedder
embedder = NVIDIAEmbedder(
model="nvidia/nv-embedqa-e5-v5", # Default embedding model
)
# Generate embedding with a single text
text = "Dapr Agents is an open-source framework for researchers and developers"
embedding = embedder.embed(text)
# Display the embedding
if len(embedding) > 0:
print(f"Embedding (first 5 values): {embedding[:5]}...")
# Multiple input texts
texts = [
"Dapr Agents is an open-source framework for researchers and developers",
"It provides tools to create, orchestrate, and manage agents"
]
# Generate embeddings
embeddings = embedder.embed(texts)
if len(embeddings) == 0:
print("No embeddings generated")
exit()
# Display the embeddings
for i, emb in enumerate(embeddings):
print(f"Text {i + 1} embedding (first 5 values): {emb[:5]}")

View File

@ -0,0 +1,3 @@
dapr-agents==0.1.dev26
python-dotenv
tiktoken

View File

@ -0,0 +1,28 @@
import json
from dapr_agents import NVIDIAChatClient
from dapr_agents.types import UserMessage
from pydantic import BaseModel
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Define our data model
class Dog(BaseModel):
name: str
breed: str
reason: str
# Initialize the chat client
llm = NVIDIAChatClient(
model="meta/llama-3.1-8b-instruct"
)
# Get structured response
response = llm.generate(
messages=[UserMessage("One famous dog in history.")],
response_format=Dog
)
print(json.dumps(response.model_dump(), indent=2))

View File

@ -0,0 +1,28 @@
from dapr_agents import NVIDIAChatClient
from dapr_agents.types import UserMessage
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Basic chat completion
llm = NVIDIAChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())
# Chat completion using a prompty file for context
llm = NVIDIAChatClient.from_prompty('basic.prompty')
response = llm.generate(input_data={"question":"What is your name?"})
if len(response.get_content()) > 0:
print("Response with prompty: ", response.get_content())
# Chat completion with user input
llm = NVIDIAChatClient()
response = llm.generate(messages=[UserMessage("hello")])
if len(response.get_content()) > 0 and "hello" in response.get_content().lower():
print("Response with user input: ", response.get_content())

View File

@ -1,4 +1,4 @@
# LLM Call with Dapr Agents
# OpenAI LLM calls with Dapr Agents
This quickstart demonstrates how to use Dapr Agents' LLM capabilities to interact with language models and generate both free-form text and structured data. You'll learn how to make basic calls to LLMs and how to extract structured information in a type-safe manner.
@ -36,14 +36,16 @@ Replace `your_api_key_here` with your actual OpenAI API key.
## Examples
### 1. Text Completion
### Text
Run the basic text completion example:
**1. Run the basic text completion example:**
<!-- STEP
name: Run text completion example
expected_stdout_lines:
- "Response:"
- "Response with prompty:"
- "Response with user input:"
timeout_seconds: 30
output_match_mode: substring
-->
@ -56,24 +58,38 @@ The script demonstrates basic usage of Dapr Agents' OpenAIChatClient for text ge
```python
from dapr_agents import OpenAIChatClient
from dapr_agents.types import UserMessage
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Initialize the chat client and call
# Basic chat completion
llm = OpenAIChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())
# Chat completion using a prompty file for context
llm = OpenAIChatClient.from_prompty('basic.prompty')
response = llm.generate(input_data={"question":"What is your name?"})
if len(response.get_content()) > 0:
print("Response with prompty: ", response.get_content())
# Chat completion with user input
llm = OpenAIChatClient()
response = llm.generate(messages=[UserMessage("hello")])
if len(response.get_content()) > 0 and "hello" in response.get_content().lower():
print("Response with user input: ", response.get_content())
```
**Expected output:** The LLM will respond with the name of a famous dog (e.g., "Lassie", "Hachiko", etc.).
**2. Expected output:** The LLM will respond with the name of a famous dog (e.g., "Lassie", "Hachiko", etc.).
### 2. Structured Output
Run the structured output example:
**Run the structured text completion example:**
<!-- STEP
name: Run structured completion example
@ -122,6 +138,82 @@ print(json.dumps(response.model_dump(), indent=2))
**Expected output:** A structured Dog object with name, breed, and reason fields (e.g., `Dog(name='Hachiko', breed='Akita', reason='Known for his remarkable loyalty...')`)
### Audio
You can use the OpenAIAudioClient in `dapr-agents` for basic tasks with the OpenAI Audio API. We will explore:
- Generating speech from text and saving it as an MP3 file.
- Transcribing audio to text.
- Translating audio content to English.
**1. Run the text to speech example:**
<!-- STEP
name: Run audio generation example
expected_stdout_lines:
- "Audio saved to output_speech.mp3"
- "File output_speech.mp3 has been deleted."
-->
```bash
python text_to_speech.py
```
<!-- END_STEP -->
**2. Run the speech to text transcription example:**
<!-- STEP
name: Run audio transcription example
expected_stdout_lines:
- "Transcription:"
- "Success! The transcription contains at least 5 out of 7 words."
output_match_mode: substring
-->
```bash
python audio_transcription.py
```
<!-- END_STEP -->
**3. Run the speech to text translation example:**
[//]: # (<!-- STEP)
[//]: # (name: Run audio translation example)
[//]: # (expected_stdout_lines:)
[//]: # ( - "Translation:")
[//]: # ( - "Success! The translation contains at least 5 out of 7 words.")
[//]: # (-->)
[//]: # (```bash)
[//]: # (python audio_translation.py)
[//]: # (```)
[//]: # (<!-- END_STEP -->)
### Embeddings
You can use the `OpenAIEmbedder` in dapr-agents for generating text embeddings.
**1. Embedding a single text:**
<!-- STEP
name: Run embeddings example
expected_stdout_lines:
- "Embedding (first 5 values):"
- "Text 1 embedding (first 5 values):"
- "Text 2 embedding (first 5 values):"
output_match_mode: substring
-->
```bash
python embeddings.py
```
<!-- END_STEP -->
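The script uses the `OpenAIEmbedder` (abridged from `embeddings.py` in this quickstart):

```python
from dapr_agents.document.embedder import OpenAIEmbedder
from dotenv import load_dotenv

load_dotenv()

# Initialize the embedder
embedder = OpenAIEmbedder(
    model="text-embedding-ada-002",  # Default embedding model
    chunk_size=1000,                 # Batch size for processing
    max_tokens=8191                  # Maximum tokens per input
)

# Embed a single text
embedding = embedder.embed("Dapr Agents is an open-source framework for researchers and developers")
print(f"Embedding (first 5 values): {embedding[:5]}...")
```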
## Key Concepts
- **OpenAIChatClient**: The interface for interacting with OpenAI's language models

View File

@ -0,0 +1,49 @@
from dapr_agents.types.llm import AudioTranscriptionRequest
from dapr_agents import OpenAIAudioClient
from dotenv import load_dotenv
load_dotenv()
client = OpenAIAudioClient()
# Specify the audio file to transcribe
audio_file_path = "speech.mp3"
# Create a transcription request
transcription_request = AudioTranscriptionRequest(
model="whisper-1",
file=audio_file_path
)
############
# You can also use audio bytes:
############
#
# with open(audio_file_path, "rb") as f:
# audio_bytes = f.read()
#
# transcription_request = AudioTranscriptionRequest(
# model="whisper-1",
# file=audio_bytes, # File as bytes
# language="en" # Optional: Specify the language of the audio
# )
# Generate transcription
transcription_response = client.create_transcription(request=transcription_request)
# Display the transcription result
if not len(transcription_response.text) > 0:
exit(1)
print("Transcription:", transcription_response.text)
words = ["dapr", "agents", "open", "source", "framework", "researchers", "developers"]
normalized_text = transcription_response.text.lower()
count = 0
for word in words:
if word in normalized_text:
count += 1
if count >= 5:
print("Success! The transcription contains at least 5 out of 7 words.")

View File

@ -0,0 +1,36 @@
from dapr_agents.types.llm import AudioTranslationRequest
from dapr_agents import OpenAIAudioClient
from dotenv import load_dotenv
load_dotenv()
client = OpenAIAudioClient()
# Specify the audio file to translate
audio_file_path = "speech_spanish.mp3"
# Create a translation request
translation_request = AudioTranslationRequest(
model="whisper-1",
file=audio_file_path,
prompt="The user will provide an audio file in Spanish. Translate the audio to English and transcribe the english text, word for word."
)
# Generate translation
translation_response = client.create_translation(request=translation_request)
# Display the transcription result
if not len(translation_response.text) > 0:
exit(1)
print("Translation:", translation_response)
words = ["dapr", "agents", "open", "source", "framework", "researchers", "developers"]
normalized_text = translation_response.text.lower()
count = 0
for word in words:
if word in normalized_text:
count += 1
if count >= 5:
print("Success! The transcription contains at least 5 out of 7 words.")

View File

@ -0,0 +1,23 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: openai
name: gpt-4o
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -0,0 +1,37 @@
from dapr_agents.document.embedder import OpenAIEmbedder
from dotenv import load_dotenv
load_dotenv()
# Initialize the embedder
embedder = OpenAIEmbedder(
model="text-embedding-ada-002", # Default embedding model
chunk_size=1000, # Batch size for processing
max_tokens=8191 # Maximum tokens per input
)
# Generate embedding with a single text
text = "Dapr Agents is an open-source framework for researchers and developers"
embedding = embedder.embed(text)
# Display the embedding
if len(embedding) > 0:
print(f"Embedding (first 5 values): {embedding[:5]}...")
# Multiple input texts
texts = [
"Dapr Agents is an open-source framework for researchers and developers",
"It provides tools to create, orchestrate, and manage agents"
]
# Generate embeddings
embeddings = embedder.embed(texts)
if len(embeddings) == 0:
print("No embeddings generated")
exit()
# Display the embeddings
for i, emb in enumerate(embeddings):
print(f"Text {i + 1} embedding (first 5 values): {emb[:5]}")

View File

@ -0,0 +1,3 @@
dapr-agents==0.1.dev26
python-dotenv
tiktoken

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1,28 @@
from dapr_agents import OpenAIChatClient
from dapr_agents.types import UserMessage
from dotenv import load_dotenv
# Load environment variables from .env
load_dotenv()
# Basic chat completion
llm = OpenAIChatClient()
response = llm.generate("Name a famous dog!")
if len(response.get_content()) > 0:
print("Response: ", response.get_content())
# Chat completion using a prompty file for context
llm = OpenAIChatClient.from_prompty('basic.prompty')
response = llm.generate(input_data={"question":"What is your name?"})
if len(response.get_content()) > 0:
print("Response with prompty: ", response.get_content())
# Chat completion with user input
llm = OpenAIChatClient()
response = llm.generate(messages=[UserMessage("hello")])
if len(response.get_content()) > 0 and "hello" in response.get_content().lower():
print("Response with user input: ", response.get_content())

View File

@ -0,0 +1,36 @@
import os
from dapr_agents.types.llm import AudioSpeechRequest
from dapr_agents import OpenAIAudioClient
from dotenv import load_dotenv
load_dotenv()
client = OpenAIAudioClient()
# Define the text to convert to speech
text_to_speech = "Dapr Agents is an open-source framework for researchers and developers"
# Create a request for TTS
tts_request = AudioSpeechRequest(
model="tts-1",
input=text_to_speech,
voice="fable",
response_format="mp3"
)
# Generate the audio - returns a byte string
audio_bytes = client.create_speech(request=tts_request)
# You can also automatically create the audio file by passing the file name as an argument
# client.create_speech(request=tts_request, file_name=output_path)
# Save the audio to an MP3 file
output_path = "output_speech.mp3"
with open(output_path, "wb") as audio_file:
audio_file.write(audio_bytes)
print(f"Audio saved to {output_path}")
os.remove(output_path)
print(f"File {output_path} has been deleted.")

View File

@ -61,7 +61,13 @@ services/ # Directory for agent services
│ └── app.py # FastAPI app for elf
└── workflow-random/ # Workflow orchestrator
└── app.py # Workflow service
dapr.yaml # Multi-App Run Template
└── workflow-roundrobin/ # Roundrobin orchestrator
└── app.py # Workflow service
└── workflow-llm/ # LLM orchestrator
└── app.py # Workflow service
dapr-random.yaml # Multi-App Run Template using the random orchestrator
dapr-roundrobin.yaml # Multi-App Run Template using the roundrobin orchestrator
dapr-llm.yaml # Multi-App Run Template using the LLM orchestrator
```
## Examples
@ -108,9 +114,9 @@ if __name__ == "__main__":
Similar implementations exist for the Wizard (Gandalf) and Elf (Legolas) agents.
### Workflow Orchestrator Implementation
### Workflow Orchestrator Implementations
The workflow orchestrator manages the interaction between agents. Currently Dapr Agents support three workflow types: RoundRobin, Random, and LLM-based. Here's an example for the Random workflow orchestrator:
The workflow orchestrators manage the interaction between agents. Currently, Dapr Agents supports three workflow types: RoundRobin, Random, and LLM-based. Here's an example for the Random workflow orchestrator (examples for the RoundRobin and LLM-based orchestrators are also included in the project):
```python
# services/workflow-random/app.py
@ -122,7 +128,7 @@ import logging
async def main():
try:
random_workflow_service = RandomOrchestrator(
name="Orchestrator",
name="RandomOrchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
@ -146,8 +152,9 @@ if __name__ == "__main__":
### Running the Multi-Agent System
The project includes a `dapr.yaml` configuration for running all services and an additional Client application for interacting with the agents:
The project includes three Dapr multi-app run configuration files (`dapr-random.yaml`, `dapr-roundrobin.yaml`, and `dapr-llm.yaml`) for running all services, plus an additional client application for interacting with the agents:
Example: `dapr-random.yaml`
```yaml
version: 1
common:
@ -199,18 +206,62 @@ expected_stdout_lines:
- "assistant:"
- "user:"
- "assistant:"
- "workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'RandomWorkflow'"
timeout_seconds: 20
output_match_mode: substring
background: false
sleep: 5
-->
```bash
dapr run -f .
dapr run -f dapr-random.yaml
```
<!-- END_STEP -->
You will see the agents engaging in a conversation about getting to Mordor, with different agents contributing based on their character.
You can also run the RoundRobin and LLM-based orchestrators using `dapr-roundrobin.yaml` and `dapr-llm.yaml` respectively:
<!-- STEP
name: Run multi-agent workflow example (roundrobin)
match_order: none
expected_stdout_lines:
- "Workflow started successfully!"
- "user:"
- "How to get to Mordor? We all need to help!"
- "assistant:"
- "user:"
- "assistant:"
- "workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'RoundRobinWorkflow'"
timeout_seconds: 20
output_match_mode: substring
background: false
sleep: 5
-->
```bash
dapr run -f dapr-roundrobin.yaml
```
<!-- END_STEP -->
<!-- STEP
name: Run multi-agent workflow example (llm)
match_order: none
expected_stdout_lines:
- "Workflow started successfully!"
- "user:"
- "How to get to Mordor? We all need to help!"
- "assistant:"
- "user:"
- "assistant:"
- "workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'LLMWorkflow'"
timeout_seconds: 20
output_match_mode: substring
background: false
sleep: 5
-->
```bash
dapr run -f dapr-llm.yaml
```
<!-- END_STEP -->
**Expected output:** The agents will engage in a conversation about getting to Mordor, with different agents contributing based on their character.
## Key Concepts

View File

@ -0,0 +1,37 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appId: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
daprGRPCPort: 50001
- appId: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
daprGRPCPort: 50002
- appId: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
daprGRPCPort: 50003
- appId: WorkflowApp
appDirPath: ./services/workflow-llm/
appPort: 8004
command: ["python3", "app.py"]
daprGRPCPort: 50004
- appId: ClientApp
appDirPath: ./services/client/
command: ["python3", "client.py"]
daprGRPCPort: 50011

View File

@ -0,0 +1,37 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appId: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
daprGRPCPort: 50001
- appId: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
daprGRPCPort: 50002
- appId: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
daprGRPCPort: 50003
- appId: WorkflowApp
appDirPath: ./services/workflow-roundrobin/
appPort: 8004
command: ["python3", "app.py"]
daprGRPCPort: 50004
- appId: ClientApp
appDirPath: ./services/client/
command: ["python3", "client.py"]
daprGRPCPort: 50011

View File

@ -0,0 +1,31 @@
from dapr_agents import LLMOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
workflow_service = LLMOrchestrator(
name="LLMOrchestrator",
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
service_port=8004,
daprGrpcPort=50004,
max_iterations=3
)
await workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,31 @@
from dapr_agents import RandomOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
workflow_service = RandomOrchestrator(
name="Random Orchestrator",
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
service_port=8004,
daprGrpcPort=50004,
max_iterations=3
)
await workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,31 @@
from dapr_agents import RoundRobinOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
workflow_service = RoundRobinOrchestrator(
name="RoundRobin Orchestrator",
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
service_port=8004,
daprGrpcPort=50004,
max_iterations=3
)
await workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,313 @@
# Multi-Agent Event-Driven Workflows
This quickstart demonstrates how to create and orchestrate event-driven workflows with multiple autonomous agents using Dapr Agents. You'll learn how to set up agents as services, implement workflow orchestration, and enable real-time agent collaboration through pub/sub messaging.
## Prerequisites
- Python 3.10 (recommended)
- pip package manager
- OpenAI API key
- Dapr CLI and Docker installed
## Environment Setup
```bash
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
```
## Configuration
1. Create a `.env` file for your API keys:
```env
OPENAI_API_KEY=your_api_key_here
```
2. Make sure Dapr is initialized on your system:
```bash
dapr init
```
3. The quickstart includes the necessary Dapr components in the `components` directory:
- `statestore.yaml`: Agent state configuration
- `pubsub.yaml`: Pub/Sub message bus configuration
- `workflowstate.yaml`: Workflow state configuration
## Project Structure
```
components/ # Dapr configuration files
├── statestore.yaml # State store configuration
├── pubsub.yaml # Pub/Sub configuration
└── workflowstate.yaml # Workflow state configuration
services/ # Directory for agent services
├── hobbit/ # First agent's service
│ └── app.py # FastAPI app for hobbit
├── wizard/ # Second agent's service
│ └── app.py # FastAPI app for wizard
├── elf/ # Third agent's service
│ └── app.py # FastAPI app for elf
└── workflow-random/ # Workflow orchestrator
└── app.py # Workflow service
└── workflow-roundrobin/ # Roundrobin orchestrator
└── app.py # Workflow service
└── workflow-llm/ # LLM orchestrator
└── app.py # Workflow service
dapr-random.yaml # Multi-App Run Template using the random orchestrator
dapr-roundrobin.yaml # Multi-App Run Template using the roundrobin orchestrator
dapr-llm.yaml # Multi-App Run Template using the LLM orchestrator
```
## Examples
### Agent Service Implementation
Each agent is implemented as a separate service. Here's an example for the Hobbit agent:
```python
# services/hobbit/app.py
from dapr_agents import Agent, AgentActorService, AssistantAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
hobbit_service = AssistantAgent(name="Frodo", role="Hobbit",
goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
instructions=[
"Speak like Frodo, with humility, determination, and a growing sense of resolve.",
"Endure hardships and temptations, staying true to the mission even when faced with doubt.",
"Seek guidance and trust allies, but bear the ultimate burden alone when necessary.",
"Move carefully through enemy-infested lands, avoiding unnecessary risks.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task."],
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry", service_port=8001,
daprGrpcPort=50001)
await hobbit_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())
```
Similar implementations exist for the Wizard (Gandalf) and Elf (Legolas) agents.
### Workflow Orchestrator Implementations
The workflow orchestrators manage the interaction between agents. Currently, Dapr Agents supports three workflow types: RoundRobin, Random, and LLM-based. Here's an example for the Random workflow orchestrator (examples for the RoundRobin and LLM-based orchestrators are also included in the project):
```python
# services/workflow-random/app.py
from dapr_agents import RandomOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
random_workflow_service = RandomOrchestrator(
name="RandomOrchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
service_port=8009,
daprGrpcPort=50009,
max_iterations=3
)
await random_workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())
```
### Running the Multi-Agent System
The project includes three Dapr multi-app run configuration files (`dapr-random.yaml`, `dapr-roundrobin.yaml`, and `dapr-llm.yaml`) for running all services, plus an additional client application for interacting with the agents:
Example: `dapr-random.yaml`
```yaml
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appId: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
daprGRPCPort: 50001
- appId: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
daprGRPCPort: 50002
- appId: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
daprGRPCPort: 50003
- appId: WorkflowApp
appDirPath: ./services/workflow-random/
appPort: 8004
command: ["python3", "app.py"]
daprGRPCPort: 50004
- appId: ClientApp
appDirPath: ./services/client/
command: ["python3", "client.py"]
daprGRPCPort: 50011
```
Start all services using the Dapr CLI:
<!-- STEP
name: Run multi-agent workflow example (random)
match_order: none
expected_stdout_lines:
- "Workflow started successfully!"
- "user:"
- "How to get to Mordor? We all need to help!"
- "assistant:"
- "user:"
- "assistant:"
- "workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'RandomWorkflow'"
timeout_seconds: 20
output_match_mode: substring
background: false
sleep: 5
-->
```bash
dapr run -f dapr-random.yaml
```
<!-- END_STEP -->
You will see the agents engaging in a conversation about getting to Mordor, with different agents contributing based on their character.
You can also run the RoundRobin and LLM-based orchestrators using `dapr-roundrobin.yaml` and `dapr-llm.yaml` respectively:
<!-- STEP
name: Run multi-agent workflow example (roundrobin)
match_order: none
expected_stdout_lines:
- "Workflow started successfully!"
- "user:"
- "How to get to Mordor? We all need to help!"
- "assistant:"
- "user:"
- "assistant:"
- "workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'RoundRobinWorkflow'"
timeout_seconds: 20
output_match_mode: substring
background: false
sleep: 5
-->
```bash
dapr run -f dapr-roundrobin.yaml
```
<!-- END_STEP -->
<!-- STEP
name: Run multi-agent workflow example (llm)
match_order: none
expected_stdout_lines:
- "Workflow started successfully!"
- "user:"
- "How to get to Mordor? We all need to help!"
- "assistant:"
- "user:"
- "assistant:"
- "workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'LLMWorkflow'"
timeout_seconds: 20
output_match_mode: substring
background: false
-->
```bash
dapr run -f dapr-llm.yaml
```
<!-- END_STEP -->
**Expected output:** The agents will engage in a conversation about getting to Mordor, with different agents contributing based on their character.
## Key Concepts
- **Agent Service**: Stateful service exposing an agent via API endpoints
- **Pub/Sub Messaging**: Event-driven communication between agents
- **Actor Model**: Stateful agent representation using Dapr Actors
- **Workflow Orchestration**: Coordinating agent interactions
- **Distributed System**: Multiple services working together
## Workflow Types
Dapr Agents supports multiple workflow orchestration patterns (a minimal selection sketch follows this list):
1. **RoundRobin**: Cycles through agents sequentially
2. **Random**: Selects agents randomly for tasks
3. **LLM-based**: Uses GPT-4o to intelligently select agents based on context
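A minimal sketch of how the three selection strategies differ (illustrative only, not the dapr-agents implementation):

```python
import random

# Registered agents, as the orchestrators would see them
agents = ["Frodo", "Gandalf", "Legolas"]

def round_robin_pick(turn: int) -> str:
    return agents[turn % len(agents)]  # cycle through agents sequentially

def random_pick() -> str:
    return random.choice(agents)       # pick an agent uniformly at random

# The LLM-based orchestrator would instead prompt a model (e.g. GPT-4o) with
# the conversation so far and ask it to name the most suitable next agent.
```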
## Dapr Integration
This quickstart showcases several Dapr building blocks:
- **Pub/Sub**: Agent communication via Redis message bus
- **State Management**: Persistence of agent and workflow states
- **Service Invocation**: Direct HTTP communication between services
- **Actors**: Stateful agent representation
## Monitoring and Observability
1. **Console Logs**: Monitor real-time workflow execution
2. **Redis Insights**: View message bus and state data at http://localhost:5540/
3. **Zipkin Tracing**: Access distributed tracing at http://localhost:9411/zipkin/
## Troubleshooting
1. **Service Startup**: If services fail to start, verify Dapr components configuration
2. **Communication Issues**: Check Redis connection and pub/sub setup
3. **Workflow Errors**: Check Zipkin traces for detailed request flows
4. **System Reset**: Clear Redis data through Redis Insights if needed
## Next Steps
After completing this quickstart, you can:
- Add more agents to the workflow
- Switch to another workflow orchestration pattern (Random, LLM-based)
- Extend agents with custom tools
- Deploy to a Kubernetes cluster using Dapr

View File

@ -0,0 +1,16 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: agentstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: keyPrefix
value: none
- name: actorStateStore
value: "true"

View File

@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagepubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -0,0 +1,12 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: workflowstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -0,0 +1,37 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appId: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
daprGRPCPort: 50001
- appId: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
daprGRPCPort: 50002
- appId: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
daprGRPCPort: 50003
- appId: WorkflowApp
appDirPath: ./services/workflow-llm/
appPort: 8004
command: ["python3", "app.py"]
daprGRPCPort: 50004
- appId: ClientApp
appDirPath: ./services/client/
command: ["python3", "client.py"]
daprGRPCPort: 50011

View File

@ -0,0 +1,37 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appId: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
daprGRPCPort: 50001
- appId: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
daprGRPCPort: 50002
- appId: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
daprGRPCPort: 50003
- appId: WorkflowApp
appDirPath: ./services/workflow-random/
appPort: 8004
command: ["python3", "app.py"]
daprGRPCPort: 50004
- appId: ClientApp
appDirPath: ./services/client/
command: ["python3", "client.py"]
daprGRPCPort: 50011

View File

@ -0,0 +1,37 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appId: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
daprGRPCPort: 50001
- appId: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
daprGRPCPort: 50002
- appId: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
daprGRPCPort: 50003
- appId: WorkflowApp
appDirPath: ./services/workflow-roundrobin/
appPort: 8004
command: ["python3", "app.py"]
daprGRPCPort: 50004
- appId: ClientApp
appDirPath: ./services/client/
command: ["python3", "client.py"]
daprGRPCPort: 50011

View File

@ -0,0 +1,3 @@
dapr-agents==0.1.dev26
python-dotenv
requests

View File

@ -0,0 +1,34 @@
#!/usr/bin/env python3
import requests
import time
import sys
if __name__ == "__main__":
workflow_url = "http://localhost:8004/RunWorkflow"
task_payload = {"task": "How to get to Mordor? We all need to help!"}
attempt = 1
while attempt <= 10:
try:
print(f"Attempt {attempt}...")
response = requests.post(workflow_url, json=task_payload, timeout=5)
if response.status_code == 202:
print("Workflow started successfully!")
sys.exit(0)
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
attempt += 1
print(f"Waiting 1s seconds before next attempt...")
time.sleep(1)
print(f"Maximum attempts (10) reached without success.")
print("Failed to get successful response")
sys.exit(1)

View File

@ -0,0 +1,32 @@
from dapr_agents import Agent, AgentActorService, AssistantAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
elf_service = AssistantAgent(name="Legolas", role="Elf",
goal="Act as a scout, marksman, and protector, using keen senses and deadly accuracy to ensure the success of the journey.",
instructions=[
"Speak like Legolas, with grace, wisdom, and keen observation.",
"Be swift, silent, and precise, moving effortlessly across any terrain.",
"Use superior vision and heightened senses to scout ahead and detect threats.",
"Excel in ranged combat, delivering pinpoint arrow strikes from great distances.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task."],
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry", service_port=8003,
daprGrpcPort=50003)
await elf_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,34 @@
from dapr_agents import Agent, AgentActorService, AssistantAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
hobbit_service = AssistantAgent(name="Frodo", role="Hobbit",
goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
instructions=[
"Speak like Frodo, with humility, determination, and a growing sense of resolve.",
"Endure hardships and temptations, staying true to the mission even when faced with doubt.",
"Seek guidance and trust allies, but bear the ultimate burden alone when necessary.",
"Move carefully through enemy-infested lands, avoiding unnecessary risks.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task."],
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry", service_port=8001,
daprGrpcPort=50001)
await hobbit_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,34 @@
from dapr_agents import Agent, AgentActorService, AssistantAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
wizard_service = AssistantAgent(name="Gandalf", role="Wizard",
goal="Guide the Fellowship with wisdom and strategy, using magic and insight to ensure the downfall of Sauron.",
instructions=[
"Speak like Gandalf, with wisdom, patience, and a touch of mystery.",
"Provide strategic counsel, always considering the long-term consequences of actions.",
"Use magic sparingly, applying it when necessary to guide or protect.",
"Encourage allies to find strength within themselves rather than relying solely on your power.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task."],
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry", service_port=8002,
daprGrpcPort=50002)
await wizard_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -0,0 +1,31 @@
from dapr_agents import LLMOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
workflow_service = LLMOrchestrator(
name="LLMOrchestrator",
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
service_port=8004,
daprGrpcPort=50004,
max_iterations=3
)
await workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -7,7 +7,7 @@ import logging
async def main():
try:
workflow_service = RandomOrchestrator(
name="Orchestrator",
name="RandomOrchestrator",
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",

View File

@ -0,0 +1,31 @@
from dapr_agents import RoundRobinOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
workflow_service = RoundRobinOrchestrator(
name="RoundRobinOrchestrator",
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
service_port=8004,
daprGrpcPort=50004,
max_iterations=3
)
await workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -31,7 +31,7 @@ Learn how to interact with Language Models using Dapr Agents:
This quickstart shows both basic text generation and structured data extraction from LLMs.
[Go to LLM Call](./02-llm-call)
[Go to LLM Call](./02_llm_call_open_ai)
### 03 - Agent Tool Call