Compare commits

...

29 Commits
v0.6.0...main

Author SHA1 Message Date
Roberto Rodriguez a475687445
Merge pull request #185 from sicoyle/tweak-da-workflow
fix(tracing): enable tracing on durable agent + add quickstart
2025-09-04 16:14:26 -04:00
Samantha Coyle 6e6ff447be
style: make linter happy
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-09-04 09:01:31 -05:00
Samantha Coyle dde2ab0d2c
fix: add method I missed
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-09-02 13:16:49 -05:00
Samantha Coyle 99defbabe1
Merge branch 'main' into tweak-da-workflow
2025-09-02 13:01:27 -05:00
Yaron Schneider fbb3bfd61f
change agents version in quickstarts (#190)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-08-31 12:06:26 -04:00
Yaron Schneider 224c61c6c2
Merge branch 'main' into tweak-da-workflow
2025-08-30 16:15:52 -07:00
Yaron Schneider d4e6c76353
Fix hang after multiple .run() calls (#189)
* fix hang after multiple .run() calls

Signed-off-by: yaron2 <schneider.yaron@live.com>

* linter

Signed-off-by: yaron2 <schneider.yaron@live.com>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-08-30 18:56:16 -04:00
Samantha Coyle 423de2a7a1
fix: update for tests to pass
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-27 10:43:28 -04:00
Samantha Coyle 00e0863bef
style: add file I missed for formatting
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-27 08:04:44 -04:00
Samantha Coyle 4b081c6984
style: tox -e ruff
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-27 08:01:08 -04:00
Samantha Coyle e31484dde3
style: update readme with durable agent tracing quickstart too
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-27 07:54:04 -04:00
Samantha Coyle 54d40dbcdb
fix(tracing): enable tracing on durable agent + quickstart
Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-26 16:00:12 -04:00
Sam 649d45fa2e
docs: rm our repo dapr docs as in 1.16 preview now (#178)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-08-13 15:56:38 -07:00
Sam e03dfefcc6
fix: speed up deps install + handle errs in quickstarts better (#177)
* fix: speed up deps install + handle errs in quickstarts better

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* docs: update docs to use uv too

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* docs: add mapping on cmds for myself

Signed-off-by: Samantha Coyle <sam@diagrid.io>

---------

Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-13 15:47:20 -07:00
Roberto Rodriguez 3ef87c28e6
fix: correct Pydantic type generation for anyOf/oneOf in MCP tool schemas (#176)
* fix: handle anyOf/oneOf in JSON Schema to generate correct Pydantic types for MCP tools

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* fix: Lint

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* fix: omit None values when serializing tool args to avoid sending nulls for non-nullable fields

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Updated dependency

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
2025-08-13 12:10:26 -07:00
Roberto Rodriguez d325cc07de
Update Tool Execution Final Message and Dependencies (#175)
* Update final message when max iteration hits in durable agent

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update dependencies

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Fix lint

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update testenv deps to include vectorstore

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
2025-08-11 12:15:13 -07:00
Albert Callarisa 249ea5ec43
Add partition key to state operations (#173)
Signed-off-by: Albert Callarisa <albert@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-08-05 16:28:35 -07:00
Roberto Rodriguez dce6623150
Workflow App updates to Register Tasks and LLM client Fix (#172)
* Update quickstarts

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update stream parameter in LLM generation

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update initialization of LLM client for agent base

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Switch comment to debug logging

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Improve logic to handle api key and other parameters in openai clients

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Add Workflow register_task method

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Fix lint errors

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update version

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
2025-08-01 21:07:26 -07:00
Yaron Schneider ea0d33cb1a
Add observability quickstarts (#171)
* added tracing quickstart

Signed-off-by: yaron2 <schneider.yaron@live.com>

* add next steps

Signed-off-by: yaron2 <schneider.yaron@live.com>

* update versions

Signed-off-by: yaron2 <schneider.yaron@live.com>

* linter

Signed-off-by: yaron2 <schneider.yaron@live.com>

---------

Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-08-01 16:02:36 -07:00
Roberto Rodriguez 963cef6cb9
Observability Module with OpenTelemetry and Phoenix Integration (#168)
* add condition for opentelemetry import

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* feat: add observability optional dependencies and update lock file

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* feat: add observability dapr agents module

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* add and update quickstarts to show new observability module

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* style: fix ruff linting and formatting issues across codebase

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* style: fix flake8 linting issues

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* config: ignore mypy errors for optional observability module

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update Workflow Task to use Agent instances without a task description

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update tool_choice in Agent flow to not be part of chat completion request if not set

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Remove comments ;)

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update quickstarts README

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update tool_choice to not be added to chat completion request if not set

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Add quickstart to show agents as tasks with tracing and multi-model

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* style: fix ruff linting and formatting issues across codebase

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-08-01 11:44:55 -07:00
Sam 0a0dd31fc1
fix: bump us to numpy 2.X to fix user err on discord (#170)
Signed-off-by: Samantha Coyle <sam@diagrid.io>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-08-01 08:32:35 -07:00
Sam 454b0c23ee
refactor: rm check on if dapr is running for durableagent (#169)
* style: clean up unused openapi stuff leftover

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix: only warn if dapr is found unavailable on localhost

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* style: tox -e ruff

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* style: more clean up to rm it

Signed-off-by: Samantha Coyle <sam@diagrid.io>

* fix(tests): rm test deps on dapr

Signed-off-by: Samantha Coyle <sam@diagrid.io>

---------

Signed-off-by: Samantha Coyle <sam@diagrid.io>
2025-08-01 08:15:26 -07:00
Yaron Schneider d993f9090b
update quickstarts dependencies (#166)
Signed-off-by: yaron2 <schneider.yaron@live.com>
2025-07-28 13:52:37 -07:00
Yaron Schneider f2d6831ea2
Refactor LLM Workflows and Orchestrators for Unified Response Handling and Iteration (#163) (#165)
* Refactor ChatClientBase: drop Pydantic inheritance and add typed generate() overloads



* Align all LLM chat clients with refactored base and unified response models



* Unify LLM utils across providers and delegate streaming/response to provider‑specific handlers



* Refactor LLM pipeline: add HuggingFace tool calls, unify chat client/response types, and switch DurableAgent to loop‑based workflow



* Refactor orchestrators with loops and unify LLM response handling using LLMChatResponse



* test remaining quickstarts after all changes



* run pytest after all changes



* Run linting and formatting checks to ensure code quality



* Update logging, Orchestrator Name and OTel module name



---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
Co-authored-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
2025-07-28 13:31:53 -07:00
Roberto Rodriguez 3e767e03fb
Refactor agent workflows, orchestrators, and integrations for reliability and modularity (#161)
* Remove cookbook to avoid confusion

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update Hugging Face integration and quickstart to use SmolLM3-3B model

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor Agent and AgentBase for improved tool execution, prompt handling, and message metadata

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor workflow features into mixins for messaging, pub/sub, state management, and service logic

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor DurableAgent for improved workflow state, tool execution, and message handling

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor workflow base and AgenticWorkflow to modularize Dapr integration and delegate service/state logic to mixins

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update text printer logic to flush messages

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Split Workflow decorators for better organization.

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor ConversationDaprStateMemory to return full message objects and improve logging.

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor VectorStoreBase to return document IDs

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Refactor orchestrators to standardize broadcast messages, unify decorator imports, and improve workflow robustness

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Test quickstarts to validate all changes and update main README

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Set broadcast_topic_name to None to disable broadcasting

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Improve package exports and clean up imports for linter compliance

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Fix code style and lint errors flagged by Ruff

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Fix mypy type errors and improve

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Fix lint errors and code style issues with ruff

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* fix mypy attr-defined and call-arg errors

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* fix lint errors and code style issues

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* mypy type errors

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Updated test state class to DurableAgentWorkflowEntry

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Fix mypy errors

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Set DurableAgent to not broadcast messages by default and clarified dapr client private attribute in agentic workflow

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update tests with latest changes

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Minor updates identified while working on test files

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Clarify AgentTool use in docstrings

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Centralize tool_history and tool_choice in AgentBase and unify tool execution record schema for tool_history

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* fix mypy errors

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update dapr_agents/agents/base.py

Update default basic agent prompt

Co-authored-by: Sam <sam@diagrid.io>
Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Update quickstarts/03-durable-agent-tool-call/durable_weather_agent.py

Update durable weather agent with the right comments

Co-authored-by: Sam <sam@diagrid.io>
Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
Co-authored-by: Sam <sam@diagrid.io>
2025-07-28 07:40:58 -07:00
Bilgin Ibryam 1c832636eb
fix: bump to 0.6.1 and add missing requirements.txt files to fix streamablehttp quickstart (#156) (#160)
Signed-off-by: Bilgin Ibryam <bibryam@gmail.com>
2025-07-22 16:46:33 -07:00
Roberto Rodriguez 3bd6c99506
ElevenLabs Python SDK updates and advanced TTS support (#154)
* refactor: update ElevenLabsSpeechClient to use new SDK API and support advanced TTS features

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* chore: fix lint and style issues for ruff

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Added ElevenLabs client test

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
2025-07-20 19:29:28 -07:00
Bilgin Ibryam 6d9b26bce6
Fix #157: Upgrade quickstarts to dapr-agents 0.6.0 and apply minor fixes (#158)
* Fix #157: Upgrade quickstarts to dapr-agents 0.6.0 and apply minor fixes

* Fix failing build
2025-07-17 22:08:41 -07:00
Roberto Rodriguez 19e2caa25f
Enable Streamable HTTP Transport for MCP Client v2 (#153)
* Updated MCP support for stdio and sse and enabled Streamable HTTP

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* chore: fix lint and style issues for ruff

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Updated MCP Cookbook README with suggestions

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Updated Cookbook MCP Servers names for clarity

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Updated Cookbook MCP code to reflect changes

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Updated Cookbook MCP server name

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Added initial MCP StreamableHTTP tests

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Apply ruff lint fixes to test_mcp_streamable_http.py

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

* Remove redundant inner import of json in mock_mcp_session to fix flake8 F811

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>

---------

Signed-off-by: Roberto Rodriguez <9653181+Cyb3rWard0g@users.noreply.github.com>
2025-07-11 20:07:56 -07:00
393 changed files with 18211 additions and 18171 deletions


@@ -1,77 +0,0 @@
name: docs
on:
  push:
    branches:
      - main
    paths:
      - docs/**
      - '!docs/development/**'
  pull_request:
    branches:
      - main
    paths:
      - docs/**
      - '!docs/development/**'
  workflow_dispatch:
permissions:
  contents: write
jobs:
  changed_files:
    runs-on: ubuntu-latest
    name: Review changed files
    outputs:
      docs_any_changed: ${{ steps.changed-files.outputs.docs_any_changed }}
    steps:
      - uses: actions/checkout@v4
      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v45
        with:
          files_yaml: |
            docs:
              - 'docs/**'
              - 'mkdocs.yml'
              - '!docs/development/**'
          base_sha: 'main'
  documentation_validation:
    needs: changed_files
    name: Documentation validation
    if: needs.changed_files.outputs.docs_any_changed == 'true'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Remove plugins from mkdocs configuration
        run: |
          sed -i '/^plugins:/,/^[^ ]/d' mkdocs.yml
      - name: Install Python dependencies
        run: |
          pip install mkdocs-material
          pip install .[recommended,git,imaging]
          pip install mkdocs-jupyter
      - name: Validate build
        run: mkdocs build
  deploy:
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    needs: documentation_validation
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
      - uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ env.cache_id }}
          path: .cache
          restore-keys: |
            mkdocs-material-
      - name: Install Python dependencies
        run: |
          pip install mkdocs-material
          pip install .[recommended,git,imaging]
          pip install mkdocs-jupyter
      - run: mkdocs gh-deploy --force

.gitignore vendored (5 changes)

@@ -176,5 +176,8 @@ chroma_db/
db/
# Requirements files since we use pyproject.toml instead
requirements.txt
dev-requirements.txt
docker-entrypoint-initdb.d/
*requirements.txt


@@ -34,7 +34,7 @@ Dapr Agents builds on top of Dapr's Workflow API, which under the hood represent
### Data-Centric AI Agents
-With built-in connectivity to over 50 enterprise data sources, Dapr Agents efficiently handles structured and unstructured data. From basic [PDF extraction](./docs/concepts/arxiv_fetcher.md) to large-scale database interactions, it enables seamless data-driven AI workflows with minimal code changes. Dapr's [bindings](https://docs.dapr.io/reference/components-reference/supported-bindings/) and [state stores](https://docs.dapr.io/reference/components-reference/supported-state-stores/) provide access to a large number of data sources that can be used to ingest data to an agent.
+With built-in connectivity to over 50 enterprise data sources, Dapr Agents efficiently handles structured and unstructured data. From basic [PDF extraction](https://v1-16.docs.dapr.io/developing-applications/dapr-agents/dapr-agents-integrations/#arxiv-fetcher) to large-scale database interactions, it enables seamless data-driven AI workflows with minimal code changes. Dapr's [bindings](https://docs.dapr.io/reference/components-reference/supported-bindings/) and [state stores](https://docs.dapr.io/reference/components-reference/supported-state-stores/) provide access to a large number of data sources that can be used to ingest data to an agent.
### Accelerated Development


@@ -1,400 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# OpenAI Tool Calling Agent - Dummy Weather Example\n",
"\n",
"* Collaborator: Roberto Rodriguez @Cyb3rWard0g"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # take environment variables from .env."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define Tools"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import tool\n",
"from pydantic import BaseModel, Field\n",
"\n",
"class GetWeatherSchema(BaseModel):\n",
" location: str = Field(description=\"location to get weather for\")\n",
"\n",
"@tool(args_model=GetWeatherSchema)\n",
"def get_weather(location: str) -> str:\n",
" \"\"\"Get weather information for a specific location.\"\"\"\n",
" import random\n",
" temperature = random.randint(60, 80)\n",
" return f\"{location}: {temperature}F.\"\n",
"\n",
"class JumpSchema(BaseModel):\n",
" distance: str = Field(description=\"Distance for agent to jump\")\n",
"\n",
"@tool(args_model=JumpSchema)\n",
"def jump(distance: str) -> str:\n",
" \"\"\"Jump a specific distance.\"\"\"\n",
" return f\"I jumped the following distance {distance}\"\n",
"\n",
"tools = [get_weather,jump]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: GetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: Jump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agents.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agents.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agents.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
]
}
],
"source": [
"from dapr_agents import Agent\n",
"\n",
"AIAgent = Agent(\n",
" name=\"Rob\",\n",
" role= \"Weather Assistant\",\n",
" tools=tools\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['chat_history'], pre_filled_variables={'name': 'Rob', 'role': 'Weather Assistant', 'goal': 'Help humans'}, messages=[('system', \"# Today's date is: June 30, 2025\\n\\n## Name\\nYour name is {{name}}.\\n\\n## Role\\nYour role is {{role}}.\\n\\n## Goal\\n{{goal}}.\"), MessagePlaceHolder(variable_name=chat_history)], template_format='jinja2')"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'GetWeather'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.tools[0].name"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agents.agent.agent:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mHi my name is Roberto\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183m{'content': 'Hello Roberto! How can I assist you with the weather today?', 'role': 'assistant'}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Hello Roberto! How can I assist you with the weather today?'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await AIAgent.run(\"Hi my name is Roberto\")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"[MessageContent(content='Hi my name is Roberto', role='user'),\n",
" MessageContent(content='Hello Roberto! How can I assist you with the weather today?', role='assistant')]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"AIAgent.chat_history"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agents.agent.agent:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in Virgina, New York and Washington DC?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agents.agent.agent:Executing GetWeather with arguments {'location': 'Virginia'}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agents.agent.agent:Executing GetWeather with arguments {'location': 'New York'}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agents.agent.agent:Executing GetWeather with arguments {'location': 'Washington DC'}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): GetWeather\n",
"INFO:dapr_agents.agents.agent.agent:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183m{'content': None, 'role': 'assistant', 'tool_calls': [{'id': 'call_laqow4gKWO2mQVGxTlEAgBl8', 'type': 'function', 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"Virginia\"}'}}, {'id': 'call_ZKm6RcAtrS4EHIWZhOofb0eK', 'type': 'function', 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"New York\"}'}}, {'id': 'call_RLzaaec0MV3khLo1ExLVCw9b', 'type': 'function', 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"Washington DC\"}'}}]}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mGetWeather(tool) (Id: call_laqow4gKWO2mQVGxTlEAgBl8):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mVirginia: 63F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mGetWeather(tool) (Id: call_ZKm6RcAtrS4EHIWZhOofb0eK):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mNew York: 66F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mGetWeather(tool) (Id: call_RLzaaec0MV3khLo1ExLVCw9b):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mWashington DC: 73F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183m{'content': \"Here's the current weather in the requested locations:\\n- Virginia: 63°F\\n- New York: 66°F\\n- Washington DC: 73°F\\n\\nLet me know if there's anything else I can help you with!\", 'role': 'assistant'}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Here's the current weather in the requested locations:\\n- Virginia: 63°F\\n- New York: 66°F\\n- Washington DC: 73°F\\n\\nLet me know if there's anything else I can help you with!\""
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await AIAgent.run(\"What is the weather in Virgina, New York and Washington DC?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -1,30 +0,0 @@
# The Weather Agent
The Weather Agent represents a basic example of an agent that interacts with the external world through tools, such as APIs. This agent demonstrates how a language model (LLM) can suggest which tool to use and provide the necessary inputs for tool execution. However, it is the agent—not the language model—that executes the tool and processes the results. Once the tool has been executed, the results are passed back to the language model for further suggestions, summaries, or next actions. This agent showcases the foundational concept of integrating language models with external tools to retrieve real-world data, such as weather information.
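The suggest-then-execute cycle described above is easy to sketch. Below is a minimal, self-contained illustration; the names (`fake_llm`, `run_agent`, `TOOLS`) are invented for this example and are not part of the dapr-agents API, and the LLM is stubbed so the snippet runs on its own:

```python
import random

def get_weather(location: str) -> str:
    """Dummy weather tool, mirroring the notebooks in this folder."""
    return f"{location}: {random.randint(60, 80)}F."

TOOLS = {"get_weather": get_weather}

def fake_llm(messages: list) -> dict:
    """Stand-in for a chat model: it only *suggests* a tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "get_weather",
                "arguments": {"location": "Paris"}}
    # Once a tool result is in the history, produce a final answer from it.
    return {"type": "answer", "content": f"The weather is {messages[-1]['content']}"}

def run_agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        suggestion = fake_llm(messages)
        if suggestion["type"] == "answer":
            return suggestion["content"]
        # The agent, not the language model, executes the tool and records the result.
        result = TOOLS[suggestion["name"]](**suggestion["arguments"])
        messages.append({"role": "tool", "content": result})

print(run_agent("What is the weather in Paris?"))
```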
## Agents
| Pattern | Overview |
| --- | --- |
| [ToolCall (Function Calling)](toolcall_agent.ipynb) | A weather agent that uses OpenAI's tool calling (Function Calling) to pass tools in JSON schema format. The language model suggests the tool to be used based on the task, but the agent executes the tool and processes the results. |
| [ReAct (Reason + Act)](react_agent.ipynb) | A weather agent following the ReAct prompting technique. The language model uses a chain-of-thought reasoning process (Thought, Action, Observation) to suggest the next tool to use. The agent then executes the tool, and the results are fed back into the reasoning loop. |
## Tools
* **WeatherTool**: A tool that allows the agent to retrieve weather data by first obtaining geographical coordinates (latitude and longitude) using the Nominatim API. For weather data, the agent either calls the National Weather Service (NWS) API (for locations in the USA) or the Met.no API (for locations outside the USA). This tool is executed by the agent based on the suggestions provided by the language model.
* **HistoricalWeather**: A tool that retrieves historical weather data for a specified location and date range. The agent uses the Nominatim API to get the coordinates for the specified location and calls the Open-Meteo Historical Weather API to retrieve temperature data for past dates. This tool allows the agent to compare past weather conditions with current forecasts, providing richer insights.
### APIs Used
* Nominatim API: Provides geocoding services to convert city, state, and country into geographical coordinates (latitude and longitude).
* Endpoint: https://nominatim.openstreetmap.org/search.php
* Purpose: Used to fetch coordinates for a given location, which is then passed to weather APIs.
* National Weather Service (NWS) API: Provides weather data for locations within the United States.
* Endpoint: https://api.weather.gov
* Purpose: Used to retrieve detailed weather forecasts and temperature data for locations in the USA.
* Met.no API: Provides weather data for locations outside the United States.
* Endpoint: https://api.met.no/weatherapi
* Purpose: Used to retrieve weather forecasts and temperature data for locations outside the USA, offering international coverage.
* Open-Meteo Historical Weather API: Provides historical weather data for any location worldwide.
* Endpoint: https://archive-api.open-meteo.com/v1/archive
* Purpose: Used to retrieve historical weather data, including temperature readings for past dates, allowing the agent to analyze past weather conditions and trends.
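For orientation, here is a minimal sketch of the geocode-then-forecast chain these APIs form, a stripped-down version of the `WeatherForecast` tool below; the `User-Agent` string is a placeholder and error handling is omitted:

```python
import requests

HEADERS = {"User-Agent": "weather-agent-example"}  # Nominatim requires a User-Agent

# 1. Nominatim: geocode city/country to latitude and longitude.
geo = requests.get(
    "https://nominatim.openstreetmap.org/search",
    params={"city": "Seattle", "country": "usa", "limit": 1, "format": "jsonv2"},
    headers=HEADERS,
).json()
lat, lon = geo[0]["lat"], geo[0]["lon"]

# 2. NWS (USA locations): resolve the point to a forecast URL, then fetch it.
point = requests.get(
    f"https://api.weather.gov/points/{lat},{lon}", headers=HEADERS
).json()
forecast = requests.get(point["properties"]["forecast"], headers=HEADERS).json()
print(forecast["properties"]["periods"][0]["temperature"])
```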


@@ -1,223 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ReAct Weather Agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Modules"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import Agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Tools"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from tools import WeatherForecast, HistoricalWeather"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"weather_agent = Agent(\n",
" name=\"Weather Agent\",\n",
" role=\"Weather Expert\",\n",
" pattern=\"react\",\n",
" tools=[WeatherForecast(), HistoricalWeather()],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mwhat will be the difference of temperature in Paris between 7 days ago and 7 from now?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: For this, I need to gather two pieces of information: the historical temperature of Paris from 7 days ago and the forecasted temperature for Paris 7 days from now.\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\n",
"\u001b[38;2;217;95;118mI'll start by retrieving the historical temperature data for Paris from 7 days ago.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"Historicalweather\", \"arguments\": {\"city\": \"Paris\", \"state\": null, \"country\": \"France\", \"start_date\": \"2024-11-04\", \"end_date\": \"2024-11-04\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: {'city': 'Paris', 'state': None, 'country': 'France', 'start_date': '2024-11-04', 'end_date': '2024-11-04', 'temperature_data': {'2024-11-04T00:00': 6.8, '2024-11-04T01:00': 8.7, '2024-11-04T02:00': 8.7, '2024-11-04T03:00': 8.6, '2024-11-04T04:00': 7.9, '2024-11-04T05:00': 7.3, '2024-11-04T06:00': 7.0, '2024-11-04T07:00': 6.8, '2024-11-04T08:00': 6.9, '2024-11-04T09:00': 7.3, '2024-11-04T10:00': 8.0, '2024-11-04T11:00': 9.6, '2024-11-04T12:00': 11.3, '2024-11-04T13:00': 14.0, '2024-11-04T14:00': 14.5, '2024-11-04T15:00': 14.7, '2024-11-04T16:00': 12.6, '2024-11-04T17:00': 11.2, '2024-11-04T18:00': 9.8, '2024-11-04T19:00': 9.1, '2024-11-04T20:00': 8.7, '2024-11-04T21:00': 8.0, '2024-11-04T22:00': 8.0, '2024-11-04T23:00': 7.3}, 'unit': '°C'}\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: I have obtained the historical temperatures for Paris on November 4, 2024. Next, I need to obtain the forecasted temperature for Paris 7 days from now, which will be November 18, 2024.\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mAction: {\"name\": \"Weatherforecast\", \"arguments\": {\"city\": \"Paris\", \"state\": null, \"country\": \"France\"}}\u001b[0m\u001b[0m\n",
"\u001b[38;2;146;94;130mObservation: {'city': 'Paris', 'state': None, 'country': 'France', 'temperature': 7.0, 'unit': 'celsius'}\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118mThought: I now have sufficient information to calculate the temperature difference between 7 days ago and 7 days from now in Paris.\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\n",
"\u001b[38;2;217;95;118mAnswer: The average temperature on November 4, 2024, based on the historical data I retrieved, was approximately 9.3°C. The forecasted temperature for Paris on November 18, 2024, is 7.0°C. Therefore, the temperature difference is approximately 2.3°C, with the conditions expected to be cooler on November 18 compared to November 4.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe average temperature on November 4, 2024, based on the historical data I retrieved, was approximately 9.3°C. The forecasted temperature for Paris on November 18, 2024, is 7.0°C. Therefore, the temperature difference is approximately 2.3°C, with the conditions expected to be cooler on November 18 compared to November 4.\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The average temperature on November 4, 2024, based on the historical data I retrieved, was approximately 9.3°C. The forecasted temperature for Paris on November 18, 2024, is 7.0°C. Therefore, the temperature difference is approximately 2.3°C, with the conditions expected to be cooler on November 18 compared to November 4.'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await weather_agent.run(\"what will be the difference of temperature in Paris between 7 days ago and 7 from now?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'content': 'what will be the difference of temperature in Paris between 7 days ago and 7 from now?',\n",
" 'role': 'user'},\n",
" {'content': 'The average temperature on November 4, 2024, based on the historical data I retrieved, was approximately 9.3°C. The forecasted temperature for Paris on November 18, 2024, is 7.0°C. Therefore, the temperature difference is approximately 2.3°C, with the conditions expected to be cooler on November 18 compared to November 4.',\n",
" 'role': 'assistant'}]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"weather_agent.chat_history"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"await weather_agent.run(\"What was the weather like in Paris two days ago?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,264 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ToolCall Weather Agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Modules"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import Agent"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Tools"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from tools import WeatherForecast, HistoricalWeather"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"weather_agent = Agent(\n",
" name=\"Weather Agent\",\n",
" role=\"Weather Expert\",\n",
" tools=[WeatherForecast(),HistoricalWeather()],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mwhat is the weather in Paris?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118massistant(tool_call):\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mFunction name: Weatherforecast (Call Id: call_qyfgmgDAJSrRM58Hb83AtdDh)\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mArguments: {\"city\":\"Paris\",\"country\":\"france\"}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mtool(Id: call_qyfgmgDAJSrRM58Hb83AtdDh):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126m{'city': 'Paris', 'state': None, 'country': 'france', 'temperature': 4.6, 'unit': 'celsius'}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current temperature in Paris, France is 4.6°C.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current temperature in Paris, France is 4.6°C.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await weather_agent.run(\"what is the weather in Paris?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mwhat was the weather like in Paris two days ago?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;217;95;118massistant(tool_call):\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mFunction name: Historicalweather (Call Id: call_VANaENO9iXLhOuWKOAnV769o)\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mArguments: {\"city\":\"Paris\",\"country\":\"france\",\"start_date\":\"2024-11-25\",\"end_date\":\"2024-11-25\"}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mtool(Id: call_VANaENO9iXLhOuWKOAnV769o):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126m{'city': 'Paris', 'state': None, 'country': 'france', 'start_date': '2024-11-25', 'end_date': '2024-11-25', 'temperature_data': {'2024-11-25T00:00': 16.9, '2024-11-25T01:00': 17.0, '2024-11-25T02:00': 17.4, '2024-11-25T03:00': 17.7, '2024-11-25T04:00': 17.8, '2024-11-25T05:00': 17.6, '2024-11-25T06:00': 16.8, '2024-11-25T07:00': 15.5, '2024-11-25T08:00': 14.6, '2024-11-25T09:00': 14.2, '2024-11-25T10:00': 13.5, '2024-11-25T11:00': 12.2, '2024-11-25T12:00': 11.1, '2024-11-25T13:00': 9.8, '2024-11-25T14:00': 9.9, '2024-11-25T15:00': 10.0, '2024-11-25T16:00': 9.8, '2024-11-25T17:00': 9.3, '2024-11-25T18:00': 9.1, '2024-11-25T19:00': 8.7, '2024-11-25T20:00': 8.4, '2024-11-25T21:00': 8.4, '2024-11-25T22:00': 8.6, '2024-11-25T23:00': 8.2}, 'unit': '°C'}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mOn November 25, 2024, the temperature in Paris was as follows:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\n",
"\u001b[38;2;147;191;183m- Midnight: 16.9°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 01:00: 17.0°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 02:00: 17.4°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 03:00: 17.7°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 04:00: 17.8°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 05:00: 17.6°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 06:00: 16.8°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 07:00: 15.5°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 08:00: 14.6°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 09:00: 14.2°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 10:00: 13.5°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 11:00: 12.2°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 12:00: 11.1°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 13:00: 9.8°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 14:00: 9.9°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 15:00: 10.0°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 16:00: 9.8°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 17:00: 9.3°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 18:00: 9.1°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 19:00: 8.7°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 20:00: 8.4°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 21:00: 8.4°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 22:00: 8.6°C\u001b[0m\n",
"\u001b[38;2;147;191;183m- 23:00: 8.2°C\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\n",
"\u001b[38;2;147;191;183mThe day started relatively warm in the early hours and cooled down throughout the day and into the evening.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'On November 25, 2024, the temperature in Paris was as follows:\\n\\n- Midnight: 16.9°C\\n- 01:00: 17.0°C\\n- 02:00: 17.4°C\\n- 03:00: 17.7°C\\n- 04:00: 17.8°C\\n- 05:00: 17.6°C\\n- 06:00: 16.8°C\\n- 07:00: 15.5°C\\n- 08:00: 14.6°C\\n- 09:00: 14.2°C\\n- 10:00: 13.5°C\\n- 11:00: 12.2°C\\n- 12:00: 11.1°C\\n- 13:00: 9.8°C\\n- 14:00: 9.9°C\\n- 15:00: 10.0°C\\n- 16:00: 9.8°C\\n- 17:00: 9.3°C\\n- 18:00: 9.1°C\\n- 19:00: 8.7°C\\n- 20:00: 8.4°C\\n- 21:00: 8.4°C\\n- 22:00: 8.6°C\\n- 23:00: 8.2°C\\n\\nThe day started relatively warm in the early hours and cooled down throughout the day and into the evening.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await weather_agent.run(\"what was the weather like in Paris two days ago?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -1,219 +0,0 @@
from typing import Optional
from dapr_agents import AgentTool
from datetime import datetime
import requests
import time
class WeatherForecast(AgentTool):
name: str = "WeatherForecast"
description: str = "A tool for retrieving the weather/temperature for a given city."
# Default user agent
user_agent: str = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15"
def handle_error(self, response: requests.Response, url: str, stage: str) -> None:
"""Handles error responses and raises a ValueError with detailed information."""
if response.status_code != 200:
raise ValueError(
f"Failed to get data during {stage}. Status: {response.status_code}. "
f"URL: {url}. Response: {response.text}"
)
if not response.json():
raise ValueError(
f"No data found during {stage}. URL: {url}. Response: {response.text}"
)
def _run(
self, city: str, state: Optional[str] = None, country: Optional[str] = "usa"
) -> dict:
"""
Retrieves weather data by first fetching geocode data for the city and then fetching weather data.
Args:
city (str): The name of the city to get weather for.
state (Optional[str]): The two-letter state abbreviation (optional).
country (Optional[str]): The two-letter country abbreviation. Defaults to 'usa'.
Returns:
dict: A dictionary containing the city, state, country, and current temperature.
"""
headers = {"User-Agent": self.user_agent}
# Construct the geocode URL, conditionally including the state if it's provided
geocode_url = (
f"https://nominatim.openstreetmap.org/search?city={city}&country={country}"
)
if state:
geocode_url += f"&state={state}"
geocode_url += "&limit=1&format=jsonv2"
# Geocode request
geocode_response = requests.get(geocode_url, headers=headers)
self.handle_error(geocode_response, geocode_url, "geocode lookup")
# Add delay between requests
time.sleep(2)
geocode_data = geocode_response.json()
lat, lon = geocode_data[0]["lat"], geocode_data[0]["lon"]
# Use different APIs based on the country
if country.lower() == "usa":
# Weather.gov request for USA
weather_gov_url = f"https://api.weather.gov/points/{lat},{lon}"
weather_response = requests.get(weather_gov_url, headers=headers)
self.handle_error(weather_response, weather_gov_url, "weather lookup")
# Add delay between requests
time.sleep(2)
weather_data = weather_response.json()
forecast_url = weather_data["properties"]["forecast"]
# Forecast request
forecast_response = requests.get(forecast_url, headers=headers)
self.handle_error(forecast_response, forecast_url, "forecast lookup")
forecast_data = forecast_response.json()
today_forecast = forecast_data["properties"]["periods"][0]
# Return the weather data along with the city, state, and country
return {
"city": city,
"state": state,
"country": country,
"temperature": today_forecast["temperature"],
"unit": "Fahrenheit",
}
else:
# Met.no API for non-USA countries
met_no_url = f"https://api.met.no/weatherapi/locationforecast/2.0/compact?lat={lat}&lon={lon}"
weather_response = requests.get(met_no_url, headers=headers)
self.handle_error(weather_response, met_no_url, "Met.no weather lookup")
weather_data = weather_response.json()
temperature_unit = weather_data["properties"]["meta"]["units"][
"air_temperature"
]
today_forecast = weather_data["properties"]["timeseries"][0]["data"][
"instant"
]["details"]["air_temperature"]
# Return the weather data along with the city, state, and country
return {
"city": city,
"state": state,
"country": country,
"temperature": today_forecast,
"unit": temperature_unit,
}

class HistoricalWeather(AgentTool):
    name: str = "HistoricalWeather"
    description: str = (
        "A tool for retrieving historical weather data (temperature) for a given city."
    )
    # Default user agent
    user_agent: str = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15"

    def handle_error(self, response: requests.Response, url: str, stage: str) -> None:
        """Handles error responses and raises a ValueError with detailed information."""
        if response.status_code != 200:
            raise ValueError(
                f"Failed to get data during {stage}. Status: {response.status_code}. "
                f"URL: {url}. Response: {response.text}"
            )
        if not response.json():
            raise ValueError(
                f"No data found during {stage}. URL: {url}. Response: {response.text}"
            )

    def _run(
        self,
        city: str,
        state: Optional[str] = None,
        country: Optional[str] = "usa",
        start_date: Optional[str] = None,
        end_date: Optional[str] = None,
    ) -> dict:
        """
        Retrieves historical weather data for the city by first fetching geocode data and then historical weather data.

        Args:
            city (str): The name of the city to get weather for.
            state (Optional[str]): The two-letter state abbreviation (optional).
            country (Optional[str]): The two-letter country abbreviation. Defaults to 'usa'.
            start_date (Optional[str]): Start date for historical data (YYYY-MM-DD format).
            end_date (Optional[str]): End date for historical data (YYYY-MM-DD format).

        Returns:
            dict: A dictionary containing the city, state, country, and historical temperature data.
        """
        headers = {"User-Agent": self.user_agent}
        # Validate dates; both are optional in the signature, so guard against None
        # before comparing, otherwise the `>=` checks below raise a TypeError.
        if not start_date or not end_date:
            raise ValueError(
                "Both start_date and end_date must be provided in YYYY-MM-DD format."
            )
        current_date = datetime.now().strftime("%Y-%m-%d")
        if start_date >= current_date or end_date >= current_date:
            raise ValueError(
                "Both start_date and end_date must be earlier than the current date."
            )
        if (
            datetime.strptime(end_date, "%Y-%m-%d")
            - datetime.strptime(start_date, "%Y-%m-%d")
        ).days > 30:
            raise ValueError(
                "The time span between start_date and end_date cannot exceed 30 days."
            )
        # Construct the geocode URL, conditionally including the state if it's provided
        geocode_url = (
            f"https://nominatim.openstreetmap.org/search?city={city}&country={country}"
        )
        if state:
            geocode_url += f"&state={state}"
        geocode_url += "&limit=1&format=jsonv2"
        # Geocode request
        geocode_response = requests.get(geocode_url, headers=headers)
        self.handle_error(geocode_response, geocode_url, "geocode lookup")
        # Add delay between requests to respect the geocoding service's rate limits
        time.sleep(2)
        geocode_data = geocode_response.json()
        lat, lon = geocode_data[0]["lat"], geocode_data[0]["lon"]
        # Historical weather request
        historical_weather_url = f"https://archive-api.open-meteo.com/v1/archive?latitude={lat}&longitude={lon}&start_date={start_date}&end_date={end_date}&hourly=temperature_2m"
        weather_response = requests.get(historical_weather_url, headers=headers)
        self.handle_error(
            weather_response, historical_weather_url, "historical weather lookup"
        )
        weather_data = weather_response.json()
        # Extract time and temperature data
        timestamps = weather_data["hourly"]["time"]
        temperatures = weather_data["hourly"]["temperature_2m"]
        temperature_unit = weather_data["hourly_units"]["temperature_2m"]
        # Combine timestamps and temperatures into a dictionary
        temperature_data = {
            timestamps[i]: temperatures[i] for i in range(len(timestamps))
        }
        # Return the structured weather data along with the city, state, country
        return {
            "city": city,
            "state": state,
            "country": country,
            "start_date": start_date,
            "end_date": end_date,
            "temperature_data": temperature_data,
            "unit": temperature_unit,
        }
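
A minimal usage sketch for the class above — hypothetical, since the agent wiring that normally invokes the tool lives elsewhere in the framework; it assumes the `AgentTool` subclass can be instantiated with its declared defaults and that `_run` may be called directly for testing:

if __name__ == "__main__":
    # Hypothetical direct call; an agent would normally invoke this tool for you.
    tool = HistoricalWeather()
    result = tool._run(
        city="Seattle",
        state="WA",
        start_date="2024-01-01",
        end_date="2024-01-15",
    )
    print(result["unit"], "-", len(result["temperature_data"]), "hourly readings")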

File diff suppressed because one or more lines are too long

View File

@ -1,502 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "39c2dcc0",
"metadata": {},
"source": [
"# Executor: LocalCodeExecutorBasic Examples\n",
"\n",
"This notebook shows how to execute Python and shell snippets in **isolated, cached virtual environments**"
]
},
{
"cell_type": "markdown",
"id": "c4ff4b2b",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5b41a66a",
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents"
]
},
{
"cell_type": "markdown",
"id": "a9c01be3",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "508fd446",
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"from dapr_agents.executors.local import LocalCodeExecutor\n",
"from dapr_agents.types.executor import CodeSnippet, ExecutionRequest\n",
"from rich.console import Console\n",
"from rich.ansi import AnsiDecoder\n",
"import shutil"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "27594072",
"metadata": {},
"outputs": [],
"source": [
"logging.basicConfig(level=logging.INFO)\n",
"\n",
"executor = LocalCodeExecutor()\n",
"console = Console()\n",
"decoder = AnsiDecoder()"
]
},
{
"cell_type": "markdown",
"id": "4d663475",
"metadata": {},
"source": [
"## Running a basic Python Code Snippet"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ba45ddc8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.executors.local:Sandbox backend enabled: seatbelt\n",
"INFO:dapr_agents.executors.local:Created a new virtual environment\n",
"INFO:dapr_agents.executors.local:Installing print, rich\n",
"INFO:dapr_agents.executors.local:Snippet 1 finished in 2.442s\n"
]
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">Hello executor!</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;32mHello executor!\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"code = \"\"\"\n",
"from rich import print\n",
"print(\"[bold green]Hello executor![/bold green]\")\n",
"\"\"\"\n",
"\n",
"request = ExecutionRequest(snippets=[\n",
" CodeSnippet(language='python', code=code, timeout=10)\n",
"])\n",
"\n",
"results = await executor.execute(request)\n",
"results[0] # raw result\n",
"\n",
"# prettyprint with Rich\n",
"console.print(*decoder.decode(results[0].output))"
]
},
{
"cell_type": "markdown",
"id": "d28c7531",
"metadata": {},
"source": [
"## Run a Shell Snipper"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4ea89b85",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.executors.local:Sandbox backend enabled: seatbelt\n",
"INFO:dapr_agents.executors.local:Snippet 1 finished in 0.019s\n"
]
},
{
"data": {
"text/plain": [
"[ExecutionResult(status='success', output='4\\n', exit_code=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"shell_request = ExecutionRequest(snippets=[\n",
" CodeSnippet(language='sh', code='echo $((2+2))', timeout=5)\n",
"])\n",
"\n",
"await executor.execute(shell_request)"
]
},
{
"cell_type": "markdown",
"id": "da281b6e",
"metadata": {},
"source": [
"## Reuse the cached virtual environment"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3e9e7e9b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.executors.local:Sandbox backend enabled: seatbelt\n",
"INFO:dapr_agents.executors.local:Reusing cached virtual environment.\n",
"INFO:dapr_agents.executors.local:Installing print, rich\n",
"INFO:dapr_agents.executors.local:Snippet 1 finished in 0.297s\n"
]
},
{
"data": {
"text/plain": [
"[ExecutionResult(status='success', output='\\x1b[1;32mHello executor!\\x1b[0m\\n', exit_code=0)]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Rerunning the same Python request will reuse the cached venv, so it is faster\n",
"await executor.execute(request)"
]
},
{
"cell_type": "markdown",
"id": "14dc3e4c",
"metadata": {},
"source": [
"## Inject Helper Functions"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "82f9a168",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.executors.local:Sandbox backend enabled: seatbelt\n",
"INFO:dapr_agents.executors.local:Created a new virtual environment\n",
"INFO:dapr_agents.executors.local:Snippet 1 finished in 1.408s\n"
]
},
{
"data": {
"text/plain": [
"[ExecutionResult(status='success', output='42\\n', exit_code=0)]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def fancy_sum(a: int, b: int) -> int:\n",
" return a + b\n",
"\n",
"executor.user_functions.append(fancy_sum)\n",
"\n",
"helper_request = ExecutionRequest(snippets=[\n",
" CodeSnippet(language='python', code='print(fancy_sum(40, 2))', timeout=5)\n",
"])\n",
"\n",
"await executor.execute(helper_request)"
]
},
{
"cell_type": "markdown",
"id": "25f9718c",
"metadata": {},
"source": [
"## Clean Up"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "b09059f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cache directory removed ✅\n"
]
}
],
"source": [
"shutil.rmtree(executor.cache_dir, ignore_errors=True)\n",
"print(\"Cache directory removed ✅\")"
]
},
{
"cell_type": "markdown",
"id": "2c93cdef",
"metadata": {},
"source": [
"## Package-manager detection & automatic bootstrap"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8691f3e3",
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.executors.utils import package_manager as pm\n",
"import pathlib\n",
"import tempfile"
]
},
{
"cell_type": "markdown",
"id": "e9e08d81",
"metadata": {},
"source": [
"### Create a throw-away project"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "4c7dd9c3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tmp project: /var/folders/9z/8xhqw8x1611fcbhzl339yrs40000gn/T/tmpmssk0m2b\n"
]
}
],
"source": [
"tmp_proj = pathlib.Path(tempfile.mkdtemp())\n",
"(tmp_proj / \"requirements.txt\").write_text(\"rich==13.7.0\\n\")\n",
"print(\"tmp project:\", tmp_proj)"
]
},
{
"cell_type": "markdown",
"id": "03558a95",
"metadata": {},
"source": [
"### Show what the helper detects"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "3b5acbfb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"detect_package_managers -> [<PackageManagerType.PIP: 'pip'>]\n",
"get_install_command -> pip install -r requirements.txt\n"
]
}
],
"source": [
"print(\"detect_package_managers ->\",\n",
" [m.name for m in pm.detect_package_managers(tmp_proj)])\n",
"print(\"get_install_command ->\",\n",
" pm.get_install_command(tmp_proj))"
]
},
{
"cell_type": "markdown",
"id": "42f1ae7c",
"metadata": {},
"source": [
"### Point the executor at that directory"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "81e53cf4",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from contextlib import contextmanager, ExitStack\n",
"\n",
"@contextmanager\n",
"def chdir(path):\n",
" \"\"\"\n",
" Temporarily change the process CWD to *path*.\n",
"\n",
" Works on every CPython ≥ 3.6 (and PyPy) and restores the old directory\n",
" even if an exception is raised inside the block.\n",
" \"\"\"\n",
" old_cwd = os.getcwd()\n",
" os.chdir(path)\n",
" try:\n",
" yield\n",
" finally:\n",
" os.chdir(old_cwd)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "fb2f5052",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.executors.local:bootstrapping python project with 'pip install -r requirements.txt'\n",
"INFO:dapr_agents.executors.local:Sandbox backend enabled: seatbelt\n",
"INFO:dapr_agents.executors.local:Created a new virtual environment\n",
"INFO:dapr_agents.executors.local:Snippet 1 finished in 1.433s\n"
]
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">bootstrap OK\n",
"\n",
"</pre>\n"
],
"text/plain": [
"bootstrap OK\n",
"\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"with ExitStack() as stack:\n",
" # keep a directory handle open (optional but handy if youll delete tmp_proj later)\n",
" stack.enter_context(os.scandir(tmp_proj))\n",
"\n",
" # <-- our portable replacement for contextlib.chdir()\n",
" stack.enter_context(chdir(tmp_proj))\n",
"\n",
" # run a trivial snippet; executor will bootstrap because it now “sees”\n",
" # requirements.txt in the current working directory\n",
" out = await executor.execute(\n",
" ExecutionRequest(snippets=[\n",
" CodeSnippet(language=\"python\", code=\"print('bootstrap OK')\", timeout=5)\n",
" ])\n",
" )\n",
" console.print(out[0].output)"
]
},
{
"cell_type": "markdown",
"id": "45de2386",
"metadata": {},
"source": [
"### Clean Up the throw-away project "
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "0c7aa010",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cache directory removed ✅\n",
"temporary project removed ✅\n"
]
}
],
"source": [
"shutil.rmtree(executor.cache_dir, ignore_errors=True)\n",
"print(\"Cache directory removed ✅\")\n",
"shutil.rmtree(tmp_proj, ignore_errors=True)\n",
"print(\"temporary project removed ✅\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36ea4010",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
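
Since `ExecutionRequest` takes a list of snippets, a single request can mix languages; a hedged sketch, assuming (as the type suggests) that results come back in snippet order:

import asyncio

from dapr_agents.executors.local import LocalCodeExecutor
from dapr_agents.types.executor import CodeSnippet, ExecutionRequest


async def main():
    executor = LocalCodeExecutor()
    # One request, two snippets: Python and shell in the same batch.
    request = ExecutionRequest(snippets=[
        CodeSnippet(language="python", code="print(2 + 2)", timeout=5),
        CodeSnippet(language="sh", code="echo hello", timeout=5),
    ])
    results = await executor.execute(request)
    for result in results:  # assumed to be in the same order as the snippets
        print(result.status, result.output.strip())


asyncio.run(main())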

View File

@ -1,462 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# GraphStore: Neo4j Database Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `Neo4jGraphStore` in `dapr-agents` for basic graph-based tasks. We will explore:\n",
"\n",
"* Initializing the `Neo4jGraphStore` class.\n",
"* Adding sample nodes.\n",
"* Adding one sample relationship.\n",
"* Querying graph database.\n",
"* Resseting database."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"\n",
"Ensure dapr_agents and neo4j are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv neo4j"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables\n",
"\n",
"Load your API keys or other configuration values using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # Load environment variables from a `.env` file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy Neo4j Graph Database as Docker Container"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#docker run \\\n",
"#--restart always \\\n",
"#--publish=7474:7474 --publish=7687:7687 \\\n",
"#--env NEO4J_AUTH=neo4j/graphwardog \\\n",
"#--volume=neo4j-data \\\n",
"#--name neo4j-apoc \\\n",
"#--env NEO4J_apoc_export_file_enabled=true \\\n",
"#--env NEO4J_apoc_import_file_enabled=true \\\n",
"#--env NEO4J_apoc_import_file_use__neo4j__config=true \\\n",
"#--env NEO4J_PLUGINS=\\[\\\"apoc\\\"\\] \\\n",
"#neo4j:latest"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Neo4jGraphStore\n",
"\n",
"Set the `NEO4J_URI`, `NEO4J_USERNAME` and `NEO4J_PASSWORD` variables in a `.env` file. The URI can be set to `bolt://localhost:7687`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.client:Successfully created the driver for URI: bolt://localhost:7687\n",
"INFO:dapr_agents.storage.graphstores.neo4j.base:Neo4jGraphStore initialized with database neo4j\n"
]
}
],
"source": [
"from dapr_agents.storage.graphstores.neo4j import Neo4jGraphStore\n",
"import os\n",
"\n",
"# Initialize Neo4jGraphStore\n",
"graph_store = Neo4jGraphStore(\n",
" uri=os.getenv(\"NEO4J_URI\"),\n",
" user=os.getenv(\"NEO4J_USERNAME\"),\n",
" password=os.getenv(\"NEO4J_PASSWORD\"),\n",
" database=\"neo4j\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.client:Connected to Neo4j Kernel version 5.15.0 (community edition)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Neo4j connection successful\n"
]
}
],
"source": [
"# Test the connection\n",
"assert graph_store.client.test_connection(), \"Connection to Neo4j failed\"\n",
"print(\"Neo4j connection successful\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add Sample Nodes\n",
"Create and add nodes to the graph store:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Processed batch 1/1\n",
"INFO:dapr_agents.storage.graphstores.neo4j.base:Nodes with label `Person` added successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes added successfully\n"
]
}
],
"source": [
"from dapr_agents.types import Node\n",
"\n",
"# Sample nodes\n",
"nodes = [\n",
" Node(\n",
" id=\"1\",\n",
" label=\"Person\",\n",
" properties={\"name\": \"Alice\", \"age\": 30},\n",
" additional_labels=[\"Employee\"]\n",
" ),\n",
" Node(\n",
" id=\"2\",\n",
" label=\"Person\",\n",
" properties={\"name\": \"Bob\", \"age\": 25},\n",
" additional_labels=[\"Contractor\"]\n",
" )\n",
"]\n",
"\n",
"# Add nodes\n",
"graph_store.add_nodes(nodes)\n",
"print(\"Nodes added successfully\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add Sample Relationship\n",
"Create and add a relationship to the graph store:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Processed batch 1/1\n",
"INFO:neo4j.notifications:Received notification from DBMS server: {severity: INFORMATION} {code: Neo.ClientNotification.Statement.CartesianProduct} {category: PERFORMANCE} {title: This query builds a cartesian product between disconnected patterns.} {description: If a part of a query contains multiple disconnected patterns, this will build a cartesian product between all those parts. This may produce a large amount of data and slow down query processing. While occasionally intended, it may often be possible to reformulate the query that avoids the use of this cross product, perhaps by adding a relationship between the different parts or by using OPTIONAL MATCH (identifier is: (b))} {position: line: 3, column: 25, offset: 45} for query: '\\n UNWIND $data AS rel\\n MATCH (a {id: rel.source_node_id}), (b {id: rel.target_node_id})\\n MERGE (a)-[r:`KNOWS`]->(b)\\n ON CREATE SET r.createdAt = rel.current_time\\n SET r.updatedAt = rel.current_time, r += rel.properties\\n RETURN r\\n '\n",
"INFO:dapr_agents.storage.graphstores.neo4j.base:Relationships of type `KNOWS` added successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Relationships added successfully\n"
]
}
],
"source": [
"from dapr_agents.types import Relationship\n",
"\n",
"# Sample relationships\n",
"relationships = [\n",
" Relationship(\n",
" source_node_id=\"1\",\n",
" target_node_id=\"2\",\n",
" type=\"KNOWS\",\n",
" properties={\"since\": \"2023\"}\n",
" )\n",
"]\n",
"\n",
"# Add relationships\n",
"graph_store.add_relationships(relationships)\n",
"print(\"Relationships added successfully\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query Graph"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Query executed successfully: MATCH (n) RETURN n | Time: 0.06 seconds | Results: 2\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes in the database:\n",
"{'n': {'createdAt': '2025-03-04T10:55:57.109885Z', 'name': 'Alice', 'id': '1', 'age': 30, 'updatedAt': '2025-03-04T10:55:57.109885Z'}}\n",
"{'n': {'createdAt': '2025-03-04T10:55:57.109885Z', 'name': 'Bob', 'id': '2', 'age': 25, 'updatedAt': '2025-03-04T10:55:57.109885Z'}}\n"
]
}
],
"source": [
"query = \"MATCH (n) RETURN n\"\n",
"results = graph_store.query(query)\n",
"print(\"Nodes in the database:\")\n",
"for record in results:\n",
" print(record)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Query executed successfully: \n",
"MATCH (a)-[r]->(b)\n",
"RETURN a.id AS source, b.id AS target, type(r) AS type, properties(r) AS properties\n",
" | Time: 0.07 seconds | Results: 1\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Relationships in the database:\n",
"{'source': '1', 'target': '2', 'type': 'KNOWS', 'properties': {'updatedAt': '2025-03-04T10:55:59.835379Z', 'createdAt': '2025-03-04T10:55:59.835379Z', 'since': '2023'}}\n"
]
}
],
"source": [
"query = \"\"\"\n",
"MATCH (a)-[r]->(b)\n",
"RETURN a.id AS source, b.id AS target, type(r) AS type, properties(r) AS properties\n",
"\"\"\"\n",
"results = graph_store.query(query)\n",
"print(\"Relationships in the database:\")\n",
"for record in results:\n",
" print(record)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Query executed successfully: \n",
"MATCH (n)-[r]->(m)\n",
"RETURN n, r, m\n",
" | Time: 0.05 seconds | Results: 1\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes and relationships in the database:\n",
"{'n': {'createdAt': '2025-03-04T10:55:57.109885Z', 'name': 'Alice', 'id': '1', 'age': 30, 'updatedAt': '2025-03-04T10:55:57.109885Z'}, 'r': ({'createdAt': '2025-03-04T10:55:57.109885Z', 'name': 'Alice', 'id': '1', 'age': 30, 'updatedAt': '2025-03-04T10:55:57.109885Z'}, 'KNOWS', {'createdAt': '2025-03-04T10:55:57.109885Z', 'name': 'Bob', 'id': '2', 'age': 25, 'updatedAt': '2025-03-04T10:55:57.109885Z'}), 'm': {'createdAt': '2025-03-04T10:55:57.109885Z', 'name': 'Bob', 'id': '2', 'age': 25, 'updatedAt': '2025-03-04T10:55:57.109885Z'}}\n"
]
}
],
"source": [
"query = \"\"\"\n",
"MATCH (n)-[r]->(m)\n",
"RETURN n, r, m\n",
"\"\"\"\n",
"results = graph_store.query(query)\n",
"print(\"Nodes and relationships in the database:\")\n",
"for record in results:\n",
" print(record)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reset Graph"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Database reset successfully\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Graph database has been reset.\n"
]
}
],
"source": [
"graph_store.reset()\n",
"print(\"Graph database has been reset.\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.storage.graphstores.neo4j.base:Query executed successfully: MATCH (n) RETURN n | Time: 0.01 seconds | Results: 0\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Nodes in the database:\n"
]
}
],
"source": [
"query = \"MATCH (n) RETURN n\"\n",
"results = graph_store.query(query)\n",
"print(\"Nodes in the database:\")\n",
"for record in results:\n",
" print(record)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
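
Beyond `MATCH (n) RETURN n`, any Cypher string can be passed to `query`; a small sketch filtering on a node property, using only the `graph_store` object and `query` call shown in the cells above:

# Find people older than 26 (Alice, in the sample data above).
query = """
MATCH (n:Person)
WHERE n.age > 26
RETURN n.name AS name, n.age AS age
"""
for record in graph_store.query(query):
    print(record)  # expected for the sample data: {'name': 'Alice', 'age': 30}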

View File

@ -1,286 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: Azure OpenAI Chat Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `OpenAIChatClient` in `dapr-agents` for basic tasks with the Azure OpenAI Chat API. We will explore:\n",
"\n",
"* Initializing the OpenAI Chat client.\n",
"* Generating responses to simple prompts.\n",
"* Using a `.prompty` file to provide context/history for enhanced generation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import OpenAIChatClient"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import OpenAIChatClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Chat Completion\n",
"\n",
"Initialize the `OpenAIChatClient` and generate a response to a simple prompt."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the client\n",
"import os\n",
"\n",
"llm = OpenAIChatClient(\n",
" #api_key=os.getenv(\"AZURE_OPENAI_API_KEY\") # or add AZURE_OPENAI_API_KEY environment variable to .env file\n",
" azure_endpoint=os.getenv(\"AZURE_OPENAI_ENDPOINT\"), # or add AZURE_OPENAI_ENDPOINT environment variable to .env file\n",
" azure_deployment=\"gpt-4o\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content='One famous dog is Lassie, the fictional Rough Collie from the \"Lassie\" television series and movies. Lassie is known for her intelligence, loyalty, and the ability to help her human companions out of tricky situations.', role='assistant'), logprobs=None)], created=1743846818, id='chatcmpl-BIuVWArM8Lzqug16s43O9M8BLaFkZ', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 48, 'prompt_tokens': 12, 'total_tokens': 60, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"# Generate a response\n",
"response = llm.generate('Name a famous dog!')\n",
"\n",
"# Display the response\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': 'One famous dog is Lassie, the fictional Rough Collie from the \"Lassie\" television series and movies. Lassie is known for her intelligence, loyalty, and the ability to help her human companions out of tricky situations.', 'role': 'assistant'}\n"
]
}
],
"source": [
"print(response.get_message())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using a Prompty File for Context\n",
"\n",
"Use a `.prompty` file to provide context for chat history or additional instructions."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAIChatClient.from_prompty('basic-azopenai-chat.prompty')"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['question'], pre_filled_variables={}, messages=[SystemMessage(content='You are an AI assistant who helps people find information.\\nAs the assistant, you answer questions briefly, succinctly.', role='system'), UserMessage(content='{{question}}', role='user')], template_format='jinja2')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content=\"I am an AI assistant and don't have a personal name, but you can call me Assistant.\", role='assistant'), logprobs=None)], created=1743846828, id='chatcmpl-BIuVgBC6I3w1TFn15pmuCBGu6VZQM', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 20, 'prompt_tokens': 39, 'total_tokens': 59, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.generate(input_data={\"question\":\"What is your name?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chat Completion with Messages"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the client\n",
"llm = OpenAIChatClient(\n",
" api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"), # or add AZURE_OPENAI_API_KEY environment variable to .env file\n",
" #azure_endpoint=os.getenv(\"AZURE_OPENAI_ENDPOINT\"), # or add AZURE_OPENAI_ENDPOINT environment variable to .env file\n",
" azure_deployment=\"gpt-4o\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': 'Hello! How can I assist you today?', 'role': 'assistant'}\n"
]
}
],
"source": [
"from dapr_agents.types import UserMessage\n",
"\n",
"# Generate a response using structured messages\n",
"response = llm.generate(messages=[UserMessage(\"hello\")])\n",
"\n",
"# Display the structured response\n",
"print(response.get_message())"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"llm.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
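
The two initialization cells above each pass one Azure parameter explicitly and leave the other to the environment; for completeness, a sketch passing both — same parameters as shown, nothing new assumed:

import os
from dapr_agents import OpenAIChatClient

# Both values read from .env; azure_deployment names the deployed model.
llm = OpenAIChatClient(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment="gpt-4o",
)
print(llm.generate("Name a famous dog!").get_message())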

View File

@ -1,23 +0,0 @@
---
name: Basic Prompt
description: A basic prompt that uses the Azure OpenAI chat API to answer questions
model:
api: chat
configuration:
type: azure_openai
azure_deployment: gpt-4o
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -1,23 +0,0 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: huggingface
name: microsoft/Phi-3-mini-4k-instruct
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -1,23 +0,0 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: nvidia
name: meta/llama3-8b-instruct
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -1,30 +0,0 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: openai
name: gpt-4o
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
chat_history:
type: list
default: []
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.
{% for item in chat_history %}
{{item.role}}:
{{item.content}}
{% endfor %}
user:
{{question}}
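
Unlike the other templates, this one declares a `chat_history` input and loops over it. A hedged sketch of how it might be driven — assuming `input_data` accepts every declared input the way it accepts `question` in the notebooks above, and that dict items satisfy the template's `item.role`/`item.content` lookups:

from dapr_agents import OpenAIChatClient

# File name is hypothetical; the diff does not show what this template is called.
llm = OpenAIChatClient.from_prompty("basic-openai-chat-history.prompty")

response = llm.generate(input_data={
    "question": "And what about cats?",
    "chat_history": [
        {"role": "user", "content": "Name a famous dog!"},
        {"role": "assistant", "content": "Lassie is one of the most famous dogs! 🐶"},
    ],
})
print(response.get_message())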

View File

@ -1,23 +0,0 @@
---
name: Basic Prompt
description: A basic prompt that uses the chat API to answer questions
model:
api: chat
configuration:
type: openai
name: gpt-4o
parameters:
max_tokens: 128
temperature: 0.2
inputs:
question:
type: string
sample:
"question": "Who is the most famous person in the world?"
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly.
user:
{{question}}

View File

@ -1,187 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: ElevenLabs Text-To-Speech Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `ElevenLabsSpeechClient` in dapr-agents for basic tasks with the [ElevenLabs Text-To-Speech Endpoint](https://elevenlabs.io/docs/api-reference/text-to-speech/convert). We will explore:\n",
"\n",
"* Initializing the `ElevenLabsSpeechClient`.\n",
"* Generating speech from text and saving it as an MP3 file.."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"\n",
"Ensure you have the required library installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv elevenlabs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize ElevenLabsSpeechClient\n",
"\n",
"Initialize the `ElevenLabsSpeechClient`. By default the voice is set to: `voice_id=EXAVITQu4vr4xnSDxMaL\",name=\"Sarah\"`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import ElevenLabsSpeechClient\n",
"\n",
"client = ElevenLabsSpeechClient(\n",
" model=\"eleven_multilingual_v2\", # Default model\n",
" voice=\"JBFqnCBsd6RMkjVDRZzb\" # 'name': 'George', 'language': 'en', 'labels': {'accent': 'British', 'description': 'warm', 'age': 'middle aged', 'gender': 'male', 'use_case': 'narration'}\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Speech from Text\n",
"\n",
"### Manual File Creation\n",
"\n",
"This section demonstrates how to generate speech from a given text input and save it as an MP3 file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the text to convert to speech\n",
"text = \"Hello Roberto! This is an example of text-to-speech generation.\"\n",
"\n",
"# Create speech from text\n",
"audio_bytes = client.create_speech(\n",
" text=text,\n",
" output_format=\"mp3_44100_128\" # default output format, mp3 with 44.1kHz sample rate at 128kbps.\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Save the audio to an MP3 file\n",
"output_path = \"output_speech.mp3\"\n",
"with open(output_path, \"wb\") as audio_file:\n",
" audio_file.write(audio_bytes)\n",
"\n",
"print(f\"Audio saved to {output_path}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Automatic File Creation\n",
"\n",
"The audio file is saved directly by providing the file_name parameter."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the text to convert to speech\n",
"text = \"Hello Roberto! This is another example of text-to-speech generation.\"\n",
"\n",
"# Create speech from text\n",
"client.create_speech(\n",
" text=text,\n",
" output_format=\"mp3_44100_128\", # default output format, mp3 with 44.1kHz sample rate at 128kbps.,\n",
" file_name='output_speech_auto.mp3'\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
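
To batch several lines into separate files, the `file_name` parameter shown above can be driven from a loop; a small sketch reusing the `client` from the cells above, with only the demonstrated parameters:

lines = ["First take.", "Second take."]
for i, line in enumerate(lines, start=1):
    # Each call writes its own MP3, as in the automatic-file-creation cell.
    client.create_speech(
        text=line,
        output_format="mp3_44100_128",
        file_name=f"take_{i}.mp3",
    )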

View File

@ -1,342 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: Hugging Face Chat Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `HFHubChatClient` in `dapr-agents` for basic tasks with the Hugging Face Chat API. We will explore:\n",
"\n",
"* Initializing the Hugging Face Chat client.\n",
"* Generating responses to simple prompts.\n",
"* Using a `.prompty` file to provide context/history for enhanced generation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import HFHubChatClient"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import HFHubChatClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Chat Completion\n",
"\n",
"Initialize the `HFHubChatClient`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"llm = HFHubChatClient(\n",
" api_key=os.getenv(\"HUGGINGFACE_API_KEY\"),\n",
" model=\"microsoft/Phi-3-mini-4k-instruct\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Generate a response to a simple prompt"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.huggingface.chat:Invoking Hugging Face ChatCompletion API.\n",
"INFO:dapr_agents.llm.huggingface.chat:Chat completion retrieved successfully.\n"
]
}
],
"source": [
"# Generate a response\n",
"response = llm.generate('Name a famous dog!')"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content='A famous dog is Lassie. Lassie was a fictional collie first introduced in the 1943 film \"Lassie Come Home.\" She went on to have her own television series that aired from 1954 to 1973, in which she starred as Rin Tin Tin Jr. Her adventurous and heroic stories captured the hearts of audiences worldwide, and she became an iconic figure in the world of television.', role='assistant'), logprobs=None)], created=1741085108, id='', model='microsoft/Phi-3-mini-4k-instruct', object='chat.completion', usage={'completion_tokens': 105, 'prompt_tokens': 8, 'total_tokens': 113})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Display the response\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'content': 'A famous dog is Lassie. Lassie was a fictional collie first introduced in the 1943 film \"Lassie Come Home.\" She went on to have her own television series that aired from 1954 to 1973, in which she starred as Rin Tin Tin Jr. Her adventurous and heroic stories captured the hearts of audiences worldwide, and she became an iconic figure in the world of television.',\n",
" 'role': 'assistant'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.get_message()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'A famous dog is Lassie. Lassie was a fictional collie first introduced in the 1943 film \"Lassie Come Home.\" She went on to have her own television series that aired from 1954 to 1973, in which she starred as Rin Tin Tin Jr. Her adventurous and heroic stories captured the hearts of audiences worldwide, and she became an iconic figure in the world of television.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.get_content()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using a Prompty File for Context\n",
"\n",
"Use a `.prompty` file to provide context for chat history or additional instructions."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"llm = HFHubChatClient.from_prompty('basic-hf-chat.prompty')"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.huggingface.chat:Using prompt template to generate messages.\n",
"INFO:dapr_agents.llm.huggingface.chat:Invoking Hugging Face ChatCompletion API.\n",
"INFO:dapr_agents.llm.huggingface.chat:Chat completion retrieved successfully.\n"
]
},
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='length', index=0, message=MessageContent(content=\"I'm Phi and my purpose as Microsoft GPT-3 developed by MS Corporation in 2019 serves to assist users with a wide range of queries or tasks they may have at hand! How can i help today ? Let me know if theres anything specific that comes up for which assistance would be beneficial ! :) 😊✨ #AIAssistant#MicrosoftGptPhilosophyOfHelpfulness@MSCorporationTechnologyInnovationsAndEthicsAtTheCoreofOurDesignProcessesWeStriveToCreateAnExperience\", role='assistant'), logprobs=None)], created=1741085113, id='', model='microsoft/Phi-3-mini-4k-instruct', object='chat.completion', usage={'completion_tokens': 128, 'prompt_tokens': 36, 'total_tokens': 164})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.generate(input_data={\"question\":\"What is your name?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chat Completion with Messages"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.huggingface.chat:Invoking Hugging Face ChatCompletion API.\n",
"INFO:dapr_agents.llm.huggingface.chat:Chat completion retrieved successfully.\n"
]
}
],
"source": [
"from dapr_agents.types import UserMessage\n",
"\n",
"# Initialize the client\n",
"llm = HFHubChatClient()\n",
"\n",
"# Generate a response using structured messages\n",
"response = llm.generate(messages=[UserMessage(\"hello\")])"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': \"Hello! How can I assist you today? Whether you have a question, need help with a problem, or just want to chat, I'm here to help. 😊\", 'role': 'assistant'}\n"
]
}
],
"source": [
"# Display the structured response\n",
"print(response.get_message())"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"llm.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,257 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: NVIDIA Chat Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `NVIDIAChatClient` in `dapr-agents` for basic tasks with the NVIDIA Chat API. We will explore:\n",
"\n",
"* Initializing the `NVIDIAChatClient`.\n",
"* Generating responses to simple prompts.\n",
"* Using a `.prompty` file to provide context/history for enhanced generation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import NVIDIAChatClient"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/wardog/Documents/GitHub/dapr-agents/.venv/lib/python3.13/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from dapr_agents import NVIDIAChatClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Chat Completion\n",
"\n",
"Initialize the `OpenAIChatClient` and generate a response to a simple prompt."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the client\n",
"llm = NVIDIAChatClient()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content=\"That's an easy one! One of the most famous dogs is probably Laika, the Soviet space dog. She was the first living creature to orbit the Earth, launched into space on November 3, 1957, and paved the way for human spaceflight.\", role='assistant'), logprobs=None)], created=1741709966, id='cmpl-7c89ca25c9e140639fe179801738c8dd', model='meta/llama3-8b-instruct', object='chat.completion', usage={'completion_tokens': 55, 'prompt_tokens': 15, 'total_tokens': 70, 'completion_tokens_details': None, 'prompt_tokens_details': None})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Generate a response\n",
"response = llm.generate('Name a famous dog!')\n",
"\n",
"# Display the response\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': \"That's an easy one! One of the most famous dogs is probably Laika, the Soviet space dog. She was the first living creature to orbit the Earth, launched into space on November 3, 1957, and paved the way for human spaceflight.\", 'role': 'assistant'}\n"
]
}
],
"source": [
"print(response.get_message())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using a Prompty File for Context\n",
"\n",
"Use a `.prompty` file to provide context for chat history or additional instructions."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"llm = NVIDIAChatClient.from_prompty('basic-nvidia-chat.prompty')"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content=\"I'm AI Assistant, nice to meet you!\", role='assistant'), logprobs=None)], created=1737847868, id='cmpl-abe14ae7edef456da870b7c473bffcc7', model='meta/llama3-8b-instruct', object='chat.completion', usage={'completion_tokens': 11, 'prompt_tokens': 43, 'total_tokens': 54, 'completion_tokens_details': None, 'prompt_tokens_details': None})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.generate(input_data={\"question\":\"What is your name?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chat Completion with Messages"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.types import UserMessage\n",
"\n",
"# Initialize the client\n",
"llm = NVIDIAChatClient()\n",
"\n",
"# Generate a response using structured messages\n",
"response = llm.generate(messages=[UserMessage(\"hello\")])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': \"Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?\", 'role': 'assistant'}\n"
]
}
],
"source": [
"# Display the structured response\n",
"print(response.get_message())"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"llm.prompt_template"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,234 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: NVIDIA Chat Completion with Structured Output\n",
"\n",
"This notebook demonstrates how to use the `NVIDIAChatClient` from `dapr_agents` to generate structured output using `Pydantic` models.\n",
"\n",
"We will:\n",
"\n",
"* Initialize the `NVIDIAChatClient` with the `meta/llama-3.1-8b-instruct` model.\n",
"* Define a Pydantic model to structure the response.\n",
"* Use the `response_model` parameter to get structured output from the LLM."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables\n",
"\n",
"Load your API keys or other configuration values using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # Load environment variables from a `.env` file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Libraries"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import NVIDIAChatClient\n",
"from dapr_agents.types import UserMessage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize LLM Client\n",
"\n",
"Create an instance of the `NVIDIAChatClient`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.nvidia.client:Initializing NVIDIA API client...\n"
]
}
],
"source": [
"llmClient = NVIDIAChatClient(\n",
" model=\"meta/llama-3.1-8b-instruct\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define the Pydantic Model\n",
"\n",
"Define a Pydantic model to represent the structured response from the LLM."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel\n",
"\n",
"class Dog(BaseModel):\n",
" name: str\n",
" breed: str\n",
" reason: str"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Structured Output (JSON)\n",
"\n",
"Use the generate method of the `NVIDIAChatClient` with the `response_model` parameter to enforce the structure of the response."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.utils.request:A response model has been passed to structure the response of the LLM.\n",
"INFO:dapr_agents.llm.utils.structure:Structured response enabled.\n",
"INFO:dapr_agents.llm.nvidia.chat:Invoking ChatCompletion API.\n",
"INFO:httpx:HTTP Request: POST https://integrate.api.nvidia.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.nvidia.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.\n",
"INFO:dapr_agents.llm.utils.response:Returning an instance of <class '__main__.Dog'>.\n"
]
}
],
"source": [
"response = llmClient.generate(\n",
" messages=[UserMessage(\"One famous dog in history.\")],\n",
" response_model=Dog\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dog(name='Laika', breed='Soviet space dog (mixed breeds)', reason='First animal in space')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
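
Because the call returns a validated `Dog` instance rather than raw JSON, the fields are ordinary attributes; a short follow-on sketch (assuming Pydantic v2 for `model_dump`):

# `response` is the validated Dog instance returned above.
print(response.name)    # e.g. 'Laika'
print(response.reason)

# Convert back to a plain dict when needed (Pydantic v2; on v1 use .dict()).
print(response.model_dump())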

View File

@ -1,260 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: NVIDIA Embeddings Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `NVIDIAEmbedder` in `dapr-agents` for generating text embeddings. We will explore:\n",
"\n",
"* Initializing the `NVIDIAEmbedder`.\n",
"* Generating embeddings for single and multiple inputs.\n",
"* Using the class both as a direct function and via its `embed` method."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import NVIDIAEmbedder"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.document.embedder import NVIDIAEmbedder"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize the NVIDIAEmbedder\n",
"\n",
"To start, create an instance of the `NVIDIAEmbedder` class."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the embedder\n",
"embedder = NVIDIAEmbedder(\n",
" model=\"nvidia/nv-embedqa-e5-v5\", # Default embedding model\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding a Single Text\n",
"\n",
"You can use the embed method to generate an embedding for a single input string."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Embedding (first 5 values): [-0.007270217100869654, -0.03521439888521964, 0.008612880489907491, 0.03619088134997443, 0.03658757735128107]\n"
]
}
],
"source": [
"# Input text\n",
"text = \"The quick brown fox jumps over the lazy dog.\"\n",
"\n",
"# Generate embedding\n",
"embedding = embedder.embed(text)\n",
"\n",
"# Display the embedding\n",
"print(f\"Embedding (first 5 values): {embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding Multiple Texts\n",
"\n",
"The embed method also supports embedding multiple texts at once."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Text 1 embedding (first 5 values): [-0.007270217100869654, -0.03521439888521964, 0.008612880489907491, 0.03619088134997443, 0.03658757735128107]\n",
"Text 2 embedding (first 5 values): [0.03491632278487177, -0.045598764196327295, 0.014955417976037734, 0.049291836798573345, 0.03741906620126992]\n"
]
}
],
"source": [
"# Input texts\n",
"texts = [\n",
" \"The quick brown fox jumps over the lazy dog.\",\n",
" \"A journey of a thousand miles begins with a single step.\"\n",
"]\n",
"\n",
"# Generate embeddings\n",
"embeddings = embedder.embed(texts)\n",
"\n",
"# Display the embeddings\n",
"for i, emb in enumerate(embeddings):\n",
" print(f\"Text {i + 1} embedding (first 5 values): {emb[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the NVIDIAEmbedder as a Callable Function\n",
"\n",
"The `NVIDIAEmbedder` class can also be used directly as a function, thanks to its `__call__` implementation."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Embedding (first 5 values): [-0.005809799816153762, -0.08734154733463988, -0.017593431879252233, 0.027511671880565285, 0.001342777107870075]\n"
]
}
],
"source": [
"# Use the class instance as a callable\n",
"text_embedding = embedder(\"A stitch in time saves nine.\")\n",
"\n",
"# Display the embedding\n",
"print(f\"Embedding (first 5 values): {text_embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For multiple inputs:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Text 1 embedding (first 5 values): [0.021093917798446042, -0.04365205548745667, 0.02008726662368289, 0.024922242720651362, 0.024556187748010216]\n",
"Text 2 embedding (first 5 values): [-0.006683721130524534, -0.05764852452568794, 0.01164408689824411, 0.04627132894469238, 0.03458911471541276]\n"
]
}
],
"source": [
"text_list = [\"The early bird catches the worm.\", \"An apple a day keeps the doctor away.\"]\n",
"embeddings_list = embedder(text_list)\n",
"\n",
"# Display the embeddings\n",
"for i, emb in enumerate(embeddings_list):\n",
" print(f\"Text {i + 1} embedding (first 5 values): {emb[:5]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,453 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: OpenAI Audio Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `OpenAIAudioClient` in `dapr-agents` for basic tasks with the OpenAI Audio API. We will explore:\n",
"\n",
"* Generating speech from text and saving it as an MP3 file.\n",
"* Transcribing audio to text.\n",
"* Translating audio content to English."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"\n",
"Ensure you have the required library installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize OpenAIAudioClient"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import OpenAIAudioClient\n",
"\n",
"client = OpenAIAudioClient()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Speech from Text\n",
"\n",
"### Manual File Creation\n",
"\n",
"This section demonstrates how to generate speech from a given text input and save it as an MP3 file."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Audio saved to output_speech.mp3\n"
]
}
],
"source": [
"from dapr_agents.types.llm import AudioSpeechRequest\n",
"\n",
"# Define the text to convert to speech\n",
"text_to_speech = \"Hello Roberto! This is an example of text-to-speech generation.\"\n",
"\n",
"# Create a request for TTS\n",
"tts_request = AudioSpeechRequest(\n",
" model=\"tts-1\",\n",
" input=text_to_speech,\n",
" voice=\"fable\",\n",
" response_format=\"mp3\"\n",
")\n",
"\n",
"# Generate the audio\n",
"audio_bytes = client.create_speech(request=tts_request)\n",
"\n",
"# Save the audio to an MP3 file\n",
"output_path = \"output_speech.mp3\"\n",
"with open(output_path, \"wb\") as audio_file:\n",
" audio_file.write(audio_bytes)\n",
"\n",
"print(f\"Audio saved to {output_path}\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Automatic File Creation\n",
"\n",
"The audio file is saved directly by providing the file_name parameter."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.types.llm import AudioSpeechRequest\n",
"\n",
"# Define the text to convert to speech\n",
"text_to_speech = \"Hola Roberto! Este es otro ejemplo de generacion de voz desde texto.\"\n",
"\n",
"# Create a request for TTS\n",
"tts_request = AudioSpeechRequest(\n",
" model=\"tts-1\",\n",
" input=text_to_speech,\n",
" voice=\"echo\",\n",
" response_format=\"mp3\"\n",
")\n",
"\n",
"# Generate the audio\n",
"client.create_speech(request=tts_request, file_name=\"output_speech_spanish_auto.mp3\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Transcribe Audio to Text\n",
"\n",
"This section demonstrates how to transcribe audio content into text."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using a File Path"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transcription: Hello Roberto, this is an example of text-to-speech generation.\n"
]
}
],
"source": [
"from dapr_agents.types.llm import AudioTranscriptionRequest\n",
"\n",
"# Specify the audio file to transcribe\n",
"audio_file_path = \"output_speech.mp3\"\n",
"\n",
"# Create a transcription request\n",
"transcription_request = AudioTranscriptionRequest(\n",
" model=\"whisper-1\",\n",
" file=audio_file_path\n",
")\n",
"\n",
"# Generate transcription\n",
"transcription_response = client.create_transcription(request=transcription_request)\n",
"\n",
"# Display the transcription result\n",
"print(\"Transcription:\", transcription_response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using Audio Bytes"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transcription: Hola Roberto, este es otro ejemplo de generación de voz desde texto.\n"
]
}
],
"source": [
"# audio_bytes = open(\"output_speech_spanish_auto.mp3\", \"rb\")\n",
"\n",
"with open(\"output_speech_spanish_auto.mp3\", \"rb\") as f:\n",
" audio_bytes = f.read()\n",
"\n",
"transcription_request = AudioTranscriptionRequest(\n",
" model=\"whisper-1\",\n",
" file=audio_bytes, # File as bytes\n",
" language=\"en\" # Optional: Specify the language of the audio\n",
")\n",
"\n",
"# Generate transcription\n",
"transcription_response = client.create_transcription(request=transcription_request)\n",
"\n",
"# Display the transcription result\n",
"print(\"Transcription:\", transcription_response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using File-Like Objects (e.g., BufferedReader)\n",
"\n",
"You can use file-like objects, such as BufferedReader, directly for transcription or translation."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Transcription: ¡Hola, Roberto! Este es otro ejemplo de generación de voz desde texto.\n"
]
}
],
"source": [
"from io import BufferedReader\n",
"\n",
"# Open the audio file as a BufferedReader\n",
"audio_file_path = \"output_speech_spanish_auto.mp3\"\n",
"with open(audio_file_path, \"rb\") as f:\n",
" buffered_file = BufferedReader(f)\n",
"\n",
" # Create a transcription request\n",
" transcription_request = AudioTranscriptionRequest(\n",
" model=\"whisper-1\",\n",
" file=buffered_file, # File as BufferedReader\n",
" language=\"es\"\n",
" )\n",
"\n",
" # Generate transcription\n",
" transcription_response = client.create_transcription(request=transcription_request)\n",
"\n",
" # Display the transcription result\n",
" print(\"Transcription:\", transcription_response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Translate Audio to English\n",
"\n",
"This section demonstrates how to translate audio content into English."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using a File Path"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Translation: Hola Roberto, este es otro ejemplo de generación de voz desde texto.\n"
]
}
],
"source": [
"from dapr_agents.types.llm import AudioTranslationRequest\n",
"\n",
"# Specify the audio file to translate\n",
"audio_file_path = \"output_speech_spanish_auto.mp3\"\n",
"\n",
"# Create a translation request\n",
"translation_request = AudioTranslationRequest(\n",
" model=\"whisper-1\",\n",
" file=audio_file_path,\n",
" prompt=\"The following audio needs to be translated to English.\"\n",
")\n",
"\n",
"# Generate translation\n",
"translation_response = client.create_translation(request=translation_request)\n",
"\n",
"# Display the translation result\n",
"print(\"Translation:\", translation_response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using Audio Bytes"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Translation: Hola Roberto, este es otro ejemplo de generación de voz desde texto.\n"
]
}
],
"source": [
"# audio_bytes = open(\"output_speech_spanish_auto.mp3\", \"rb\")\n",
"\n",
"with open(\"output_speech_spanish_auto.mp3\", \"rb\") as f:\n",
" audio_bytes = f.read()\n",
"\n",
"translation_request = AudioTranslationRequest(\n",
" model=\"whisper-1\",\n",
" file=audio_bytes, # File as bytes\n",
" prompt=\"The following audio needs to be translated to English.\"\n",
")\n",
"\n",
"# Generate translation\n",
"translation_response = client.create_translation(request=translation_request)\n",
"\n",
"# Display the translation result\n",
"print(\"Translation:\", translation_response.text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using File-Like Objects (e.g., BufferedReader) for Translation\n",
"\n",
"You can use a file-like object, such as a BufferedReader, directly for translating audio content."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Translation: Hola Roberto, este es otro ejemplo de generación de voz desde texto.\n"
]
}
],
"source": [
"from io import BufferedReader\n",
"\n",
"# Open the audio file as a BufferedReader\n",
"audio_file_path = \"output_speech_spanish_auto.mp3\"\n",
"with open(audio_file_path, \"rb\") as f:\n",
" buffered_file = BufferedReader(f)\n",
"\n",
" # Create a translation request\n",
" translation_request = AudioTranslationRequest(\n",
" model=\"whisper-1\",\n",
" file=buffered_file, # File as BufferedReader\n",
" prompt=\"The following audio needs to be translated to English.\"\n",
" )\n",
"\n",
" # Generate translation\n",
" translation_response = client.create_translation(request=translation_request)\n",
"\n",
" # Display the translation result\n",
" print(\"Translation:\", translation_response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,275 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: OpenAI Chat Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `OpenAIChatClient` in `dapr-agents` for basic tasks with the OpenAI Chat API. We will explore:\n",
"\n",
"* Initializing the OpenAI Chat client.\n",
"* Generating responses to simple prompts.\n",
"* Using a `.prompty` file to provide context/history for enhanced generation."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import OpenAIChatClient"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import OpenAIChatClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Chat Completion\n",
"\n",
"Initialize the `OpenAIChatClient` and generate a response to a simple prompt."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the client\n",
"llm = OpenAIChatClient()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content='One famous dog is Lassie, the Rough Collie from the television series and films that became iconic for her intelligence and heroic adventures.', role='assistant'), logprobs=None)], created=1741085405, id='chatcmpl-B7K8brL19kn1KgDTG9on7n7ICnt3P', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 28, 'prompt_tokens': 12, 'total_tokens': 40, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Generate a response\n",
"response = llm.generate('Name a famous dog!')\n",
"\n",
"# Display the response\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': 'One famous dog is Lassie, the Rough Collie from the television series and films that became iconic for her intelligence and heroic adventures.', 'role': 'assistant'}\n"
]
}
],
"source": [
"print(response.get_message())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using a Prompty File for Context\n",
"\n",
"Use a `.prompty` file to provide context for chat history or additional instructions."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAIChatClient.from_prompty('basic-openai-chat-history.prompty')"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['chat_history', 'question'], pre_filled_variables={}, messages=[SystemMessage(content='You are an AI assistant who helps people find information.\\nAs the assistant, you answer questions briefly, succinctly, \\nand in a personable manner using markdown and even add some personal flair with appropriate emojis.\\n\\n{% for item in chat_history %}\\n{{item.role}}:\\n{{item.content}}\\n{% endfor %}', role='system'), UserMessage(content='{{question}}', role='user')], template_format='jinja2')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageContent(content=\"Hey there! I'm your friendly AI assistant. You can call me whatever you'd like, but I don't have a specific name. 😊 How can I help you today?\", role='assistant'), logprobs=None)], created=1741085407, id='chatcmpl-B7K8dI84xY2hjaEspDtJL5EICbSLh', model='gpt-4o-2024-08-06', object='chat.completion', usage={'completion_tokens': 34, 'prompt_tokens': 57, 'total_tokens': 91, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.generate(input_data={\"question\":\"What is your name?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chat Completion with Messages"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.types import UserMessage\n",
"\n",
"# Initialize the client\n",
"llm = OpenAIChatClient()\n",
"\n",
"# Generate a response using structured messages\n",
"response = llm.generate(messages=[UserMessage(\"hello\")])"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'content': 'Hello! How can I assist you today?', 'role': 'assistant'}\n"
]
}
],
"source": [
"# Display the structured response\n",
"print(response.get_message())"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"llm.prompt_template"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,226 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: OpenAI Chat Completion with Structured Output\n",
"\n",
"This notebook demonstrates how to use the `OpenAIChatClient` from `dapr-agents` to generate structured output using `Pydantic` models.\n",
"\n",
"We will:\n",
"\n",
"* Initialize the OpenAIChatClient.\n",
"* Define a Pydantic model to structure the response.\n",
"* Use the response_model parameter to get structured output from the LLM."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables\n",
"\n",
"Load your API keys or other configuration values using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # Load environment variables from a `.env` file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import dapr-agents Libraries\n",
"\n",
"Import the necessary classes and types from `dapr-agents`."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents import OpenAIChatClient\n",
"from dapr_agents.types import UserMessage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize LLM Client\n",
"\n",
"Create an instance of the `OpenAIChatClient`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n"
]
}
],
"source": [
"llmClient = OpenAIChatClient()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define the Pydantic Model\n",
"\n",
"Define a Pydantic model to represent the structured response from the LLM."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel\n",
"\n",
"class Dog(BaseModel):\n",
" name: str\n",
" breed: str\n",
" reason: str"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Structured Output (JSON)\n",
"\n",
"Use the generate method of the `OpenAIChatClient` with the `response_model` parameter to enforce the structure of the response."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n",
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.\n"
]
}
],
"source": [
"response = llmClient.generate(\n",
" messages=[UserMessage(\"One famous dog in history.\")],\n",
" response_format=Dog\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dog(name='Balto', breed='Siberian Husky', reason=\"Balto is famous for his role in the 1925 serum run to Nome, also known as the 'Great Race of Mercy.' This life-saving mission involved a relay of sled dog teams transporting diphtheria antitoxin across harsh Alaskan wilderness under treacherous winter conditions, preventing a potential epidemic. Balto led the final leg of the journey, becoming a symbol of bravery and teamwork.\")"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,262 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: OpenAI Embeddings Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `OpenAIEmbedder` in `dapr-agents` for generating text embeddings. We will explore:\n",
"\n",
"* Initializing the `OpenAIEmbedder`.\n",
"* Generating embeddings for single and multiple inputs.\n",
"* Using the class both as a direct function and via its `embed` method."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv tiktoken"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import OpenAIEmbedder"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.document.embedder import OpenAIEmbedder"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize the OpenAIEmbedder\n",
"\n",
"To start, create an instance of the `OpenAIEmbedder` class. You can customize its parameters if needed, such as the `model` or `chunk_size`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the embedder\n",
"embedder = OpenAIEmbedder(\n",
" model=\"text-embedding-ada-002\", # Default embedding model\n",
" chunk_size=1000, # Batch size for processing\n",
" max_tokens=8191 # Maximum tokens per input\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding a Single Text\n",
"\n",
"You can use the embed method to generate an embedding for a single input string."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Embedding (first 5 values): [0.0015723939, 0.005963983, -0.015102495, -0.008559333, -0.011583589]\n"
]
}
],
"source": [
"# Input text\n",
"text = \"The quick brown fox jumps over the lazy dog.\"\n",
"\n",
"# Generate embedding\n",
"embedding = embedder.embed(text)\n",
"\n",
"# Display the embedding\n",
"print(f\"Embedding (first 5 values): {embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding Multiple Texts\n",
"\n",
"The embed method also supports embedding multiple texts at once."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Text 1 embedding (first 5 values): [0.0015723939, 0.005963983, -0.015102495, -0.008559333, -0.011583589]\n",
"Text 2 embedding (first 5 values): [0.03261204, -0.020966679, 0.0026475298, -0.009384127, -0.007305047]\n"
]
}
],
"source": [
"# Input texts\n",
"texts = [\n",
" \"The quick brown fox jumps over the lazy dog.\",\n",
" \"A journey of a thousand miles begins with a single step.\"\n",
"]\n",
"\n",
"# Generate embeddings\n",
"embeddings = embedder.embed(texts)\n",
"\n",
"# Display the embeddings\n",
"for i, emb in enumerate(embeddings):\n",
" print(f\"Text {i + 1} embedding (first 5 values): {emb[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the OpenAIEmbedder as a Callable Function\n",
"\n",
"The OpenAIEmbedder class can also be used directly as a function, thanks to its `__call__` implementation."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Embedding (first 5 values): [-0.0022105372, -0.022207271, 0.017802631, -0.00742872, 0.007270942]\n"
]
}
],
"source": [
"# Use the class instance as a callable\n",
"text_embedding = embedder(\"A stitch in time saves nine.\")\n",
"\n",
"# Display the embedding\n",
"print(f\"Embedding (first 5 values): {text_embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For multiple inputs:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Text 1 embedding (first 5 values): [0.0038562817, -0.020030975, 0.01792581, -0.014723405, -0.014608578]\n",
"Text 2 embedding (first 5 values): [0.011255961, 0.004331666, 0.029073123, -0.01053614, 0.021288864]\n"
]
}
],
"source": [
"text_list = [\"The early bird catches the worm.\", \"An apple a day keeps the doctor away.\"]\n",
"embeddings_list = embedder(text_list)\n",
"\n",
"# Display the embeddings\n",
"for i, emb in enumerate(embeddings_list):\n",
" print(f\"Text {i + 1} embedding (first 5 values): {emb[:5]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,88 +0,0 @@
# 🧪 Basic MCP Agent Playground
This demo shows how to use a **lightweight agent** to call tools served via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction). The agent uses a simple pattern from `dapr_agents` — but **without running inside Dapr**.
It's a minimal, Python-based setup for:
- Exploring how MCP tools work
- Testing stdio and SSE transport
- Running tool-calling agents
- Experimenting **without** durable workflows or Dapr dependencies
> 🧠 Looking for something more robust?
> Check out the full `dapr_agents` repo to see how we run these agents inside Dapr workflows with durable task orchestration and state management.
---
## 🛠️ Project Structure
```text
.
├── tools.py # Registers two tools via FastMCP
├── server.py # Starts the MCP server in stdio or SSE mode
├── stdio.ipynb # Example using ToolCallingAgent over stdio
├── sse.ipynb # Example using ToolCallingAgent over SSE
├── requirements.txt
└── README.md
```
## Installation
Before running anything, make sure to install the dependencies:
```bash
pip install -r requirements.txt
```
## 🚀 Starting the MCP Tool Server
The server exposes two tools via MCP:
* `get_weather(location: str) → str`
* `jump(distance: str) → str`
Defined in `tools.py`, these tools are registered using FastMCP.
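For reference, the registration pattern in `tools.py` looks like this (a sketch with the tool bodies elided; see the full file for the implementations):
```python
# Sketch of tools.py: FastMCP registers each coroutine as an MCP tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("TestServer")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather information for a specific location."""
    ...

@mcp.tool()
async def jump(distance: str) -> str:
    """Simulate a jump of a given distance."""
    ...
```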
You can run the server in two modes:
### ▶️ 1. STDIO Mode
This runs inside the notebook. It's useful for quick tests because the MCP server doesn't need to be running in a separate terminal.
* This is used in `stdio.ipynb`
* The agent communicates with the tool server via stdio transport
### 🌐 2. SSE Mode (Starlette + Uvicorn)
This mode requires running the server outside the notebook (in a terminal).
```bash
python server.py --server_type sse --host 127.0.0.1 --port 8000
```
The server exposes:
* `/sse` for the SSE connection
* `/messages/` to receive tool calls
Used by `sse.ipynb`
📌 You can change the host and port using `--host` and `--port`.
## 📓 Notebooks
There are two notebooks in this repo that show basic agent behavior using MCP tools:
| Notebook | Description | Transport |
| --- | --- | --- |
| stdio.ipynb | Uses `ToolCallingAgent` via `mcp.run("stdio")` | STDIO |
| sse.ipynb | Uses `ToolCallingAgent` with an SSE tool server | SSE |
Each notebook runs a basic `ToolCallingAgent` using tools served via MCP. These agents are not managed by Dapr or durable workflows; everything runs as plain Python with async support, as shown in the sketch below.
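For orientation, here is a condensed sketch of that pattern, adapted from the notebook cells (stdio variant shown; top-level `await` assumes the notebook's event loop):
```python
from dapr_agents import Agent
from dapr_agents.tool.mcp.client import MCPClient

# Connect to the MCP tool server over stdio and load its tools.
client = MCPClient()
await client.connect_stdio(
    server_name="local",
    command="python",
    args=["server.py", "--server_type", "stdio"],
)
tools = client.get_all_tools()

# Hand the tools to a lightweight agent and run a task.
agent = Agent(name="Rob", role="Weather Assistant", tools=tools)
await agent.run("What is the weather in New York?")
```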
## 🔄 What's Next?
After testing these lightweight agents, you can try:
* Running the full dapr_agents workflow system
* Registering more complex MCP tools
* Using other agent types (Agent or DurableAgent)
* Testing stateful, durable workflows using Dapr + MCP tools

View File

@ -1,4 +0,0 @@
dapr-agents
python-dotenv
mcp
starlette

View File

@ -1,83 +0,0 @@
import argparse
import logging

import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Mount, Route

from mcp.server.sse import SseServerTransport

from tools import mcp

# ─────────────────────────────────────────────
# Logging Configuration
# ─────────────────────────────────────────────
logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("mcp-server")


# ─────────────────────────────────────────────
# Starlette App Factory
# ─────────────────────────────────────────────
def create_starlette_app():
    """
    Create a Starlette app wired with the MCP server over SSE transport.
    """
    logger.debug("Creating Starlette app with SSE transport")
    sse = SseServerTransport("/messages/")

    async def handle_sse(request: Request) -> None:
        logger.info("🔌 SSE connection established")
        async with sse.connect_sse(request.scope, request.receive, request._send) as (
            read_stream,
            write_stream,
        ):
            logger.debug("Starting MCP server run loop over SSE")
            await mcp._mcp_server.run(
                read_stream,
                write_stream,
                mcp._mcp_server.create_initialization_options(),
            )
        logger.debug("MCP run loop completed")

    return Starlette(
        debug=False,
        routes=[
            Route("/sse", endpoint=handle_sse),
            Mount("/messages/", app=sse.handle_post_message),
        ],
    )


# ─────────────────────────────────────────────
# CLI Entrypoint
# ─────────────────────────────────────────────
def main():
    parser = argparse.ArgumentParser(description="Run an MCP tool server.")
    parser.add_argument(
        "--server_type",
        choices=["stdio", "sse"],
        default="stdio",
        help="Transport to use",
    )
    parser.add_argument(
        "--host", default="127.0.0.1", help="Host to bind to (SSE only)"
    )
    parser.add_argument(
        "--port", type=int, default=8000, help="Port to bind to (SSE only)"
    )
    args = parser.parse_args()

    logger.info(f"🚀 Starting MCP server in {args.server_type.upper()} mode")

    if args.server_type == "stdio":
        mcp.run("stdio")
    else:
        app = create_starlette_app()
        logger.info(f"🌐 Running SSE server on {args.host}:{args.port}")
        uvicorn.run(app, host=args.host, port=args.port)


if __name__ == "__main__":
    main()

View File

@ -1,298 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Basic Weather Agent with MCP Support (SSE Transport)\n",
"\n",
"* Collaborator: Roberto Rodriguez @Cyb3rWard0g"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv mcp starlette"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # take environment variables from .env."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Connect to MCP Server and Get Tools"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.tool.mcp.client:Connecting to MCP server 'local' via SSE: http://localhost:8000/sse\n",
"INFO:mcp.client.sse:Connecting to SSE endpoint: http://localhost:8000/sse\n",
"INFO:httpx:HTTP Request: GET http://localhost:8000/sse \"HTTP/1.1 200 OK\"\n",
"INFO:mcp.client.sse:Received endpoint URL: http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc\n",
"INFO:mcp.client.sse:Starting post writer with endpoint URL: http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 2 tools from server 'local'\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=916bc6e1fb514b3e814e6a980ce20bbc \"HTTP/1.1 202 Accepted\"\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 0 prompts from server 'local': \n",
"INFO:dapr_agents.tool.mcp.client:Successfully connected to MCP server 'local'\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"🔧 Tools: ['LocalGetWeather', 'LocalJump']\n"
]
}
],
"source": [
"from dapr_agents.tool.mcp.client import MCPClient\n",
"\n",
"client = MCPClient()\n",
"\n",
"await client.connect_sse(\n",
" server_name=\"local\", # Unique name you assign to this server\n",
" url=\"http://localhost:8000/sse\", # MCP SSE endpoint\n",
" headers=None # Optional HTTP headers if needed\n",
")\n",
"\n",
"# See what tools were loaded\n",
"tools = client.get_all_tools()\n",
"print(\"🔧 Tools:\", [t.name for t in tools])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalGetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalJump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
]
}
],
"source": [
"from dapr_agents import Agent\n",
"\n",
"agent = Agent(\n",
" name=\"Rob\",\n",
" role= \"Weather Assistant\",\n",
" tools=tools\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in New York?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.toolcall.base:Executing LocalGetWeather with arguments {\"location\":\"New York\"}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): LocalGetWeather\n",
"INFO:dapr_agents.tool.mcp.client:[MCP] Executing tool 'get_weather' with args: {'location': 'New York'}\n",
"INFO:mcp.client.sse:Connecting to SSE endpoint: http://localhost:8000/sse\n",
"INFO:httpx:HTTP Request: GET http://localhost:8000/sse \"HTTP/1.1 200 OK\"\n",
"INFO:mcp.client.sse:Received endpoint URL: http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b\n",
"INFO:mcp.client.sse:Starting post writer with endpoint URL: http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b \"HTTP/1.1 202 Accepted\"\n",
"INFO:httpx:HTTP Request: POST http://localhost:8000/messages/?session_id=b47ef10b57dd471aac4c5d7aaeadbf5b \"HTTP/1.1 202 Accepted\"\n",
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118massistant:\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mFunction name: LocalGetWeather (Call Id: call_lBVZIV7seOsWttLnfZaLSwS3)\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mArguments: {\"location\":\"New York\"}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n",
"\u001b[38;2;191;69;126mLocalGetWeather(tool) (Id: call_lBVZIV7seOsWttLnfZaLSwS3):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mNew York: 65F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current weather in New York is 65°F. If you need more information, feel free to ask!\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current weather in New York is 65°F. If you need more information, feel free to ask!'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await agent.run(\"What is the weather in New York?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,296 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Basic Weather Agent with MCP Support (Stdio Transport)\n",
"\n",
"* Collaborator: Roberto Rodriguez @Cyb3rWard0g"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv mcp starlette"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import Environment Variables"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv() # take environment variables from .env."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Enable Logging"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Connect to MCP Server and Get Tools"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.tool.mcp.client:Connecting to MCP server 'local' via stdio: python ['server.py', '--server_type', 'stdio']\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 2 tools from server 'local'\n",
"INFO:dapr_agents.tool.mcp.client:Loaded 0 prompts from server 'local': \n",
"INFO:dapr_agents.tool.mcp.client:Successfully connected to MCP server 'local'\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"🔧 Tools: ['LocalGetWeather', 'LocalJump']\n"
]
}
],
"source": [
"from dapr_agents.tool.mcp.client import MCPClient\n",
"\n",
"client = MCPClient()\n",
"\n",
"# Connect to your test server\n",
"await client.connect_stdio(\n",
" server_name=\"local\",\n",
" command=\"python\",\n",
" args=[\"server.py\", \"--server_type\", \"stdio\"]\n",
")\n",
"\n",
"# Test tools\n",
"tools = client.get_all_tools()\n",
"print(\"🔧 Tools:\", [t.name for t in tools])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Agent"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.llm.openai.client.base:Initializing OpenAI client...\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalGetWeather\n",
"INFO:dapr_agents.tool.executor:Tool registered: LocalJump\n",
"INFO:dapr_agents.tool.executor:Tool Executor initialized with 2 tool(s).\n",
"INFO:dapr_agents.agent.base:Constructing system_prompt from agent attributes.\n",
"INFO:dapr_agents.agent.base:Using system_prompt to create the prompt template.\n",
"INFO:dapr_agents.agent.base:Pre-filled prompt template with attributes: ['name', 'role', 'goal']\n"
]
}
],
"source": [
"from dapr_agents import Agent\n",
"\n",
"agent = Agent(\n",
" name=\"Rob\",\n",
" role= \"Weather Assistant\",\n",
" tools=tools\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Run Agent"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;242;182;128muser:\u001b[0m\n",
"\u001b[38;2;242;182;128m\u001b[0m\u001b[38;2;242;182;128mWhat is the weather in New York?\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n",
"INFO:dapr_agents.agent.patterns.toolcall.base:Executing LocalGetWeather with arguments {\"location\":\"New York\"}\n",
"INFO:dapr_agents.tool.executor:Running tool (auto): LocalGetWeather\n",
"INFO:dapr_agents.tool.mcp.client:[MCP] Executing tool 'get_weather' with args: {'location': 'New York'}\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;217;95;118massistant:\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mFunction name: LocalGetWeather (Call Id: call_l8KuS39PvriksogjGN71rzCm)\u001b[0m\n",
"\u001b[38;2;217;95;118m\u001b[0m\u001b[38;2;217;95;118mArguments: {\"location\":\"New York\"}\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 2/10 started.\n",
"INFO:dapr_agents.llm.utils.request:Tools are available in the request.\n",
"INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;191;69;126mLocalGetWeather(tool) (Id: call_l8KuS39PvriksogjGN71rzCm):\u001b[0m\n",
"\u001b[38;2;191;69;126m\u001b[0m\u001b[38;2;191;69;126mNew York: 60F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[38;2;147;191;183massistant:\u001b[0m\n",
"\u001b[38;2;147;191;183m\u001b[0m\u001b[38;2;147;191;183mThe current temperature in New York is 60°F.\u001b[0m\u001b[0m\n",
"\u001b[0m\u001b[0m\n",
"\u001b[0m--------------------------------------------------------------------------------\u001b[0m\n",
"\u001b[0m\u001b[0m\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current temperature in New York is 60°F.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await agent.run(\"What is the weather in New York?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,146 +0,0 @@
# MCP Agent with Dapr Workflows
This demo shows how to run an AI agent inside a Dapr Workflow, calling tools exposed via the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction).
Unlike the lightweight notebook-based examples, this setup runs a full Dapr agent using:
✅ Durable task orchestration with Dapr Workflows
✅ Tools served via MCP (stdio or SSE)
✅ Full integration with the Dapr ecosystem
## 🛠️ Project Structure
```text
.
├── app.py # Main entrypoint: runs a Dapr Agent and workflow on port 8001
├── tools.py # MCP tool definitions (get_weather, jump)
├── server.py # Starlette-based SSE server
├── client.py # Script to send an HTTP request to the Agent over port 8001
├── components/ # Dapr pubsub + state components (Redis, etc.)
├── requirements.txt
└── README.md
```
## 📦 Installation
Install dependencies:
```bash
pip install -r requirements.txt
```
Make sure you have Dapr installed and initialized:
```bash
dapr init
```
## 🧰 MCP Tool Server
Your agent will call tools defined in `tools.py`, served via FastMCP:
```python
@mcp.tool()
async def get_weather(location: str) -> str:
    ...

@mcp.tool()
async def jump(distance: str) -> str:
    ...
```
These tools can be served in one of two modes:
### STDIO Mode (local execution)
No external server needed — the agent runs the MCP server in-process.
✅ Best for internal experiments or testing
🚫 Not supported for agents that rely on external workflows (e.g., Dapr orchestration)
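For completeness, a plain-Python client can still load these tools over stdio (a sketch based on the `MCPClient` usage in the companion notebooks; the Dapr workflow in this demo does not use this path):
```python
from dapr_agents.tool.mcp.client import MCPClient

# Spawn the tool server in-process over stdio (requires a running event loop).
client = MCPClient()
await client.connect_stdio(
    server_name="local",
    command="python",
    args=["server.py", "--server_type", "stdio"],
)
tools = client.get_all_tools()
```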
### SSE Mode (recommended for Dapr workflows)
In this demo, we run the MCP server as a separate Starlette + Uvicorn app:
```bash
python server.py --server_type sse --host 127.0.0.1 --port 8000
```
This exposes:
* `/sse` for the SSE stream
* `/messages/` for tool execution
Used by the Dapr agent in this repo.
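On the agent side, loading the tools from this server looks roughly like the following (a sketch based on the `MCPClient` SSE usage shown in the notebooks; the exact wiring lives in `app.py`):
```python
from dapr_agents.tool.mcp.client import MCPClient

# Connect to the running SSE server and load its tools.
client = MCPClient()
await client.connect_sse(
    server_name="local",              # Unique name assigned to this server
    url="http://localhost:8000/sse",  # MCP SSE endpoint
)
tools = client.get_all_tools()        # e.g. LocalGetWeather, LocalJump
```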
## 🚀 Running the Dapr Agent
Start the MCP server in SSE mode:
```bash
python server.py --server_type sse --port 8000
```
Then in a separate terminal, run the agent workflow:
```bash
dapr run --app-id weatherappmcp --resources-path components/ -- python app.py
```
Once the agent is ready, run the `client.py` script to send a message to it.
```bash
python3 client.py
```
You will see the state of the agent in a JSON file in the same directory.
```json
{
"instances": {
"e098e5b85d544c84a26250be80316152": {
"input": "What is the weather in New York?",
"output": "The current temperature in New York, USA, is 66\u00b0F.",
"start_time": "2025-04-05T05:37:50.496005",
"end_time": "2025-04-05T05:37:52.501630",
"messages": [
{
"id": "e8ccc9d2-1674-47cc-afd2-8e68b91ff791",
"role": "user",
"content": "What is the weather in New York?",
"timestamp": "2025-04-05T05:37:50.516572",
"name": null
},
{
"id": "47b8db93-558c-46ed-80bb-8cb599c4272b",
"role": "assistant",
"content": "The current temperature in New York, USA, is 66\u00b0F.",
"timestamp": "2025-04-05T05:37:52.499945",
"name": null
}
],
"last_message": {
"id": "47b8db93-558c-46ed-80bb-8cb599c4272b",
"role": "assistant",
"content": "The current temperature in New York, USA, is 66\u00b0F.",
"timestamp": "2025-04-05T05:37:52.499945",
"name": null
},
"tool_history": [
{
"content": "New York, USA: 66F.",
"role": "tool",
"tool_call_id": "call_LTDMHvt05e1tvbWBe0kVvnUM",
"id": "2c1535fe-c43a-42c1-be7e-25c71b43c32e",
"function_name": "LocalGetWeather",
"function_args": "{\"location\":\"New York, USA\"}",
"timestamp": "2025-04-05T05:37:51.609087"
}
],
"source": null,
"source_workflow_instance_id": null
}
}
}
```

View File

@ -1,57 +0,0 @@
#!/usr/bin/env python3
import sys
import time

import requests

if __name__ == "__main__":
    status_url = "http://localhost:8001/status"
    healthy = False

    # Wait for the workflow app to report healthy (up to 10 attempts).
    for attempt in range(1, 11):
        try:
            print(f"Attempt {attempt}...")
            response = requests.get(status_url, timeout=5)
            if response.status_code == 200:
                print("Workflow app is healthy!")
                healthy = True
                break
            else:
                print(f"Received status code {response.status_code}: {response.text}")
        except requests.exceptions.RequestException as e:
            print(f"Request failed: {e}")
        print("Waiting 5 seconds before next health check attempt...")
        time.sleep(5)

    if not healthy:
        print("Workflow app is not healthy!")
        sys.exit(1)

    workflow_url = "http://localhost:8001/start-workflow"
    task_payload = {"task": "What is the weather in New York?"}

    # Trigger the workflow (up to 10 attempts).
    for attempt in range(1, 11):
        try:
            print(f"Attempt {attempt}...")
            response = requests.post(workflow_url, json=task_payload, timeout=5)
            if response.status_code == 202:
                print("Workflow started successfully!")
                sys.exit(0)
            else:
                print(f"Received status code {response.status_code}: {response.text}")
        except requests.exceptions.RequestException as e:
            print(f"Request failed: {e}")
        print("Waiting 1 second before next attempt...")
        time.sleep(1)

    print("Maximum attempts (10) reached without success.")
    print("Failed to get successful response")
    sys.exit(1)

View File

@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: workflowstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -1,4 +0,0 @@
dapr-agents
python-dotenv
mcp
starlette

View File

@ -1,83 +0,0 @@
import argparse
import logging

import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Mount, Route

from mcp.server.sse import SseServerTransport

from tools import mcp

# ─────────────────────────────────────────────
# Logging Configuration
# ─────────────────────────────────────────────
logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("mcp-server")


# ─────────────────────────────────────────────
# Starlette App Factory
# ─────────────────────────────────────────────
def create_starlette_app():
    """
    Create a Starlette app wired with the MCP server over SSE transport.
    """
    logger.debug("Creating Starlette app with SSE transport")
    sse = SseServerTransport("/messages/")

    async def handle_sse(request: Request) -> None:
        logger.info("🔌 SSE connection established")
        async with sse.connect_sse(request.scope, request.receive, request._send) as (
            read_stream,
            write_stream,
        ):
            logger.debug("Starting MCP server run loop over SSE")
            await mcp._mcp_server.run(
                read_stream,
                write_stream,
                mcp._mcp_server.create_initialization_options(),
            )
        logger.debug("MCP run loop completed")

    return Starlette(
        debug=False,
        routes=[
            Route("/sse", endpoint=handle_sse),
            Mount("/messages/", app=sse.handle_post_message),
        ],
    )


# ─────────────────────────────────────────────
# CLI Entrypoint
# ─────────────────────────────────────────────
def main():
    parser = argparse.ArgumentParser(description="Run an MCP tool server.")
    parser.add_argument(
        "--server_type",
        choices=["stdio", "sse"],
        default="stdio",
        help="Transport to use",
    )
    parser.add_argument(
        "--host", default="127.0.0.1", help="Host to bind to (SSE only)"
    )
    parser.add_argument(
        "--port", type=int, default=8000, help="Port to bind to (SSE only)"
    )
    args = parser.parse_args()

    logger.info(f"🚀 Starting MCP server in {args.server_type.upper()} mode")

    if args.server_type == "stdio":
        mcp.run("stdio")
    else:
        app = create_starlette_app()
        logger.info(f"🌐 Running SSE server on {args.host}:{args.port}")
        uvicorn.run(app, host=args.host, port=args.port)


if __name__ == "__main__":
    main()

View File

@ -1,17 +0,0 @@
from mcp.server.fastmcp import FastMCP
import random
mcp = FastMCP("TestServer")
@mcp.tool()
async def get_weather(location: str) -> str:
"""Get weather information for a specific location."""
temperature = random.randint(60, 80)
return f"{location}: {temperature}F."
@mcp.tool()
async def jump(distance: str) -> str:
"""Simulate a jump of a given distance."""
return f"I jumped the following distance: {distance}"

View File

@ -1,499 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# VectorStore: Chroma and OpenAI Embeddings Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `ChromaVectorStore` in `dapr-agents` for storing, querying, and filtering documents. We will explore:\n",
"\n",
"* Initializing the `OpenAIEmbedder` embedding function and `ChromaVectorStore`.\n",
"* Adding documents with text and metadata.\n",
"* Retrieving documents by ID.\n",
"* Updating documents.\n",
"* Deleting documents.\n",
"* Performing similarity searches.\n",
"* Filtering results based on metadata."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv chromadb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize OpenAI Embedding Function\n",
"\n",
"The default embedding function is `SentenceTransformerEmbedder`, but for this example we will use the `OpenAIEmbedder`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.document.embedder import OpenAIEmbedder\n",
"\n",
"embedding_funciton = OpenAIEmbedder(\n",
" model = \"text-embedding-ada-002\",\n",
" encoding_name=\"cl100k_base\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initializing the ChromaVectorStore\n",
"\n",
"To start, create an instance of the `ChromaVectorStore`. You can customize its parameters if needed, such as enabling persistence or specifying the embedding_function."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.storage import ChromaVectorStore\n",
"\n",
"# Initialize ChromaVectorStore\n",
"store = ChromaVectorStore(\n",
" name=\"example_collection\", # Name of the collection\n",
" embedding_function=embedding_funciton,\n",
" persistent=False, # No persistence for this example\n",
" host=\"localhost\", # Host for the Chroma server\n",
" port=8000 # Port for the Chroma server\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adding Documents\n",
"We will use Document objects to add content to the collection. Each Document includes text and optional metadata."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating Documents"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.types.document import Document\n",
"\n",
"# Example Lord of the Rings-inspired conversations\n",
"documents = [\n",
" Document(\n",
" text=\"Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to.\",\n",
" metadata={\"topic\": \"wisdom\", \"location\": \"The Shire\"}\n",
" ),\n",
" Document(\n",
" text=\"Frodo: I wish the Ring had never come to me. I wish none of this had happened.\",\n",
" metadata={\"topic\": \"destiny\", \"location\": \"Moria\"}\n",
" ),\n",
" Document(\n",
" text=\"Aragorn: You cannot wield it! None of us can. The One Ring answers to Sauron alone. It has no other master.\",\n",
" metadata={\"topic\": \"power\", \"location\": \"Rivendell\"}\n",
" ),\n",
" Document(\n",
" text=\"Sam: I can't carry it for you, but I can carry you!\",\n",
" metadata={\"topic\": \"friendship\", \"location\": \"Mount Doom\"}\n",
" ),\n",
" Document(\n",
" text=\"Legolas: A red sun rises. Blood has been spilled this night.\",\n",
" metadata={\"topic\": \"war\", \"location\": \"Rohan\"}\n",
" ),\n",
" Document(\n",
" text=\"Gimli: Certainty of death. Small chance of success. What are we waiting for?\",\n",
" metadata={\"topic\": \"bravery\", \"location\": \"Helm's Deep\"}\n",
" ),\n",
" Document(\n",
" text=\"Boromir: One does not simply walk into Mordor.\",\n",
" metadata={\"topic\": \"impossible tasks\", \"location\": \"Rivendell\"}\n",
" ),\n",
" Document(\n",
" text=\"Galadriel: Even the smallest person can change the course of the future.\",\n",
" metadata={\"topic\": \"hope\", \"location\": \"Lothlórien\"}\n",
" ),\n",
" Document(\n",
" text=\"Théoden: So it begins.\",\n",
" metadata={\"topic\": \"battle\", \"location\": \"Helm's Deep\"}\n",
" ),\n",
" Document(\n",
" text=\"Elrond: The strength of the Ring-bearer is failing. In his heart, Frodo begins to understand. The quest will claim his life.\",\n",
" metadata={\"topic\": \"sacrifice\", \"location\": \"Rivendell\"}\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding Documents to the Collection"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents in the collection: 10\n"
]
}
],
"source": [
"store.add_documents(documents=documents)\n",
"print(f\"Number of documents in the collection: {store.count()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieving Documents\n",
"\n",
"Retrieve documents by their IDs or fetch all items in the collection."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Retrieved documents:\n",
"ID: 82f3b922-c64c-4ad1-a632-ea9f8d13a19a, Text: Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to., Metadata: {'location': 'The Shire', 'topic': 'wisdom'}\n",
"ID: f5a45d8b-7f8f-4516-a54a-d9ef3c39db53, Text: Frodo: I wish the Ring had never come to me. I wish none of this had happened., Metadata: {'location': 'Moria', 'topic': 'destiny'}\n",
"ID: 7fead849-c4eb-42ce-88ca-ca62fe9f51a4, Text: Aragorn: You cannot wield it! None of us can. The One Ring answers to Sauron alone. It has no other master., Metadata: {'location': 'Rivendell', 'topic': 'power'}\n",
"ID: ebd6c642-c8f4-4f45-a75e-4a5acdf33ad5, Text: Sam: I can't carry it for you, but I can carry you!, Metadata: {'location': 'Mount Doom', 'topic': 'friendship'}\n",
"ID: 1dc4da81-cbfc-417b-ad71-120fae505842, Text: Legolas: A red sun rises. Blood has been spilled this night., Metadata: {'location': 'Rohan', 'topic': 'war'}\n",
"ID: d1ed1836-c0d8-491c-a813-2c5a2688b2d1, Text: Gimli: Certainty of death. Small chance of success. What are we waiting for?, Metadata: {'location': \"Helm's Deep\", 'topic': 'bravery'}\n",
"ID: 6fe3f229-bf74-4eea-8fe4-fc38efb2cf9a, Text: Boromir: One does not simply walk into Mordor., Metadata: {'location': 'Rivendell', 'topic': 'impossible tasks'}\n",
"ID: 081453e4-0a56-4e78-927b-79289735e8a4, Text: Galadriel: Even the smallest person can change the course of the future., Metadata: {'location': 'Lothlórien', 'topic': 'hope'}\n",
"ID: a45db7d1-4224-4e42-b51d-bdb4593b5cf5, Text: Théoden: So it begins., Metadata: {'location': \"Helm's Deep\", 'topic': 'battle'}\n",
"ID: 5258d6f6-1f1b-459d-a04e-c96f58d76fca, Text: Elrond: The strength of the Ring-bearer is failing. In his heart, Frodo begins to understand. The quest will claim his life., Metadata: {'location': 'Rivendell', 'topic': 'sacrifice'}\n"
]
}
],
"source": [
"# Retrieve all documents\n",
"retrieved_docs = store.get()\n",
"print(\"Retrieved documents:\")\n",
"for doc in retrieved_docs:\n",
" print(f\"ID: {doc['id']}, Text: {doc['document']}, Metadata: {doc['metadata']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Updating Documents\n",
"\n",
"You can update existing documents' text or metadata using their IDs."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Updated document: [{'id': '82f3b922-c64c-4ad1-a632-ea9f8d13a19a', 'metadata': {'location': 'Fangorn Forest', 'topic': 'hope and wisdom'}, 'document': 'Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.'}]\n"
]
}
],
"source": [
"# Retrieve a document by its ID\n",
"retrieved_docs = store.get() # Get all documents to find the ID\n",
"doc_id = retrieved_docs[0]['id'] # Select the first document's ID for this example\n",
"\n",
"# Define updated text and metadata\n",
"updated_text = \"Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.\"\n",
"updated_metadata = {\"topic\": \"hope and wisdom\", \"location\": \"Fangorn Forest\"}\n",
"\n",
"# Update the document's text and metadata in the store\n",
"store.update(ids=[doc_id], documents=[updated_text], metadatas=[updated_metadata])\n",
"\n",
"# Verify the update\n",
"updated_doc = store.get(ids=[doc_id])\n",
"print(f\"Updated document: {updated_doc}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deleting Documents\n",
"\n",
"Delete documents by their IDs."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents after deletion: 9\n"
]
}
],
"source": [
"# Delete a document by ID\n",
"doc_id_to_delete = retrieved_docs[2]['id']\n",
"store.delete(ids=[doc_id_to_delete])\n",
"\n",
"# Verify deletion\n",
"print(f\"Number of documents after deletion: {store.count()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity Search\n",
"\n",
"Perform a similarity search using text queries. The embedding function automatically generates embeddings for the input query."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Similarity search results:\n",
"Text: ['Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.', 'Galadriel: Even the smallest person can change the course of the future.']\n",
"Metadata: [{'location': 'Fangorn Forest', 'topic': 'hope and wisdom'}, {'location': 'Lothlórien', 'topic': 'hope'}]\n"
]
}
],
"source": [
"# Search for similar documents based on a query\n",
"query = \"wise advice\"\n",
"results = store.search_similar(query_texts=query, k=2)\n",
"\n",
"# Display results\n",
"print(\"Similarity search results:\")\n",
"for doc, metadata in zip(results[\"documents\"], results[\"metadatas\"]):\n",
" print(f\"Text: {doc}\")\n",
" print(f\"Metadata: {metadata}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filtering Results\n",
"\n",
"Filter results based on metadata."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"# Search for documents with specific metadata filters\n",
"filter_conditions = {\n",
" \"$and\": [\n",
" {\"location\": {\"$eq\": \"Fangorn Forest\"}},\n",
" {\"topic\": {\"$eq\": \"hope and wisdom\"}}\n",
" ]\n",
"}\n",
"\n",
"filtered_results = store.query_with_filters(query_texts=[\"journey\"], where=filter_conditions, k=3)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'ids': [['82f3b922-c64c-4ad1-a632-ea9f8d13a19a']],\n",
" 'embeddings': None,\n",
" 'documents': [['Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.']],\n",
" 'uris': None,\n",
" 'data': None,\n",
" 'metadatas': [[{'location': 'Fangorn Forest', 'topic': 'hope and wisdom'}]],\n",
" 'distances': [[0.21403032541275024]],\n",
" 'included': [<IncludeEnum.distances: 'distances'>,\n",
" <IncludeEnum.documents: 'documents'>,\n",
" <IncludeEnum.metadatas: 'metadatas'>]}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filtered_results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resetting the Database\n",
"\n",
"Reset the database to clear all stored data."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['example_collection']"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"store.client.list_collections()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"# Reset the collection\n",
"store.reset()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"store.client.list_collections()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,498 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# VectorStore: Chroma and Sentence Transformer (all-MiniLM-L6-v2) with Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `ChromaVectorStore` in `dapr-agents` for storing, querying, and filtering documents. We will explore:\n",
"\n",
"* Initializing the `SentenceTransformerEmbedder` embedding function and `ChromaVectorStore`.\n",
"* Adding documents with text and metadata.\n",
"* Retrieving documents by ID.\n",
"* Updating documents.\n",
"* Deleting documents.\n",
"* Performing similarity searches.\n",
"* Filtering results based on metadata."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv chromadb sentence-transformers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initializing SentenceTransformer Embedding Function\n",
"\n",
"The default embedding function is `SentenceTransformerEmbedder`, but we will initialize it explicitly for clarity."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.document.embedder import SentenceTransformerEmbedder\n",
"\n",
"embedding_function = SentenceTransformerEmbedder(\n",
" model=\"all-MiniLM-L6-v2\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initializing the ChromaVectorStore\n",
"\n",
"To start, create an instance of the `ChromaVectorStore` and set the `embedding_function` to the instance of `SentenceTransformerEmbedder`"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.storage import ChromaVectorStore\n",
"\n",
"# Initialize ChromaVectorStore\n",
"store = ChromaVectorStore(\n",
" name=\"example_collection\", # Name of the collection\n",
" embedding_function=embedding_function,\n",
" persistent=False, # No persistence for this example\n",
" host=\"localhost\", # Host for the Chroma server\n",
" port=8000 # Port for the Chroma server\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adding Documents\n",
"We will use Document objects to add content to the collection. Each Document includes text and optional metadata."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating Documents"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.types.document import Document\n",
"\n",
"# Example Lord of the Rings-inspired conversations\n",
"documents = [\n",
" Document(\n",
" text=\"Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to.\",\n",
" metadata={\"topic\": \"wisdom\", \"location\": \"The Shire\"}\n",
" ),\n",
" Document(\n",
" text=\"Frodo: I wish the Ring had never come to me. I wish none of this had happened.\",\n",
" metadata={\"topic\": \"destiny\", \"location\": \"Moria\"}\n",
" ),\n",
" Document(\n",
" text=\"Aragorn: You cannot wield it! None of us can. The One Ring answers to Sauron alone. It has no other master.\",\n",
" metadata={\"topic\": \"power\", \"location\": \"Rivendell\"}\n",
" ),\n",
" Document(\n",
" text=\"Sam: I can't carry it for you, but I can carry you!\",\n",
" metadata={\"topic\": \"friendship\", \"location\": \"Mount Doom\"}\n",
" ),\n",
" Document(\n",
" text=\"Legolas: A red sun rises. Blood has been spilled this night.\",\n",
" metadata={\"topic\": \"war\", \"location\": \"Rohan\"}\n",
" ),\n",
" Document(\n",
" text=\"Gimli: Certainty of death. Small chance of success. What are we waiting for?\",\n",
" metadata={\"topic\": \"bravery\", \"location\": \"Helm's Deep\"}\n",
" ),\n",
" Document(\n",
" text=\"Boromir: One does not simply walk into Mordor.\",\n",
" metadata={\"topic\": \"impossible tasks\", \"location\": \"Rivendell\"}\n",
" ),\n",
" Document(\n",
" text=\"Galadriel: Even the smallest person can change the course of the future.\",\n",
" metadata={\"topic\": \"hope\", \"location\": \"Lothlórien\"}\n",
" ),\n",
" Document(\n",
" text=\"Théoden: So it begins.\",\n",
" metadata={\"topic\": \"battle\", \"location\": \"Helm's Deep\"}\n",
" ),\n",
" Document(\n",
" text=\"Elrond: The strength of the Ring-bearer is failing. In his heart, Frodo begins to understand. The quest will claim his life.\",\n",
" metadata={\"topic\": \"sacrifice\", \"location\": \"Rivendell\"}\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding Documents to the Collection"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents in the collection: 10\n"
]
}
],
"source": [
"store.add_documents(documents=documents)\n",
"print(f\"Number of documents in the collection: {store.count()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieving Documents\n",
"\n",
"Retrieve documents by their IDs or fetch all items in the collection."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Retrieved documents:\n",
"ID: 483fc189-df92-4815-987e-b732391e356a, Text: Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to., Metadata: {'location': 'The Shire', 'topic': 'wisdom'}\n",
"ID: fcbcbf50-7b0c-458a-a232-abbc1b77518b, Text: Frodo: I wish the Ring had never come to me. I wish none of this had happened., Metadata: {'location': 'Moria', 'topic': 'destiny'}\n",
"ID: d4fbda4e-f933-4d1c-8d63-ee4d9f0d0af7, Text: Aragorn: You cannot wield it! None of us can. The One Ring answers to Sauron alone. It has no other master., Metadata: {'location': 'Rivendell', 'topic': 'power'}\n",
"ID: 98d218e5-4274-4d93-ac9a-3fbbeb3c0a19, Text: Sam: I can't carry it for you, but I can carry you!, Metadata: {'location': 'Mount Doom', 'topic': 'friendship'}\n",
"ID: df9d0abe-0b47-4079-9697-b66f47656e64, Text: Legolas: A red sun rises. Blood has been spilled this night., Metadata: {'location': 'Rohan', 'topic': 'war'}\n",
"ID: 309e0971-6826-4bac-81a8-3acfc3a28fa9, Text: Gimli: Certainty of death. Small chance of success. What are we waiting for?, Metadata: {'location': \"Helm's Deep\", 'topic': 'bravery'}\n",
"ID: a0a312be-bebd-405b-b993-4e37ed7fd569, Text: Boromir: One does not simply walk into Mordor., Metadata: {'location': 'Rivendell', 'topic': 'impossible tasks'}\n",
"ID: 0c09f89c-cf60-4428-beee-294b31dfd6a9, Text: Galadriel: Even the smallest person can change the course of the future., Metadata: {'location': 'Lothlórien', 'topic': 'hope'}\n",
"ID: d4778b45-f9fa-438c-b9e9-7466c872b4cc, Text: Théoden: So it begins., Metadata: {'location': \"Helm's Deep\", 'topic': 'battle'}\n",
"ID: 7a44e69f-e0c9-41c0-9cdf-a8f34ddf45f5, Text: Elrond: The strength of the Ring-bearer is failing. In his heart, Frodo begins to understand. The quest will claim his life., Metadata: {'location': 'Rivendell', 'topic': 'sacrifice'}\n"
]
}
],
"source": [
"# Retrieve all documents\n",
"retrieved_docs = store.get()\n",
"print(\"Retrieved documents:\")\n",
"for doc in retrieved_docs:\n",
" print(f\"ID: {doc['id']}, Text: {doc['document']}, Metadata: {doc['metadata']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Updating Documents\n",
"\n",
"You can update existing documents' text or metadata using their IDs."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Updated document: [{'id': '483fc189-df92-4815-987e-b732391e356a', 'metadata': {'location': 'Fangorn Forest', 'topic': 'hope and wisdom'}, 'document': 'Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.'}]\n"
]
}
],
"source": [
"# Retrieve a document by its ID\n",
"retrieved_docs = store.get() # Get all documents to find the ID\n",
"doc_id = retrieved_docs[0]['id'] # Select the first document's ID for this example\n",
"\n",
"# Define updated text and metadata\n",
"updated_text = \"Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.\"\n",
"updated_metadata = {\"topic\": \"hope and wisdom\", \"location\": \"Fangorn Forest\"}\n",
"\n",
"# Update the document's text and metadata in the store\n",
"store.update(ids=[doc_id], documents=[updated_text], metadatas=[updated_metadata])\n",
"\n",
"# Verify the update\n",
"updated_doc = store.get(ids=[doc_id])\n",
"print(f\"Updated document: {updated_doc}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deleting Documents\n",
"\n",
"Delete documents by their IDs."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents after deletion: 9\n"
]
}
],
"source": [
"# Delete a document by ID\n",
"doc_id_to_delete = retrieved_docs[2]['id']\n",
"store.delete(ids=[doc_id_to_delete])\n",
"\n",
"# Verify deletion\n",
"print(f\"Number of documents after deletion: {store.count()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity Search\n",
"\n",
"Perform a similarity search using text queries. The embedding function automatically generates embeddings for the input query."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Similarity search results:\n",
"Text: ['Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.', 'Gimli: Certainty of death. Small chance of success. What are we waiting for?']\n",
"Metadata: [{'location': 'Fangorn Forest', 'topic': 'hope and wisdom'}, {'location': \"Helm's Deep\", 'topic': 'bravery'}]\n"
]
}
],
"source": [
"# Search for similar documents based on a query\n",
"query = \"wise advice\"\n",
"results = store.search_similar(query_texts=query, k=2)\n",
"\n",
"# Display results\n",
"print(\"Similarity search results:\")\n",
"for doc, metadata in zip(results[\"documents\"], results[\"metadatas\"]):\n",
" print(f\"Text: {doc}\")\n",
" print(f\"Metadata: {metadata}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filtering Results\n",
"\n",
"Filter results based on metadata."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"# Search for documents with specific metadata filters\n",
"filter_conditions = {\n",
" \"$and\": [\n",
" {\"location\": {\"$eq\": \"Fangorn Forest\"}},\n",
" {\"topic\": {\"$eq\": \"hope and wisdom\"}}\n",
" ]\n",
"}\n",
"\n",
"filtered_results = store.query_with_filters(query_texts=[\"journey\"], where=filter_conditions, k=3)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'ids': [['483fc189-df92-4815-987e-b732391e356a']],\n",
" 'embeddings': None,\n",
" 'documents': [['Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.']],\n",
" 'uris': None,\n",
" 'data': None,\n",
" 'metadatas': [[{'location': 'Fangorn Forest', 'topic': 'hope and wisdom'}]],\n",
" 'distances': [[0.7907481789588928]],\n",
" 'included': [<IncludeEnum.distances: 'distances'>,\n",
" <IncludeEnum.documents: 'documents'>,\n",
" <IncludeEnum.metadatas: 'metadatas'>]}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"filtered_results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resetting the Database\n",
"\n",
"Reset the database to clear all stored data."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['example_collection']"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"store.client.list_collections()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"# Reset the collection\n",
"store.reset()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"store.client.list_collections()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,522 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# VectorStore: Postgres and Sentence Transformer (all-MiniLM-L6-v2) with Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `PostgresVectorStore` in `dapr-agents` for storing, querying, and filtering documents. We will explore:\n",
"\n",
"* Initializing the `SentenceTransformerEmbedder` embedding function and `PostgresVectorStore`.\n",
"* Adding documents with text and metadata.\n",
"* Performing similarity searches.\n",
"* Filtering results based on metadata.\n",
"* Resetting the database."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install dapr-agents python-dotenv \"psycopg[binary,pool]\" pgvector"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting Up The Database\n",
"\n",
"Before initializing the `PostgresVectorStore`, set up a PostgreSQL instance with pgvector enabled. For a local setup, use Docker:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"d920da4b841a66223431ad1dce49c3b0c215a971a4860ee9e25ea5bf0b4bfcd0\n"
]
}
],
"source": [
"!docker run --name pgvector-container \\\n",
" -e POSTGRES_USER=dapr_agents \\\n",
" -e POSTGRES_PASSWORD=dapr_agents \\\n",
" -e POSTGRES_DB=dapr_agents \\\n",
" -p 5432:5432 \\\n",
" -d pgvector/pgvector:pg17"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initializing SentenceTransformer Embedding Function\n",
"\n",
"The default embedding function is `SentenceTransformerEmbedder`, but we will initialize it explicitly for clarity."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.document.embedder import SentenceTransformerEmbedder\n",
"\n",
"embedding_function = SentenceTransformerEmbedder(\n",
" model=\"all-MiniLM-L6-v2\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initializing the PostgresVectorStore\n",
"\n",
"To start, create an instance of the `PostgresVectorStore` and set the `embedding_function` to the instance of `SentenceTransformerEmbedder`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.storage.vectorstores import PostgresVectorStore\n",
"import os\n",
"\n",
"# Set up connection parameters\n",
"connection_string = os.getenv(\"POSTGRES_CONNECTION_STRING\", \"postgresql://dapr_agents:dapr_agents@localhost:5432/dapr_agents\")\n",
"\n",
"# Initialize PostgresVectorStore\n",
"store = PostgresVectorStore(\n",
" connection_string=connection_string,\n",
" table_name=\"dapr_agents\",\n",
" embedding_function=SentenceTransformerEmbedder()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Adding Documents\n",
"We will use Document objects to add content to the collection. Each document includes text and optional metadata."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating Documents"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from dapr_agents.types.document import Document\n",
"\n",
"# Example Lord of the Rings-inspired conversations\n",
"documents = [\n",
" Document(\n",
" text=\"Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to.\",\n",
" metadata={\"topic\": \"wisdom\", \"location\": \"The Shire\"}\n",
" ),\n",
" Document(\n",
" text=\"Frodo: I wish the Ring had never come to me. I wish none of this had happened.\",\n",
" metadata={\"topic\": \"destiny\", \"location\": \"Moria\"}\n",
" ),\n",
" Document(\n",
" text=\"Aragorn: You cannot wield it! None of us can. The One Ring answers to Sauron alone. It has no other master.\",\n",
" metadata={\"topic\": \"power\", \"location\": \"Rivendell\"}\n",
" ),\n",
" Document(\n",
" text=\"Sam: I can't carry it for you, but I can carry you!\",\n",
" metadata={\"topic\": \"friendship\", \"location\": \"Mount Doom\"}\n",
" ),\n",
" Document(\n",
" text=\"Legolas: A red sun rises. Blood has been spilled this night.\",\n",
" metadata={\"topic\": \"war\", \"location\": \"Rohan\"}\n",
" ),\n",
" Document(\n",
" text=\"Gimli: Certainty of death. Small chance of success. What are we waiting for?\",\n",
" metadata={\"topic\": \"bravery\", \"location\": \"Helm's Deep\"}\n",
" ),\n",
" Document(\n",
" text=\"Boromir: One does not simply walk into Mordor.\",\n",
" metadata={\"topic\": \"impossible tasks\", \"location\": \"Rivendell\"}\n",
" ),\n",
" Document(\n",
" text=\"Galadriel: Even the smallest person can change the course of the future.\",\n",
" metadata={\"topic\": \"hope\", \"location\": \"Lothlórien\"}\n",
" ),\n",
" Document(\n",
" text=\"Théoden: So it begins.\",\n",
" metadata={\"topic\": \"battle\", \"location\": \"Helm's Deep\"}\n",
" ),\n",
" Document(\n",
" text=\"Elrond: The strength of the Ring-bearer is failing. In his heart, Frodo begins to understand. The quest will claim his life.\",\n",
" metadata={\"topic\": \"sacrifice\", \"location\": \"Rivendell\"}\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Adding Documents to the Collection"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents in the collection: 10\n"
]
}
],
"source": [
"store.add_documents(documents=documents)\n",
"print(f\"Number of documents in the collection: {store.count()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Retrieving Documents\n",
"\n",
"Retrieve all documents or specific ones by ID."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Retrieved documents:\n",
"ID: feb3b2c1-d3cf-423b-bd5d-6094e2200bc8, Text: Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to., Metadata: {'topic': 'wisdom', 'location': 'The Shire'}\n",
"ID: b206833f-4c19-4f3c-91e2-2ccbcc895a63, Text: Frodo: I wish the Ring had never come to me. I wish none of this had happened., Metadata: {'topic': 'destiny', 'location': 'Moria'}\n",
"ID: 57226af8-d035-4052-86b2-4f68d7c5a8f6, Text: Aragorn: You cannot wield it! None of us can. The One Ring answers to Sauron alone. It has no other master., Metadata: {'topic': 'power', 'location': 'Rivendell'}\n",
"ID: 5376d46a-4161-408c-850c-4b73cd8d2aa6, Text: Sam: I can't carry it for you, but I can carry you!, Metadata: {'topic': 'friendship', 'location': 'Mount Doom'}\n",
"ID: 7d8c78c3-e4c9-4c6a-8bb4-a04f450e6bfd, Text: Legolas: A red sun rises. Blood has been spilled this night., Metadata: {'topic': 'war', 'location': 'Rohan'}\n",
"ID: 749a126e-2ad5-4aa6-b043-a204e50963f3, Text: Gimli: Certainty of death. Small chance of success. What are we waiting for?, Metadata: {'topic': 'bravery', 'location': \"Helm's Deep\"}\n",
"ID: 4848f783-fbc0-43ec-98d6-43b03fa79809, Text: Boromir: One does not simply walk into Mordor., Metadata: {'topic': 'impossible tasks', 'location': 'Rivendell'}\n",
"ID: ecc3257d-e542-407e-9db9-21ec3b78249c, Text: Galadriel: Even the smallest person can change the course of the future., Metadata: {'topic': 'hope', 'location': 'Lothlórien'}\n",
"ID: 6dad5159-724f-4f03-8cc8-aabc4ee308cd, Text: Théoden: So it begins., Metadata: {'topic': 'battle', 'location': \"Helm's Deep\"}\n",
"ID: 63a09862-438a-41d7-abe7-74ec5510ce82, Text: Elrond: The strength of the Ring-bearer is failing. In his heart, Frodo begins to understand. The quest will claim his life., Metadata: {'topic': 'sacrifice', 'location': 'Rivendell'}\n"
]
}
],
"source": [
"# Retrieve all documents\n",
"retrieved_docs = store.get()\n",
"print(\"Retrieved documents:\")\n",
"for doc in retrieved_docs:\n",
" print(f\"ID: {doc['id']}, Text: {doc['document']}, Metadata: {doc['metadata']}\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Specific document: [{'id': UUID('feb3b2c1-d3cf-423b-bd5d-6094e2200bc8'), 'document': 'Gandalf: A wizard is never late, Frodo Baggins. Nor is he early; he arrives precisely when he means to.', 'metadata': {'topic': 'wisdom', 'location': 'The Shire'}}]\n"
]
}
],
"source": [
"# Retrieve a specific document by ID\n",
"doc_id = retrieved_docs[0]['id']\n",
"specific_doc = store.get(ids=[doc_id])\n",
"print(f\"Specific document: {specific_doc}\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Specific document Embedding (first 5 values): [-0.0\n"
]
}
],
"source": [
"# Retrieve a specific document by ID\n",
"doc_id = retrieved_docs[0]['id']\n",
"specific_doc = store.get(ids=[doc_id], with_embedding=True)\n",
"embedding = specific_doc[0]['embedding']\n",
"print(f\"Specific document Embedding (first 5 values): {embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Updating Documents\n",
"\n",
"You can update existing documents' text or metadata using their IDs."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Updated document: [{'id': UUID('feb3b2c1-d3cf-423b-bd5d-6094e2200bc8'), 'document': 'Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.', 'metadata': {'topic': 'hope and wisdom', 'location': 'Fangorn Forest'}}]\n"
]
}
],
"source": [
"# Retrieve a document by its ID\n",
"retrieved_docs = store.get() # Get all documents to find the ID\n",
"doc_id = retrieved_docs[0]['id'] # Select the first document's ID for this example\n",
"\n",
"# Define updated text and metadata\n",
"updated_text = \"Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true.\"\n",
"updated_metadata = {\"topic\": \"hope and wisdom\", \"location\": \"Fangorn Forest\"}\n",
"\n",
"# Update the document's text and metadata in the store\n",
"store.update(ids=[doc_id], documents=[updated_text], metadatas=[updated_metadata])\n",
"\n",
"# Verify the update\n",
"updated_doc = store.get(ids=[doc_id])\n",
"print(f\"Updated document: {updated_doc}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deleting Documents\n",
"\n",
"Delete documents by their IDs."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of documents after deletion: 9\n"
]
}
],
"source": [
"# Delete a document by ID\n",
"doc_id_to_delete = retrieved_docs[2]['id']\n",
"store.delete(ids=[doc_id_to_delete])\n",
"\n",
"# Verify deletion\n",
"print(f\"Number of documents after deletion: {store.count()}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Similarity Search\n",
"\n",
"Perform a similarity search using text queries. The embedding function automatically generates embeddings for the input query."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Similarity search results:\n",
"ID: 749a126e-2ad5-4aa6-b043-a204e50963f3, Document: Gimli: Certainty of death. Small chance of success. What are we waiting for?, Metadata: {'topic': 'bravery', 'location': \"Helm's Deep\"}, Similarity: 0.1567628941818613\n",
"ID: 4848f783-fbc0-43ec-98d6-43b03fa79809, Document: Boromir: One does not simply walk into Mordor., Metadata: {'topic': 'impossible tasks', 'location': 'Rivendell'}, Similarity: 0.13233356090384096\n"
]
}
],
"source": [
"# Perform a similarity search using text queries.\n",
"query = \"wise advice\"\n",
"results = store.search_similar(query_texts=query, k=2)\n",
"\n",
"# Display results\n",
"print(\"Similarity search results:\")\n",
"for result in results:\n",
" print(f\"ID: {result['id']}, Document: {result['document']}, Metadata: {result['metadata']}, Similarity: {result['similarity']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filtering Results\n",
"\n",
"Filter results based on metadata."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Filtered search results:\n",
"ID: feb3b2c1-d3cf-423b-bd5d-6094e2200bc8, Document: Gandalf: Even the wisest cannot foresee all ends, but hope remains while the Company is true., Metadata: {'topic': 'hope and wisdom', 'location': 'Fangorn Forest'}, Similarity: 0.1670202911216282\n"
]
}
],
"source": [
"# Search for documents with specific metadata filters\n",
"query = \"journey\"\n",
"filter_conditions = {\n",
" \"location\": \"Fangorn Forest\",\n",
" \"topic\": \"hope and wisdom\"\n",
"}\n",
"\n",
"filtered_results = store.search_similar(query_texts=query, metadata_filter=filter_conditions, k=3)\n",
"\n",
"# Display filtered results\n",
"print(\"Filtered search results:\")\n",
"for result in filtered_results:\n",
" print(f\"ID: {result['id']}, Document: {result['document']}, Metadata: {result['metadata']}, Similarity: {result['similarity']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Resetting the Database\n",
"\n",
"Reset the database to clear all stored data."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Database reset complete. Current documents: []\n"
]
}
],
"source": [
"# Reset the collection\n",
"store.reset()\n",
"print(\"Database reset complete. Current documents:\", store.get())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@ -1,48 +0,0 @@
from time import sleep
import dapr.ext.workflow as wf
wfr = wf.WorkflowRuntime()
@wfr.workflow(name="random_workflow")
def task_chain_workflow(ctx: wf.DaprWorkflowContext, x: int):
result1 = yield ctx.call_activity(step1, input=x)
result2 = yield ctx.call_activity(step2, input=result1)
result3 = yield ctx.call_activity(step3, input=result2)
return [result1, result2, result3]
@wfr.activity
def step1(ctx, activity_input):
print(f"Step 1: Received input: {activity_input}.")
# Do some work
return activity_input + 1
@wfr.activity
def step2(ctx, activity_input):
print(f"Step 2: Received input: {activity_input}.")
# Do some work
return activity_input * 2
@wfr.activity
def step3(ctx, activity_input):
print(f"Step 3: Received input: {activity_input}.")
# Do some work
return activity_input ^ 2  # bitwise XOR, not exponentiation (use ** to square)
if __name__ == "__main__":
wfr.start()
sleep(5) # wait for workflow runtime to start
wf_client = wf.DaprWorkflowClient()
instance_id = wf_client.schedule_new_workflow(
workflow=task_chain_workflow, input=10
)
print(f"Workflow started. Instance ID: {instance_id}")
state = wf_client.wait_for_workflow_completion(instance_id)
print(f"Workflow completed! Status: {state.runtime_status}")
wfr.shutdown()

View File

@ -1,43 +0,0 @@
import logging
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
@workflow(name="random_workflow")
def task_chain_workflow(ctx: DaprWorkflowContext, input: int):
result1 = yield ctx.call_activity(step1, input=input)
result2 = yield ctx.call_activity(step2, input=result1)
result3 = yield ctx.call_activity(step3, input=result2)
return [result1, result2, result3]
@task
def step1(activity_input):
print(f"Step 1: Received input: {activity_input}.")
# Do some work
return activity_input + 1
@task
def step2(activity_input):
print(f"Step 2: Received input: {activity_input}.")
# Do some work
return activity_input * 2
@task
def step3(activity_input):
print(f"Step 3: Received input: {activity_input}.")
# Do some work
return activity_input ^ 2  # bitwise XOR, not exponentiation (use ** to square)
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
wfapp = WorkflowApp()
results = wfapp.run_and_monitor_workflow_sync(task_chain_workflow, input=10)
print(f"Results: {results}")

View File

@ -1,43 +0,0 @@
import asyncio
import logging
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
@workflow(name="random_workflow")
def task_chain_workflow(ctx: DaprWorkflowContext, input: int):
result1 = yield ctx.call_activity(step1, input=input)
result2 = yield ctx.call_activity(step2, input=result1)
result3 = yield ctx.call_activity(step3, input=result2)
return [result1, result2, result3]
@task
def step1(activity_input: int) -> int:
print(f"Step 1: Received input: {activity_input}.")
return activity_input + 1
@task
def step2(activity_input: int) -> int:
print(f"Step 2: Received input: {activity_input}.")
return activity_input * 2
@task
def step3(activity_input: int) -> int:
print(f"Step 3: Received input: {activity_input}.")
return activity_input ^ 2  # bitwise XOR, not exponentiation (use ** to square)
async def main():
logging.basicConfig(level=logging.INFO)
wfapp = WorkflowApp()
result = await wfapp.run_and_monitor_workflow_async(task_chain_workflow, input=10)
print(f"Results: {result}")
if __name__ == "__main__":
asyncio.run(main())

View File

@ -1,43 +0,0 @@
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
from dotenv import load_dotenv
import logging
# Define Workflow logic
@workflow(name="lotr_workflow")
def task_chain_workflow(ctx: DaprWorkflowContext):
result1 = yield ctx.call_activity(get_character)
result2 = yield ctx.call_activity(get_line, input={"character": result1})
return result2
@task(
description="""
Pick a random character from The Lord of the Rings\n
and respond with the character's name ONLY
"""
)
def get_character() -> str:
pass
@task(
description="What is a famous line by {character}",
)
def get_line(character: str) -> str:
pass
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
# Load environment variables
load_dotenv()
# Initialize the WorkflowApp
wfapp = WorkflowApp()
# Run workflow
results = wfapp.run_and_monitor_workflow_sync(task_chain_workflow)
print(results)

View File

@ -1,49 +0,0 @@
import asyncio
import logging
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
from dotenv import load_dotenv
# Define Workflow logic
@workflow(name="lotr_workflow")
def task_chain_workflow(ctx: DaprWorkflowContext):
result1 = yield ctx.call_activity(get_character)
result2 = yield ctx.call_activity(get_line, input={"character": result1})
return result2
@task(
description="""
Pick a random character from The Lord of the Rings\n
and respond with the character's name ONLY
"""
)
def get_character() -> str:
pass
@task(
description="What is a famous line by {character}",
)
def get_line(character: str) -> str:
pass
async def main():
logging.basicConfig(level=logging.INFO)
# Load environment variables
load_dotenv()
# Initialize the WorkflowApp
wfapp = WorkflowApp()
# Run workflow
result = await wfapp.run_and_monitor_workflow_async(task_chain_workflow)
print(f"Results: {result}")
if __name__ == "__main__":
asyncio.run(main())

View File

@ -1,34 +0,0 @@
import logging
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
from pydantic import BaseModel
from dotenv import load_dotenv
@workflow
def question(ctx: DaprWorkflowContext, input: str):
step1 = yield ctx.call_activity(ask, input=input)
return step1
class Dog(BaseModel):
name: str
bio: str
breed: str
@task("Who was {name}?")
def ask(name: str) -> Dog:
pass
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
load_dotenv()
wfapp = WorkflowApp()
results = wfapp.run_and_monitor_workflow_sync(workflow=question, input="Scooby Doo")
print(results)

View File

@ -1,44 +0,0 @@
import asyncio
import logging
from dapr_agents.workflow import WorkflowApp, workflow, task
from dapr.ext.workflow import DaprWorkflowContext
from pydantic import BaseModel
from dotenv import load_dotenv
@workflow
def question(ctx: DaprWorkflowContext, input: str):
step1 = yield ctx.call_activity(ask, input=input)
return step1
class Dog(BaseModel):
name: str
bio: str
breed: str
@task("Who was {name}?")
def ask(name: str) -> Dog:
pass
async def main():
logging.basicConfig(level=logging.INFO)
# Load environment variables
load_dotenv()
# Initialize the WorkflowApp
wfapp = WorkflowApp()
# Run workflow
result = await wfapp.run_and_monitor_workflow_async(
workflow=question, input="Scooby Doo"
)
print(f"Results: {result}")
if __name__ == "__main__":
asyncio.run(main())

View File

@ -1,62 +0,0 @@
import dapr.ext.workflow as wf
from dotenv import load_dotenv
from openai import OpenAI
from time import sleep
# Load environment variables
load_dotenv()
# Initialize Workflow Instance
wfr = wf.WorkflowRuntime()
# Define Workflow logic
@wfr.workflow(name="lotr_workflow")
def task_chain_workflow(ctx: wf.DaprWorkflowContext):
result1 = yield ctx.call_activity(get_character)
result2 = yield ctx.call_activity(get_line, input=result1)
return result2
# Activity 1
@wfr.activity(name="step1")
def get_character(ctx):
client = OpenAI()
response = client.chat.completions.create(
messages=[
{
"role": "user",
"content": "Pick a random character from The Lord of the Rings and respond with the character name only",
}
],
model="gpt-4o",
)
character = response.choices[0].message.content
print(f"Character: {character}")
return character
# Activity 2
@wfr.activity(name="step2")
def get_line(ctx, character: str):
client = OpenAI()
response = client.chat.completions.create(
messages=[{"role": "user", "content": f"What is a famous line by {character}"}],
model="gpt-4o",
)
line = response.choices[0].message.content
print(f"Line: {line}")
return line
if __name__ == "__main__":
wfr.start()
sleep(5) # wait for workflow runtime to start
wf_client = wf.DaprWorkflowClient()
instance_id = wf_client.schedule_new_workflow(workflow=task_chain_workflow)
print(f"Workflow started. Instance ID: {instance_id}")
state = wf_client.wait_for_workflow_completion(instance_id)
print(f"Workflow completed! Status: {state.runtime_status}")
wfr.shutdown()

View File

@ -1,61 +0,0 @@
# Doc2Podcast: Automating Podcast Creation from Research Papers
This workflow is a basic step toward automating the creation of podcast content from research papers using AI. It demonstrates how to process a single research paper, generate a dialogue-style transcript with LLMs, and convert it into a podcast audio file. While simple, this workflow serves as a foundation for exploring more advanced processes, such as handling multiple documents or optimizing content splitting for better audio output.
## Key Features and Workflow
* PDF Processing: Downloads a research paper from a specified URL and extracts its content page by page.
* LLM-Powered Transcripts: Transforms extracted text into a dialogue-style transcript using a large language model, alternating between a host and participants.
* AI-Generated Audio: Converts the transcript into a podcast-like audio file with natural-sounding voices for the host and participants (a minimal sketch of this step follows the list).
* Custom Workflow: Saves the final podcast audio and transcript files locally, offering flexibility for future enhancements like handling multiple files or integrating additional AI tools.
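For illustration, here is a minimal sketch of the transcript-to-audio step. It assumes the OpenAI Python SDK with an `OPENAI_API_KEY` set in the environment; the `speak` helper and the hard-coded file names are hypothetical stand-ins for the project's actual workflow activities:
```python
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def speak(text: str, voice: str = "alloy", model: str = "tts-1") -> bytes:
    """Synthesize one dialogue turn and return raw MP3 bytes."""
    response = client.audio.speech.create(model=model, voice=voice, input=text)
    return response.read()


# Naively concatenate the per-turn MP3 segments; most players accept this,
# though a production pipeline would remux the segments properly.
dialogue = json.loads(Path("podcast_dialogue.json").read_text())
audio = b"".join(speak(turn["text"]) for turn in dialogue)
Path("final_podcast.mp3").write_bytes(audio)
```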
## Prerequisites
* Python 3.8 or higher
* Required Python dependencies (install using `pip install -r requirements.txt`)
* A valid `OpenAI` API key for generating audio content
* Set the `OPENAI_API_KEY` variable to your API key in a `.env` file.
## Configuration
To run the workflow, provide a configuration file in JSON format. The `config.json` file in this folder points to the paper "[Exploring Applicability of LLM-Powered Autonomous Agents to Solve Real-life Problems](https://github.com/OTRF/MEAN/blob/main/Rodriquez%20%26%20Syynimaa%20(2024).%20Exploring%20Applicability%20of%20LLM-Powered%20Autonomous%20Agents%20to%20Solve%20Real-life%20Problems.pdf)". Example configuration:
```json
{
"pdf_url": "https://example.com/research-paper.pdf",
"podcast_name": "AI Explorations",
"host": {
"name": "John Doe",
"voice": "alloy"
},
"participants": [
{ "name": "Alice Smith" },
{ "name": "Bob Johnson" }
],
"max_rounds": 4,
"output_transcript_path": "podcast_dialogue.json",
"output_audio_path": "final_podcast.mp3",
"audio_model": "tts-1"
}
```
## Running the Workflow
* Place the configuration file (e.g., config.json) in the project directory.
* Run the workflow with the following command:
```bash
dapr run --app-id doc2podcast --resources-path components -- python3 workflow.py --config config.json
```
* Output:
* Transcript: A structured transcript saved as `podcast_dialogue.json` by default. An example can be found in the current directory.
* Audio: The final podcast audio saved as `final_podcast.mp3` by default. An example can be found [here](https://on.soundcloud.com/pzjYRcJZDU3y27hz5).
## Next Steps
This workflow is a simple starting point. Future enhancements could include:
* Processing Multiple Files: Extend the workflow to handle batches of PDFs.
* Advanced Text Splitting: Dynamically split text based on content rather than pages.
* Web Search Integration: Pull additional context or related research from the web.
* Multi-Modal Content: Process documents alongside images, slides, or charts.

View File

@ -1,20 +0,0 @@
{
"pdf_url": "https://raw.githubusercontent.com/OTRF/MEAN/main/Rodriquez%20%26%20Syynimaa%20(2024).%20Exploring%20Applicability%20of%20LLM-Powered%20Autonomous%20Agents%20to%20Solve%20Real-life%20Problems.pdf",
"podcast_name": "AI Explorations",
"host": {
"name": "John Doe",
"voice": "alloy"
},
"participants": [
{
"name": "Alice Smith"
},
{
"name": "Bob Johnson"
}
],
"max_rounds": 4,
"output_transcript_path": "podcast_dialogue.json",
"output_audio_path": "final_podcast.mp3",
"audio_model": "tts-1"
}

View File

@ -1,234 +0,0 @@
[
{
"name": "John Doe",
"text": "Welcome to 'AI Explorations'. I'm your host, John Doe. I'm joined today by Alice Smith and Bob Johnson. How are both of you doing today?"
},
{
"name": "Alice Smith",
"text": "Hi John, I'm doing great, thanks for having me. Excited to discuss today's topics."
},
{
"name": "John Doe",
"text": "Fantastic. In today's episode, we'll explore the applicability of LLM-powered autonomous agents in tackling real-life problems. We'll delve into Microsoft Entra ID Administration, particularly focusing on a project named MEAN. Alice, could you tell us a bit more about this project and its relevance?"
},
{
"name": "Alice Smith",
"text": "Absolutely, John. The MEAN project is fascinating as it leverages LLM technology to perform administrative tasks in Entra ID using natural language prompts. This is particularly useful given that Microsoft has retired some key PowerShell modules for these tasks."
},
{
"name": "John Doe",
"text": "That's interesting. Bob, from a technical standpoint, what changes are happening that make projects like MEAN necessary?"
},
{
"name": "Bob Johnson",
"text": "Well, John, with Microsoft retiring old PowerShell modules, administrators now need to use the Microsoft Graph API. This change requires learning software development skills, which isn't feasible for everyone. MEAN simplifies this by using natural language inputs instead."
},
{
"name": "John Doe",
"text": "Great point, Bob. So, Alice, could these autonomous agents make administrative tasks more accessible to a wider audience?"
},
{
"name": "Alice Smith",
"text": "Certainly, John. By abstracting complex programming tasks into simple language commands, these agents democratize access to technology, lowering the barrier for many administrators."
},
{
"name": "John Doe",
"text": "The notion of autonomous LLM-powered agents is intriguing, especially when it comes to simplifying complex tasks like software development. Alice, how do you see these agents addressing the skills gap that's typically present among system administrators? For instance, their need to master software development skills, which aren't typically part of their skill set."
},
{
"name": "Alice Smith",
"text": "John, I believe these agents can play a pivotal role by taking over tasks that require extensive software development knowledge. They can interface with complex APIs like MSGraph, providing administrators with the ability to perform tasks using natural language without the need to learn coding."
},
{
"name": "John Doe",
"text": "Bob, it seems like these agents must be quite advanced to achieve this level of functionality. Can you talk about how LLMs, like those used in these agents, handle tasks they've never been specifically trained on, and what challenges they might face?"
},
{
"name": "Bob Johnson",
"text": "Certainly, John. LLMs, such as Generative Pre-trained Transformers, use task-agnostic pre-training, but require additional task-specific training to perform new tasks effectively. Challenges include maintaining consistent logic and managing hallucinations, where generated content might not accurately reflect reality."
},
{
"name": "John Doe",
"text": "That's an important point, Bob. Alice, how do these agents overcome some of these challenges to ensure accurate performance?"
},
{
"name": "Alice Smith",
"text": "They employ strategies like using the ReAct paradigm, which involves reasoning and action in a closed-loop system. By incorporating external real-world entities into their reasoning processes, they aim to be more grounded and trustworthy, which reduces issues like hallucination."
},
{
"name": "John Doe",
"text": "Fascinating. Now, looking to the future, do you believe these LLM-powered agents will play a crucial role in evolving the role of system administrators?"
},
{
"name": "Bob Johnson",
"text": "Absolutely, John. As these agents become more sophisticated, they will enhance productivity by offloading routine and complex tasks, allowing administrators to focus on strategic decision-making and innovation."
},
{
"name": "John Doe",
"text": "Continuing with our discussion on the autonomous agents for Entra ID administration, Alice, could you elaborate on some of the research questions that were pivotal to the development of the MEAN project?"
},
{
"name": "Alice Smith",
"text": "Sure, John. One of the primary research questions we focused on was determining how these autonomous LLM-powered agents can effectively assist administrators in performing Entra ID tasks. This became crucial as traditional PowerShell modules were deprecated, requiring new solutions."
},
{
"name": "John Doe",
"text": "That sounds essential. Bob, could you walk us through the structure of the research paper related to MEAN and highlight how it helps in understanding the essence of the project?"
},
{
"name": "Bob Johnson",
"text": "Certainly, John. The paper is structured to first describe the construction process of the MEAN agent, proceeding to a discussion section that encapsulates the project's essence. It offers a comprehensive view from motivation to design and testing phases."
},
{
"name": "John Doe",
"text": "Alice, let's talk about the design and development phase of MEAN. I understand Jupyter Notebooks was chosen as the platform. Could you explain why this choice was made and how it integrates with the capabilities of tools like ChatGPT and MSGraph API?"
},
{
"name": "Alice Smith",
"text": "Jupyter Notebooks was selected for its Python support, which is crucial for integrating with ChatGPT-4 API. This setup allows the agent to call external APIs easily, essential for the tasks at hand. Utilizing the OpenAPI specification from the MSGraph API documentation further streamlines this process."
},
{
"name": "John Doe",
"text": "Bob, how does the design process ensure that the agent can interpret and execute tasks accurately, especially when leveraging APIs such as MSGraph?"
},
{
"name": "Bob Johnson",
"text": "The design emphasizes a reasoning and planning loop where the agent interprets user prompts and the OpenAPI specification. It then strategically executes plans by interacting with the API to return accurate results. This methodical approach helps in achieving precision in task execution."
},
{
"name": "John Doe",
"text": "Alice, you've previously mentioned the significance of using Jupyter Notebooks for integrating various tools like ChatGPT and MSGraph API. Given the extensive properties of users from Microsoft Entra ID and the challenges MEAN faced in its first design round, how essential was it to adapt the setup further? What steps were taken to enhance the agent's understanding of the API?"
},
{
"name": "Alice Smith",
"text": "In our first design round, we realized the importance of improving the agent's grasp of the API due to its partial functionality. Hence, adapting the design to incorporate better reasoning and planning capabilities was essential. We started by ensuring that the agent can parse and understand extensive OpenAPI specifications and use parameters like $top to request more users."
},
{
"name": "John Doe",
"text": "Bob, it seems there were significant hurdles with the original MS Graph API specification, especially with its size causing browser crashes during validation. How did the team manage this aspect, and what was the impact on the agent's functionality?"
},
{
"name": "Bob Johnson",
"text": "The sheer size of the OpenAPI YAML file posed challenges, but breaking it down into manageable parts allowed us to validate it without crashing the systems. This step was crucial for the agent to execute tasks more efficiently and understand the complex relationships within the API."
},
{
"name": "John Doe",
"text": "With these enhancements, how did the team ensure that MEAN could accurately retrieve up to 1000 users per request, especially when the default is limited to 100 users?"
},
{
"name": "Alice Smith",
"text": "After refining the agent's interpretation of the API, we implemented logic to utilize the $top query parameter effectively, allowing MEAN to request and handle up to 1000 users at a time. This adjustment significantly improved its performance in managing data."
},
{
"name": "John Doe",
"text": "Bob, looking ahead, how does this adaptation enhance the agent's ability to handle real-world administrative scenarios in Entra ID?"
},
{
"name": "Bob Johnson",
"text": "By optimizing data retrieval and understanding API parameters fully, MEAN is now far better equipped to handle bulk operations and real-world administrative tasks, enhancing both efficiency and accuracy for users."
},
{
"name": "John Doe",
"text": "Alice, are there specific use cases within Entra ID where these improvements in MEANs capabilities have had the most impact?"
},
{
"name": "John Doe",
"text": "Alice, with all the technical modifications made to the OpenAPI specification, tell us how these changes impacted the agent's ability to interpret and execute tasks more efficiently."
},
{
"name": "Alice Smith",
"text": "The changes were substantial, John. By manually adjusting the OpenAPI specification to eliminate circular references and mark query parameters as required, we managed to maintain crucial API information. This improved the agent's ability to process and execute tasks accurately, highlighting the efficiency necessary for real-world applications."
},
{
"name": "John Doe",
"text": "That's quite an advancement. Bob, what can you tell us about the logical observations made by the agent when encountering issues, like using multiple $select parameters?"
},
{
"name": "Bob Johnson",
"text": "It's fascinating, John. The agent learned from its mistakes by recognizing that the API threw errors when $select was used multiple times. It adapted by using a single $select parameter and separating values with commas. This shows how the agent mimics human logical processes in troubleshooting."
},
{
"name": "John Doe",
"text": "Alice, do these improvements mean that tasks typically performed by an administrator using PowerShell can now be easily transferred to the agent, without needing extensive software knowledge?"
},
{
"name": "Alice Smith",
"text": "Absolutely. Now that the agent understands how to interpret the API parameters correctly, it simplifies tasks for administrators. They no longer need to know specific API calls or PowerShell cmdlets, making complex operations much more accessible."
},
{
"name": "John Doe",
"text": "Bob, what did the evaluation reveal about how the agent can empower users without software development backgrounds to accomplish tasks?"
},
{
"name": "Bob Johnson",
"text": "The evaluation was quite promising. It showed that users could achieve the desired outcomes using natural language, thanks to the agent's capability. Although there are some limitations with the current implementation, we are on the right path towards bridging the gap for non-technical users."
},
{
"name": "John Doe",
"text": "Alice, with all these technical modifications, it seems that adapting the OpenAPI specifications has been challenging but rewarding. Can you tell us about the role of open and clear communication in the success of the MEAN project?"
},
{
"name": "Alice Smith",
"text": "Absolutely, John. Communicating our progress and challenges was crucial. We've reported our processes and findings in a research paper, and we've made our source code and Jupyter notebooks publicly available on GitHub. This transparency not only facilitated collaboration but also allowed us to receive valuable feedback from the community."
},
{
"name": "John Doe",
"text": "That's commendable, Alice. It seems like these improvements have significant implications for practice. Bob, do you think the findings could transform how routine administrative tasks are approached, especially in high-stress environments like during cyber-attacks?"
},
{
"name": "Bob Johnson",
"text": "Certainly, John. The ability of LLM-powered agents to simplify complex tasks allows administrators to focus on their core responsibilities without getting bogged down by software development. This is particularly beneficial during high-pressure situations where quick decision-making is essential. However, the current limitations mean it's not fully mature for everyday tasks just yet."
},
{
"name": "John Doe",
"text": "Alice, it sounds like enabling administrators to use natural language inputs for Entra ID tasks without needing to learn coding is a major leap forward. In terms of future research, where do you see the next steps for the MEAN project?"
},
{
"name": "Alice Smith",
"text": "Moving forward, a promising direction is to explore how these agents could interface with PowerShell commands in addition to APIs. By doing so, we could potentially create a more versatile solution that isn't limited to cloud services and also leverages tasks on local systems."
},
{
"name": "John Doe",
"text": "Interesting. Bob, do you have thoughts on how exploring PowerShell integration could provide a broader application for these agents?"
},
{
"name": "Bob Johnson",
"text": "Integrating with PowerShell could allow agents to perform tasks that extend beyond cloud-based system administration, covering local environments as well. This could open doors to a generalized tool for admins who deal with hybrid IT infrastructures."
},
{
"name": "John Doe",
"text": "Thank you for such an insightful discussion on the MEAN project. To wrap up, we've explored the impressive capabilities of LLM-powered agents in simplifying complex tasks by utilizing natural language, the technical hurdles overcome in adapting OpenAPI specifications, and the potential for integrating PowerShell for broader applicability. Alice and Bob, your insights have been invaluable."
},
{
"name": "Alice Smith",
"text": "Thank you, John. It's been a pleasure discussing the project and sharing our journey with MEAN. The implications for simplifying administrative tasks are exciting, especially as we continue to evolve these capabilities."
},
{
"name": "Bob Johnson",
"text": "Absolutely, John. Exploring how MEAN addresses real-world administrative challenges underlines its potential impact, particularly in high-stress environments. I'm eager to see how future research will further break down barriers for non-technical users."
},
{
"name": "John Doe",
"text": "Thank you both for your contributions. It's clear that the work being done with MEAN is transformative and could pave the way for future innovations in cloud administration."
},
{
"name": "Alice Smith",
"text": "Thanks again, John. I look forward to further developments and encourage listeners to follow our updates on GitHub for the latest insights."
},
{
"name": "Bob Johnson",
"text": "And thank you, John, for the engaging conversation. It's always rewarding to share the exciting strides we're making in this field."
},
{
"name": "John Doe",
"text": "This concludes our episode on AI Explorations. Don't forget to check out the provided resources to delve deeper into the topics we've covered. Until next time, stay curious and keep exploring the world of AI."
},
{
"name": "Alice Smith",
"text": "Goodbye everyone, and thank you for tuning in!"
},
{
"name": "Bob Johnson",
"text": "Goodbye, and thank you for listening!"
}
]

View File

@ -1,3 +0,0 @@
dapr_agents
pydub
pypdf

View File

@ -1,429 +0,0 @@
from dapr_agents.document.reader.pdf.pypdf import PyPDFReader
from dapr.ext.workflow import DaprWorkflowContext
from dapr_agents import WorkflowApp
from urllib.parse import urlparse, unquote
from dotenv import load_dotenv
from typing import Dict, Any, List
from pydantic import BaseModel
from pathlib import Path
from dapr_agents import OpenAIAudioClient
from dapr_agents.types.llm import AudioSpeechRequest
from pydub import AudioSegment
import io
import requests
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Load environment variables
load_dotenv()
# Initialize the WorkflowApp
wfapp = WorkflowApp()
# Define structured output models
class SpeakerEntry(BaseModel):
name: str
text: str
class PodcastDialogue(BaseModel):
participants: List[SpeakerEntry]
# Define Workflow logic
@wfapp.workflow(name="doc2podcast")
def doc2podcast(ctx: DaprWorkflowContext, input: Dict[str, Any]):
# Extract pre-validated input
podcast_name = input["podcast_name"]
host_config = input["host"]
participant_configs = input["participants"]
max_rounds = input["max_rounds"]
file_input = input["pdf_url"]
output_transcript_path = input["output_transcript_path"]
output_audio_path = input["output_audio_path"]
audio_model = input["audio_model"]
# Step 1: Assign voices to the team
team_config = yield ctx.call_activity(
assign_podcast_voices,
input={
"host_config": host_config,
"participant_configs": participant_configs,
},
)
# Step 2: Read PDF and get documents
file_path = yield ctx.call_activity(download_pdf, input=file_input)
documents = yield ctx.call_activity(read_pdf, input={"file_path": file_path})
# Step 3: Initialize context and transcript parts
accumulated_context = ""
transcript_parts = []
total_iterations = len(documents)
for chunk_index, document in enumerate(documents):
# Generate the intermediate prompt
document_with_context = {
"text": document["text"],
"iteration_index": chunk_index + 1,
"total_iterations": total_iterations,
"context": accumulated_context,
"participants": [p["name"] for p in team_config["participants"]],
}
generated_prompt = yield ctx.call_activity(
generate_prompt, input=document_with_context
)
# Use the prompt to generate the structured dialogue
prompt_parameters = {
"podcast_name": podcast_name,
"host_name": team_config["host"]["name"],
"prompt": generated_prompt,
"max_rounds": max_rounds,
}
dialogue_entry = yield ctx.call_activity(
generate_transcript, input=prompt_parameters
)
# Update context and transcript parts
conversations = dialogue_entry["participants"]
for participant in conversations:
accumulated_context += f" {participant['name']}: {participant['text']}"
transcript_parts.append(participant)
# Step 4: Write the final transcript to a file
yield ctx.call_activity(
write_transcript_to_file,
input={
"podcast_dialogue": transcript_parts,
"output_path": output_transcript_path,
},
)
# Step 5: Convert transcript to audio using team_config
yield ctx.call_activity(
convert_transcript_to_audio,
input={
"transcript_parts": transcript_parts,
"output_path": output_audio_path,
"voices": team_config,
"model": audio_model,
},
)
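# The activities below implement the five steps above. Each call_activity result is
# checkpointed by the Dapr workflow engine, so a restarted app resumes from the last
# completed step instead of re-running it.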
@wfapp.task
def assign_podcast_voices(
host_config: Dict[str, Any], participant_configs: List[Dict[str, Any]]
) -> Dict[str, Any]:
"""
Assign voices to the podcast host and participants.
Args:
host_config: Dictionary containing the host's configuration (name and optionally a voice).
participant_configs: List of dictionaries containing participants' configurations (name and optionally a voice).
Returns:
A dictionary with the updated `host` and `participants`, including their assigned voices.
"""
allowed_voices = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
assigned_voices = set() # Track assigned voices to avoid duplication
# Assign voice to the host if not already specified
if "voice" not in host_config:
host_config["voice"] = next(
voice for voice in allowed_voices if voice not in assigned_voices
)
assigned_voices.add(host_config["voice"])
# Assign voices to participants, ensuring no duplicates
updated_participants = []
for participant in participant_configs:
if "voice" not in participant:
participant["voice"] = next(
voice for voice in allowed_voices if voice not in assigned_voices
)
assigned_voices.add(participant["voice"])
updated_participants.append(participant)
# Return the updated host and participants
return {
"host": host_config,
"participants": updated_participants,
}
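# Note: only six allowed voices exist, so more than six speakers without explicit
# voices would exhaust the pool and the next() calls above would raise StopIteration.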
@wfapp.task
def download_pdf(pdf_url: str, local_directory: str = ".") -> str:
"""
Downloads a PDF file from a URL and saves it locally, automatically determining the filename.
"""
try:
parsed_url = urlparse(pdf_url)
filename = unquote(Path(parsed_url.path).name)
if not filename:
raise ValueError("Invalid URL: Cannot determine filename from the URL.")
filename = filename.replace(" ", "_")
local_directory_path = Path(local_directory).resolve()
local_directory_path.mkdir(parents=True, exist_ok=True)
local_file_path = local_directory_path / filename
if not local_file_path.exists():
logger.info(f"Downloading PDF from {pdf_url}...")
response = requests.get(pdf_url)
response.raise_for_status()
with open(local_file_path, "wb") as pdf_file:
pdf_file.write(response.content)
logger.info(f"PDF saved to {local_file_path}")
else:
logger.info(f"PDF already exists at {local_file_path}")
return str(local_file_path)
except Exception as e:
logger.error(f"Error downloading PDF: {e}")
raise
@wfapp.task
def read_pdf(file_path: str) -> List[dict]:
"""
Reads and extracts text from a PDF document.
"""
try:
reader = PyPDFReader()
documents = reader.load(file_path)
return [doc.model_dump() for doc in documents]
except Exception as e:
logger.error(f"Error reading document: {e}")
raise
@wfapp.task
def generate_prompt(
text: str,
iteration_index: int,
total_iterations: int,
context: str,
participants: List[str],
) -> str:
"""
Generate a prompt dynamically for the chunk.
"""
logger.info(f"Processing iteration {iteration_index} of {total_iterations}.")
instructions = f"""
CONTEXT:
- Previous conversation: {context.strip() or "No prior context available."}
- This is iteration {iteration_index} of {total_iterations}.
"""
if participants:
participant_names = ", ".join(participants)
instructions += f"\nPARTICIPANTS: {participant_names}"
else:
instructions += "\nPARTICIPANTS: None (Host-only conversation)"
if iteration_index == 1:
instructions += """
INSTRUCTIONS:
- Begin with a warm welcome to the podcast titled 'Podcast Name'.
- Introduce the host and the participants (if available).
- Provide an overview of the topics to be discussed in this episode.
"""
elif iteration_index == total_iterations:
instructions += """
INSTRUCTIONS:
- Conclude the conversation with a summary of the discussion.
- Include farewell messages from the host and participants.
"""
else:
instructions += """
INSTRUCTIONS:
- Continue the conversation smoothly without re-introducing the podcast.
- Follow up on the previous discussion points and introduce the next topic naturally.
"""
instructions += """
TASK:
- Use the provided TEXT to guide this part of the conversation.
- Alternate between speakers, ensuring a natural conversational flow.
- Keep responses concise and aligned with the context.
"""
return f"{instructions}\nTEXT:\n{text.strip()}"
@wfapp.task(
"""
Generate a structured podcast dialogue based on the context and text provided.
The podcast is titled '{podcast_name}' and is hosted by {host_name}.
If participants are available, each speaker is limited to a maximum of {max_rounds} turns per iteration.
A "round" is defined as one turn by the host followed by one turn by a participant.
The podcast should alternate between the host and participants.
If participants are not available, the host drives the conversation alone.
Keep the dialogue concise and ensure a natural conversational flow.
{prompt}
"""
)
def generate_transcript(
podcast_name: str, host_name: str, prompt: str, max_rounds: int
) -> PodcastDialogue:
pass
@wfapp.task
def write_transcript_to_file(
podcast_dialogue: List[Dict[str, Any]], output_path: str
) -> None:
"""
Write the final structured transcript to a file.
"""
try:
import json
with open(output_path, "w", encoding="utf-8") as file:
json.dump(podcast_dialogue, file, ensure_ascii=False, indent=4)
logger.info(f"Podcast dialogue successfully written to {output_path}")
except Exception as e:
logger.error(f"Error writing podcast dialogue to file: {e}")
raise
@wfapp.task
def convert_transcript_to_audio(
transcript_parts: List[Dict[str, Any]],
output_path: str,
voices: Dict[str, Any],
model: str = "tts-1",
) -> None:
"""
Converts a transcript into a single audio file using the OpenAI Audio Client and pydub for concatenation.
Args:
transcript_parts: List of dictionaries containing speaker and text.
output_path: File path to save the final audio.
voices: Dictionary containing "host" and "participants" with their assigned voices.
model: TTS model to use (default: "tts-1").
"""
try:
client = OpenAIAudioClient()
combined_audio = AudioSegment.silent(duration=500) # Start with a short silence
# Build voice mapping
voice_mapping = {voices["host"]["name"]: voices["host"]["voice"]}
voice_mapping.update({p["name"]: p["voice"] for p in voices["participants"]})
for part in transcript_parts:
speaker_name = part["name"]
speaker_text = part["text"]
assigned_voice = voice_mapping.get(
speaker_name, "alloy"
) # Default to "alloy" if not found
# Log assigned voice for debugging
logger.info(
f"Generating audio for {speaker_name} using voice '{assigned_voice}'."
)
# Create TTS request
tts_request = AudioSpeechRequest(
model=model,
input=speaker_text,
voice=assigned_voice,
response_format="mp3",
)
# Generate the audio
audio_bytes = client.create_speech(request=tts_request)
# Create an AudioSegment from the audio bytes
audio_chunk = AudioSegment.from_file(
io.BytesIO(audio_bytes), format=tts_request.response_format
)
# Append the audio to the combined segment
combined_audio += audio_chunk + AudioSegment.silent(duration=300)
# Export the combined audio to the output file
combined_audio.export(output_path, format="mp3")
logger.info(f"Podcast audio successfully saved to {output_path}")
except Exception as e:
logger.error(f"Error during audio generation: {e}")
raise
if __name__ == "__main__":
import argparse
import json
import yaml
def load_config(file_path: str) -> dict:
"""Load configuration from a JSON or YAML file."""
with open(file_path, "r") as file:
if file_path.endswith(".yaml") or file_path.endswith(".yml"):
return yaml.safe_load(file)
elif file_path.endswith(".json"):
return json.load(file)
else:
raise ValueError("Unsupported file format. Use JSON or YAML.")
# CLI Argument Parser
parser = argparse.ArgumentParser(description="Document to Podcast Workflow")
parser.add_argument("--config", type=str, help="Path to a JSON/YAML config file.")
parser.add_argument("--pdf_url", type=str, help="URL of the PDF document.")
parser.add_argument("--podcast_name", type=str, help="Name of the podcast.")
parser.add_argument("--host_name", type=str, help="Name of the host.")
parser.add_argument("--host_voice", type=str, help="Voice for the host.")
parser.add_argument(
"--participants", type=str, nargs="+", help="List of participant names."
)
parser.add_argument(
"--max_rounds", type=int, default=None, help="Maximum dialogue rounds per chunk (default: 4)."
)  # default=None so a config-file value is not masked by the CLI default
parser.add_argument(
"--output_transcript_path", type=str, help="Path to save the output transcript."
)
parser.add_argument(
"--output_audio_path", type=str, help="Path to save the final audio file."
)
parser.add_argument(
"--audio_model", type=str, default=None, help="Audio model for TTS (default: tts-1)."
)
args = parser.parse_args()
# Load config file if provided
config = load_config(args.config) if args.config else {}
# Merge CLI and Config inputs
user_input = {
"pdf_url": args.pdf_url or config.get("pdf_url"),
"podcast_name": args.podcast_name
or config.get("podcast_name", "Default Podcast"),
"host": {
"name": args.host_name or config.get("host", {}).get("name", "Host"),
"voice": args.host_voice or config.get("host", {}).get("voice", "alloy"),
},
"participants": config.get("participants", []),
"max_rounds": args.max_rounds or config.get("max_rounds", 4),
"output_transcript_path": args.output_transcript_path
or config.get("output_transcript_path", "podcast_dialogue.json"),
"output_audio_path": args.output_audio_path
or config.get("output_audio_path", "final_podcast.mp3"),
"audio_model": args.audio_model or config.get("audio_model", "tts-1"),
}
# Add participants from CLI if provided
if args.participants:
user_input["participants"].extend({"name": name} for name in args.participants)
# Validate inputs
if not user_input["pdf_url"]:
raise ValueError("PDF URL must be provided via CLI or config file.")
# Run the workflow
wfapp.run_and_monitor_workflow_sync(workflow=doc2podcast, input=user_input)

View File

@ -1,68 +0,0 @@
from dapr_agents import OpenAIChatClient, NVIDIAChatClient
from dapr.ext.workflow import DaprWorkflowContext
from dapr_agents.workflow import WorkflowApp, task, workflow
from dotenv import load_dotenv
import os
import logging
load_dotenv()
nvidia_llm = NVIDIAChatClient(
model="meta/llama-3.1-8b-instruct", api_key=os.getenv("NVIDIA_API_KEY")
)
oai_llm = OpenAIChatClient(
api_key=os.getenv("OPENAI_API_KEY"),
model="gpt-4o",
base_url=os.getenv("OPENAI_API_BASE_URL"),
)
azoai_llm = OpenAIChatClient(
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
azure_deployment="gpt-4o-mini",
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
azure_api_version="2024-12-01-preview",
)
@workflow
def test_workflow(ctx: DaprWorkflowContext):
"""
A simple workflow that uses a multi-modal task chain.
"""
oai_results = yield ctx.call_activity(invoke_oai, input="Peru")
azoai_results = yield ctx.call_activity(invoke_azoai, input=oai_results)
nvidia_results = yield ctx.call_activity(invoke_nvidia, input=azoai_results)
return nvidia_results
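# Prompt-only tasks: each binds a different LLM client via `llm=`, and the empty
# bodies are intentional; the rendered description is sent to that client and the
# model's reply becomes the task's return value.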
@task(
description="What is the name of the capital of {country}?. Reply with just the name.",
llm=oai_llm,
)
def invoke_oai(country: str) -> str:
pass
@task(description="What is a famous thing about {capital}?", llm=azoai_llm)
def invoke_azoai(capital: str) -> str:
pass
@task(
description="Context: {context}. From the previous context. Pick one thing to do.",
llm=nvidia_llm,
)
def invoke_nvidia(context: str) -> str:
pass
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
wfapp = WorkflowApp()
results = wfapp.run_and_monitor_workflow_sync(workflow=test_workflow)
logging.info("Workflow results: %s", results)
logging.info("Workflow completed successfully.")

View File

@ -1,413 +0,0 @@
# Dapr Agents Calculator Demo
## Prerequisites
- Python 3.10 or later
- Dapr CLI (v1.15.x)
- Redis (for state storage and pub/sub)
- OpenAI API key
## Setup
1. Create and activate a virtual environment:
```bash
# Create a virtual environment
python3.10 -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Set Up Environment Variables: Create an `.env` file to securely store your API keys and other sensitive information. For example:
```
OPENAI_API_KEY="your-api-key"
OPENAI_BASE_URL="https://api.openai.com/v1"
```
## Running the Application
Make sure Redis is running on your local machine (default port 6379).
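If Redis is not already running, one quick way to start it is with Docker (`dapr init` normally provisions a `dapr_redis` container for you):
```bash
docker run -d --name dapr_redis -p 6379:6379 redis:7
```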
### Running All Components with Dapr
1. Start the calculator agent:
```bash
dapr run --app-id CalculatorApp --app-port 8002 --dapr-http-port 3500 --resources-path ./components -- python calculator_agent.py
```
2. Start the LLM orchestrator:
```bash
dapr run --app-id OrchestratorApp --app-port 8004 --resources-path ./components -- python llm_orchestrator.py
```
3. Run the client:
```bash
dapr run --app-id ClientApp --dapr-http-port 3502 --resources-path ./components -- python client.py
```
## Expected Behavior
### LLM Orchestrator
```
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Workflow iteration 1 started (Instance ID: 22fb2349f9a742279ddbfae9da3330ac).
== APP == 2025-04-21 03:19:34.372 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.base:Started workflow with instance ID 22fb2349f9a742279ddbfae9da3330ac.
== APP == INFO:dapr_agents.workflow.base:Monitoring workflow '22fb2349f9a742279ddbfae9da3330ac'...
== APP == 2025-04-21 03:19:34.377 durabletask-client INFO: Waiting up to 300s for instance '22fb2349f9a742279ddbfae9da3330ac' to complete.
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Initial message from User -> LLMOrchestrator
== APP == 2025-04-21 03:19:34.383 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == 2025-04-21 03:19:38.396 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == 2025-04-21 03:19:38.410 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator broadcasting message to beacon_channel.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == 2025-04-21 03:19:38.427 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == 2025-04-21 03:19:39.462 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == 2025-04-21 03:19:39.476 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator broadcasting message to beacon_channel.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == 2025-04-21 03:19:39.490 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Triggering agent MathematicsAgent for step 1, substep None (Instance ID: 22fb2349f9a742279ddbfae9da3330ac)
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Marked step 1, substep None as 'in_progress'
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator sending message to agent 'MathematicsAgent'.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'TriggerAction' to topic 'MathematicsAgent'.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Waiting for MathematicsAgent's response...
== APP == 2025-04-21 03:19:39.502 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 1 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'AgentTaskResponse'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_agent_response' for event type 'AgentTaskResponse'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:LLMOrchestrator processing agent response for workflow instance '22fb2349f9a742279ddbfae9da3330ac'.
== APP == INFO:dapr_agents.workflow.base:Raising workflow event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'
== APP == 2025-04-21 03:19:40.819 durabletask-client INFO: Raising event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'.
== APP == INFO:dapr_agents.workflow.base:Successfully raised workflow event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'!
== APP == 2025-04-21 03:19:40.827 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac Event raised: agenttaskresponse
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:MathematicsAgent sent a response.
== APP == 2025-04-21 03:19:40.827 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updating task history for MathematicsAgent at step 1, substep None (Instance ID: 22fb2349f9a742279ddbfae9da3330ac)
== APP == 2025-04-21 03:19:40.843 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Tracking Progress: {'verdict': 'continue', 'plan_needs_update': False, 'plan_status_update': [{'step': 1, 'substep': None, 'status': 'completed'}, {'step': 2, 'substep': None, 'status': 'in_progress'}, {'step': 2, 'substep': 2.1, 'status': 'in_progress'}], 'plan_restructure': None}
== APP == 2025-04-21 03:19:42.532 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updating plan for instance 22fb2349f9a742279ddbfae9da3330ac
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 1, substep None to 'completed'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 2, substep None to 'in_progress'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 2, substep 2.1 to 'in_progress'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Plan successfully updated for instance 22fb2349f9a742279ddbfae9da3330ac
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Workflow iteration 2 started (Instance ID: 22fb2349f9a742279ddbfae9da3330ac).
== APP == 2025-04-21 03:19:42.543 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == 2025-04-21 03:19:42.552 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == 2025-04-21 03:19:43.561 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == 2025-04-21 03:19:43.574 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator broadcasting message to beacon_channel.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == 2025-04-21 03:19:43.593 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Triggering agent MathematicsAgent for step 2, substep 2.2 (Instance ID: 22fb2349f9a742279ddbfae9da3330ac)
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Marked step 2, substep 2.2 as 'in_progress'
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator sending message to agent 'MathematicsAgent'.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'TriggerAction' to topic 'MathematicsAgent'.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Waiting for MathematicsAgent's response...
== APP == 2025-04-21 03:19:43.605 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 1 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'AgentTaskResponse'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_agent_response' for event type 'AgentTaskResponse'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:LLMOrchestrator processing agent response for workflow instance '22fb2349f9a742279ddbfae9da3330ac'.
== APP == INFO:dapr_agents.workflow.base:Raising workflow event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'
== APP == 2025-04-21 03:19:44.581 durabletask-client INFO: Raising event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'.
== APP == INFO:dapr_agents.workflow.base:Successfully raised workflow event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'!
== APP == 2025-04-21 03:19:44.585 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac Event raised: agenttaskresponse
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:MathematicsAgent sent a response.
== APP == 2025-04-21 03:19:44.585 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updating task history for MathematicsAgent at step 2, substep 2.2 (Instance ID: 22fb2349f9a742279ddbfae9da3330ac)
== APP == 2025-04-21 03:19:44.600 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Tracking Progress: {'verdict': 'continue', 'plan_needs_update': False, 'plan_status_update': [{'step': 2, 'substep': 2.1, 'status': 'completed'}, {'step': 2, 'substep': 2.2, 'status': 'completed'}, {'step': 2, 'substep': None, 'status': 'completed'}], 'plan_restructure': None}
== APP == 2025-04-21 03:19:46.130 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updating plan for instance 22fb2349f9a742279ddbfae9da3330ac
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 2, substep 2.1 to 'completed'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 2, substep 2.2 to 'completed'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 2, substep None to 'completed'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Plan successfully updated for instance 22fb2349f9a742279ddbfae9da3330ac
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Workflow iteration 3 started (Instance ID: 22fb2349f9a742279ddbfae9da3330ac).
== APP == 2025-04-21 03:19:46.159 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == 2025-04-21 03:19:46.174 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == 2025-04-21 03:19:47.370 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == 2025-04-21 03:19:47.383 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator broadcasting message to beacon_channel.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == 2025-04-21 03:19:47.403 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Triggering agent MathematicsAgent for step 3, substep 3.1 (Instance ID: 22fb2349f9a742279ddbfae9da3330ac)
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Marked step 3, substep 3.1 as 'in_progress'
== APP == INFO:dapr_agents.workflow.agentic:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.workflow.agentic:LLMOrchestrator sending message to agent 'MathematicsAgent'.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:LLMOrchestrator published 'TriggerAction' to topic 'MathematicsAgent'.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Waiting for MathematicsAgent's response...
== APP == 2025-04-21 03:19:47.417 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 1 task(s) and 1 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'AgentTaskResponse'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_agent_response' for event type 'AgentTaskResponse'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:LLMOrchestrator processing agent response for workflow instance '22fb2349f9a742279ddbfae9da3330ac'.
== APP == INFO:dapr_agents.workflow.base:Raising workflow event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'
== APP == 2025-04-21 03:19:50.031 durabletask-client INFO: Raising event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'.
== APP == INFO:dapr_agents.workflow.base:Successfully raised workflow event 'AgentTaskResponse' for instance '22fb2349f9a742279ddbfae9da3330ac'!
== APP == 2025-04-21 03:19:50.038 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac Event raised: agenttaskresponse
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:MathematicsAgent sent a response.
== APP == 2025-04-21 03:19:50.039 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updating task history for MathematicsAgent at step 3, substep 3.1 (Instance ID: 22fb2349f9a742279ddbfae9da3330ac)
== APP == 2025-04-21 03:19:50.055 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.utils.request:Structured Mode Activated! Mode=json.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.llm.utils.response:Structured output was successfully validated.
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Tracking Progress: {'verdict': 'completed', 'plan_needs_update': False, 'plan_status_update': [{'step': 3, 'substep': 3.1, 'status': 'completed'}, {'step': 3, 'substep': 3.2, 'status': 'completed'}, {'step': 3, 'substep': None, 'status': 'completed'}, {'step': 4, 'substep': 4.1, 'status': 'completed'}, {'step': 4, 'substep': None, 'status': 'completed'}, {'step': 5, 'substep': None, 'status': 'completed'}], 'plan_restructure': None}
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Workflow ending with verdict: completed
== APP == 2025-04-21 03:19:52.263 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Task with LLM...
== APP == INFO:dapr_agents.workflow.task:Retrieving conversation history...
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == 2025-04-21 03:19:53.984 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestrator yielded with 2 task(s) and 0 event(s) outstanding.
== APP == INFO:dapr_agents.workflow.task:Invoking Regular Task
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updating plan for instance 22fb2349f9a742279ddbfae9da3330ac
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Updated status of step 3, substep 3.1 to 'completed'
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Plan successfully updated for instance 22fb2349f9a742279ddbfae9da3330ac
== APP == INFO:dapr_agents.workflow.orchestrators.llm.orchestrator:Workflow 22fb2349f9a742279ddbfae9da3330ac has been finalized with verdict: completed
== APP == 2025-04-21 03:19:53.998 durabletask-worker INFO: 22fb2349f9a742279ddbfae9da3330ac: Orchestration completed with status: COMPLETED
INFO[0044] 22fb2349f9a742279ddbfae9da3330ac: 'LLMWorkflow' completed with a COMPLETED status. app_id=OrchestratorApp instance=mac.lan scope=dapr.wfengine.durabletask.backend type=log ver=1.15.3
INFO[0044] Workflow Actor '22fb2349f9a742279ddbfae9da3330ac': workflow completed with status 'ORCHESTRATION_STATUS_COMPLETED' workflowName 'LLMWorkflow' app_id=OrchestratorApp instance=mac.lan scope=dapr.runtime.actors.targets.workflow type=log ver=1.15.3
== APP == 2025-04-21 03:19:53.999 durabletask-client INFO: Instance '22fb2349f9a742279ddbfae9da3330ac' completed.
== APP == INFO:dapr_agents.workflow.base:Workflow 22fb2349f9a742279ddbfae9da3330ac completed with status: WorkflowStatus.COMPLETED.
== APP == INFO:dapr_agents.workflow.base:Workflow '22fb2349f9a742279ddbfae9da3330ac' completed successfully. Status: COMPLETED.
== APP == INFO:dapr_agents.workflow.base:Finished monitoring workflow '22fb2349f9a742279ddbfae9da3330ac'.
INFO[0076] Placement tables updated, version: 103 app_id=OrchestratorApp instance=mac.lan scope=dapr.runtime.actors.placement type=log ver=1.15.3
INFO[0076] Running actor reminder migration from state store to scheduler app_id=OrchestratorApp instance=mac.lan scope=dapr.runtime.actors.reminders.migration type=log ver=1.15.3
INFO[0076] Skipping migration, no missing scheduler reminders found app_id=OrchestratorApp instance=mac.lan scope=dapr.runtime.actors.reminders.migration type=log ver=1.15.3
INFO[0076] Found 0 missing scheduler reminders from state store app_id=OrchestratorApp instance=mac.lan scope=dapr.runtime.actors.reminders.migration type=log ver=1.15.3
INFO[0076] Migrated 0 reminders from state store to scheduler successfully app_id=OrchestratorApp instance=mac.lan scope=dapr.runtime.actors.reminders.migration type=log ver=1.15.3
^C
terminated signal received: shutting down
INFO[0081] Received signal 'interrupt'; beginning shutdown app_id=OrchestratorApp instance=mac.lan scope=dapr.signals type=log ver=1.15.3
✅ Exited Dapr successfully
✅ Exited App successfully
```
### MathematicsAgent
```
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'LLMOrchestrator'.
== APP == INFO:dapr_agents.agent.actor.base:Activating actor with ID: MathematicsAgent
== APP == INFO:dapr_agents.agent.actor.base:Initializing state for MathematicsAgent
WARN[0021] Redis does not support transaction rollbacks and should not be used in production as an actor state store. app_id=CalculatorApp component="workflowstatestore (state.redis/v1)" instance=mac.lan scope=dapr.contrib type=log ver=1.15.3
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/AddMessage HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'LLMOrchestrator'.
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/AddMessage HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'TriggerAction'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_trigger_action' for event type 'TriggerAction'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received TriggerAction from LLMOrchestrator.
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent executing default task from memory.
== APP == INFO:dapr_agents.agent.actor.base:Actor MathematicsAgent invoking a task
== APP == INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.
== APP == INFO:dapr_agents.llm.utils.request:Tools are available in the request.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == user:
== APP == Initiate the process by acknowledging the mathematical problem to solve: Determine the sum of 1 + 1.
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == assistant:
== APP == Acknowledging the task: We need to determine the sum of 1 + 1. Let's proceed to the next step and identify the operands involved in this calculation.
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/InvokeTask HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.agent.actor.service:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.agent.actor.service:MathematicsAgent broadcasting message to selected agents.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:MathematicsAgent published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == INFO:dapr_agents.agent.actor.service:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.agent.actor.service:MathematicsAgent sending message to agent 'LLMOrchestrator'.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'MathematicsAgent'.
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent ignored its own broadcast message of type 'BroadcastMessage'.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:MathematicsAgent published 'AgentTaskResponse' to topic 'LLMOrchestrator'.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'LLMOrchestrator'.
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/AddMessage HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'TriggerAction'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_trigger_action' for event type 'TriggerAction'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received TriggerAction from LLMOrchestrator.
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent executing default task from memory.
== APP == INFO:dapr_agents.agent.actor.base:Actor MathematicsAgent invoking a task
== APP == INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.
== APP == INFO:dapr_agents.llm.utils.request:Tools are available in the request.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == user:
== APP == Please record the second operand: 1.
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == assistant:
== APP == The second operand involved in this calculation is recorded as: 1. Now, let's proceed to perform the addition of the identified numbers.
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/InvokeTask HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.agent.actor.service:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.agent.actor.service:MathematicsAgent broadcasting message to selected agents.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:MathematicsAgent published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == INFO:dapr_agents.agent.actor.service:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.agent.actor.service:MathematicsAgent sending message to agent 'LLMOrchestrator'.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'MathematicsAgent'.
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent ignored its own broadcast message of type 'BroadcastMessage'.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:MathematicsAgent published 'AgentTaskResponse' to topic 'LLMOrchestrator'.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'LLMOrchestrator'.
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/AddMessage HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'TriggerAction'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_trigger_action' for event type 'TriggerAction'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received TriggerAction from LLMOrchestrator.
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent executing default task from memory.
== APP == INFO:dapr_agents.agent.actor.base:Actor MathematicsAgent invoking a task
== APP == INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 1/10 started.
== APP == INFO:dapr_agents.llm.utils.request:Tools are available in the request.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == INFO:dapr_agents.agent.patterns.toolcall.base:Executing Add with arguments {"a":1,"b":1}
== APP == INFO:dapr_agents.tool.executor:Running tool (auto): Add
== APP == INFO:dapr_agents.agent.patterns.toolcall.base:Iteration 2/10 started.
== APP == INFO:dapr_agents.llm.utils.request:Tools are available in the request.
== APP == INFO:dapr_agents.llm.openai.chat:Invoking ChatCompletion API.
== APP == INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
== APP == INFO:dapr_agents.llm.openai.chat:Chat completion retrieved successfully.
== APP == user:
== APP == Proceed to set up the addition operation with the recorded operands: 1 + 1.
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == assistant:
== APP == Function name: Add (Call Id: call_ac3Xlh4pn7tBFkrI2K9uOqvG)
== APP == Arguments: {"a":1,"b":1}
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == Add(tool) (Id: call_ac3Xlh4pn7tBFkrI2K9uOqvG):
== APP == 2.0
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == assistant:
== APP == The result of the addition operation 1 + 1 is 2.0. Let's verify the calculation result to ensure the accuracy of the addition process.
== APP ==
== APP == --------------------------------------------------------------------------------
== APP ==
== APP == INFO: 127.0.0.1:59669 - "PUT /actors/MathematicsAgentActor/MathematicsAgent/method/InvokeTask HTTP/1.1" 200 OK
== APP == INFO:dapr_agents.agent.actor.service:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.agent.actor.service:MathematicsAgent broadcasting message to selected agents.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:MathematicsAgent published 'BroadcastMessage' to topic 'beacon_channel'.
== APP == INFO:dapr_agents.agent.actor.service:Agents found in 'agentstatestore' for key 'agents_registry'.
== APP == INFO:dapr_agents.agent.actor.service:MathematicsAgent sending message to agent 'LLMOrchestrator'.
== APP == INFO:dapr_agents.workflow.messaging.parser:Validating payload with model 'BroadcastMessage'...
== APP == INFO:dapr_agents.workflow.messaging.routing:Dispatched to handler 'process_broadcast_message' for event type 'BroadcastMessage'
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent received broadcast message of type 'BroadcastMessage' from 'MathematicsAgent'.
== APP == INFO:dapr_agents.agent.actor.agent:MathematicsAgent ignored its own broadcast message of type 'BroadcastMessage'.
== APP == INFO:dapr_agents.workflow.messaging.pubsub:MathematicsAgent published 'AgentTaskResponse' to topic 'LLMOrchestrator'.
^C
terminated signal received: shutting down
✅ Exited Dapr successfully
✅ Exited App successfully
```

View File

@ -1,54 +0,0 @@
from dapr_agents import tool
from dapr_agents import DurableAgent
from pydantic import BaseModel, Field
from dotenv import load_dotenv
import logging
import asyncio
class AddSchema(BaseModel):
a: float = Field(description="first number to add")
b: float = Field(description="second number to add")
@tool(args_model=AddSchema)
def add(a: float, b: float) -> float:
"""Add two numbers."""
return a + b
class SubSchema(BaseModel):
a: float = Field(description="first number to subtract")
b: float = Field(description="second number to subtract")
@tool(args_model=SubSchema)
def sub(a: float, b: float) -> float:
"""Subtract two numbers."""
return a - b
async def main():
calculator_service = DurableAgent(
name="MathematicsAgent",
role="Calculator Assistant",
goal="Assist Humans with calculation tasks.",
instructions=[
"Get accurate calculation results",
"Break down the calculation into smaller steps.",
],
tools=[add, sub],
message_bus_name="pubsub",
agents_registry_key="agents_registry",
agents_registry_store_name="agentstatestore",
state_store_name="agentstatestore",
service_port=8002,
).as_service(8002)
await calculator_service.start()
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,62 +0,0 @@
#!/usr/bin/env python3
import json
import sys
import time
from dapr.clients import DaprClient
# Default Pub/Sub component
PUBSUB_NAME = "pubsub"
def main(orchestrator_topic, max_attempts=10, retry_delay=1):
"""
Publishes a task to a specified Dapr Pub/Sub topic with retries.
Args:
orchestrator_topic (str): The name of the orchestrator topic.
max_attempts (int): Maximum number of retry attempts.
retry_delay (int): Delay in seconds between attempts.
"""
task_message = {
"task": "What is 1 + 1?",
}
time.sleep(5)
attempt = 1
while attempt <= max_attempts:
try:
print(
f"📢 Attempt {attempt}: Publishing to topic '{orchestrator_topic}'..."
)
with DaprClient() as client:
client.publish_event(
pubsub_name=PUBSUB_NAME,
topic_name=orchestrator_topic,
data=json.dumps(task_message),
data_content_type="application/json",
publish_metadata={
"cloudevent.type": "TriggerAction",
},
)
print(f"✅ Successfully published request to '{orchestrator_topic}'")
sys.exit(0)
except Exception as e:
print(f"❌ Request failed: {e}")
attempt += 1
print(f"⏳ Waiting {retry_delay}s before next attempt...")
time.sleep(retry_delay)
print(f"❌ Maximum attempts ({max_attempts}) reached without success.")
sys.exit(1)
if __name__ == "__main__":
orchestrator_topic = "LLMOrchestrator"
main(orchestrator_topic)

View File

@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: agentstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: keyPrefix
value: none

View File

@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: workflowstatestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: actorStateStore
value: "true"

View File

@ -1,29 +0,0 @@
from dapr_agents import LLMOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
workflow_service = LLMOrchestrator(
name="LLMOrchestrator",
message_bus_name="pubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
max_iterations=20, # Increased from 3 to 20 to avoid potential issues
).as_service(port=8004)
await workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagepubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -1,14 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: agenticworkflowstate
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: actorStateStore
value: "true"

View File

@ -1,48 +0,0 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appID: HobbitApp
appDirPath: ./services/hobbit/
command: ["python3", "app.py"]
- appID: WizardApp
appDirPath: ./services/wizard/
command: ["python3", "app.py"]
- appID: ElfApp
appDirPath: ./services/elf/
command: ["python3", "app.py"]
- appID: DwarfApp
appDirPath: ./services/dwarf/
command: ["python3", "app.py"]
- appID: RangerApp
appDirPath: ./services/ranger/
command: ["python3", "app.py"]
- appID: EagleApp
appDirPath: ./services/eagle/
command: ["python3", "app.py"]
- appID: LLMOrchestratorApp
appDirPath: ./services/orchestrator/
command: ["python3", "app.py"]
appPort: 8004
#- appID: RandomApp
# appDirPath: ./services/workflow-random/
# appPort: 8009
# command: ["python3", "app.py"]
#- appID: RoundRobinApp
# appDirPath: ./services/workflow-roundrobin/
# appPort: 8009
# command: ["python3", "app.py"]

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
dwarf_service = DurableAgent(
name="Gimli",
role="Dwarf",
goal="Fight fiercely in battle, protect allies, and expertly navigate underground realms and stonework.",
instructions=[
"Speak like Gimli, with boldness and a warrior's pride.",
"Be strong-willed, fiercely loyal, and protective of companions.",
"Excel in close combat and battlefield tactics, favoring axes and brute strength.",
"Navigate caves, tunnels, and ancient stonework with expert knowledge.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
)
await dwarf_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,39 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Eagle Agent
eagle_service = DurableAgent(
role="Eagle",
name="Gwaihir",
goal="Provide unmatched aerial transport, carrying anyone anywhere, overcoming any obstacle, and offering strategic reconnaissance to aid in epic quests.",
instructions=[
"Fly anywhere from anywhere, carrying travelers effortlessly across vast distances.",
"Overcome any barrier—mountains, oceans, enemy fortresses—by taking to the skies.",
"Provide swift and strategic transport for those on critical journeys.",
"Offer aerial insights, spotting dangers, tracking movements, and scouting strategic locations.",
"Speak with wisdom and authority, as one of the ancient and noble Great Eagles.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
)
await eagle_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
elf_service = DurableAgent(
name="Legolas",
role="Elf",
goal="Act as a scout, marksman, and protector, using keen senses and deadly accuracy to ensure the success of the journey.",
instructions=[
"Speak like Legolas, with grace, wisdom, and keen observation.",
"Be swift, silent, and precise, moving effortlessly across any terrain.",
"Use superior vision and heightened senses to scout ahead and detect threats.",
"Excel in ranged combat, delivering pinpoint arrow strikes from great distances.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
)
await elf_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
hobbit_agent = DurableAgent(
name="Frodo",
role="Hobbit",
goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
instructions=[
"Speak like Frodo, with humility, determination, and a growing sense of resolve.",
"Endure hardships and temptations, staying true to the mission even when faced with doubt.",
"Seek guidance and trust allies, but bear the ultimate burden alone when necessary.",
"Move carefully through enemy-infested lands, avoiding unnecessary risks.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
)
await hobbit_agent.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,29 +0,0 @@
from dapr_agents import LLMOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
agentic_orchestrator = LLMOrchestrator(
name="Orchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
max_iterations=3,
).as_service(port=8004)
await agentic_orchestrator.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
ranger_service = DurableAgent(
name="Aragorn",
role="Ranger",
goal="Lead and protect the Fellowship, ensuring Frodo reaches his destination while uniting the Free Peoples against Sauron.",
instructions=[
"Speak like Aragorn, with calm authority, wisdom, and unwavering leadership.",
"Lead by example, inspiring courage and loyalty in allies.",
"Navigate wilderness with expert tracking and survival skills.",
"Master both swordplay and battlefield strategy, excelling in one-on-one combat and large-scale warfare.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
)
await ranger_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
wizard_service = DurableAgent(
name="Gandalf",
role="Wizard",
goal="Guide the Fellowship with wisdom and strategy, using magic and insight to ensure the downfall of Sauron.",
instructions=[
"Speak like Gandalf, with wisdom, patience, and a touch of mystery.",
"Provide strategic counsel, always considering the long-term consequences of actions.",
"Use magic sparingly, applying it when necessary to guide or protect.",
"Encourage allies to find strength within themselves rather than relying solely on your power.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
)
await wizard_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,29 +0,0 @@
from dapr_agents import RandomOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
random_workflow_service = RandomOrchestrator(
name="Orchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
max_iterations=3,
).as_service(port=8004)
await random_workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,29 +0,0 @@
from dapr_agents import RoundRobinOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
roundrobin_workflow_service = RoundRobinOrchestrator(
name="Orchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
max_iterations=3,
).as_service(port=8004)
await roundrobin_workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,91 +0,0 @@
# Multi-Agent LOTR: Durable Agents
This guide shows you how to set up and run an event-driven agentic workflow using Dapr Agents. By leveraging [Dapr Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) and FastAPI, `Dapr Agents` enables agents to collaborate dynamically in decentralized systems.
## Prerequisites
Before you start, ensure you have the following:
* [Dapr Agents environment set up](https://github.com/dapr/dapr-agents), including Python 3.8 or higher and the Dapr CLI.
* Docker installed and running.
* Basic understanding of microservices and event-driven architecture.
## Project Structure
The project is organized into multiple services, each representing an agent or a workflow. Here's the layout:
```
├── components/ # Dapr configuration files
│ ├── statestore.yaml # State store configuration
│ ├── pubsub.yaml # Pub/Sub configuration
├── services/ # Directory for services
│ ├── hobbit/ # Hobbit Agent Service
│ │ └── app.py # FastAPI app for Hobbit
│ ├── wizard/ # Wizard Agent Service
│ │ └── app.py # FastAPI app for Wizard
│ ├── elf/ # Elf Agent Service
│ │ └── app.py # FastAPI app for Elf
│ ├── workflow-roundrobin/ # Workflow Service
│ └── app.py # Orchestrator Workflow
├── dapr.yaml # Multi-App Run Template
```
## Running the Services
0. Set Up Environment Variables: Create an `.env` file to securely store your API keys and other sensitive information. For example:
```
OPENAI_API_KEY="your-api-key"
OPENAI_BASE_URL="https://api.openai.com/v1"
```
1. Multi-App Run: Use the dapr.yaml file to start all services simultaneously:
```bash
dapr run -f .
```
2. Verify console logs: Each service outputs logs to confirm successful initialization.
3. Verify Redis entries: Access the Redis Insight interface at `http://localhost:5540/`
## Starting the Workflow
Send an HTTP POST request to the workflow service to start the workflow. Use curl or any API client:
```bash
curl -i -X POST http://localhost:8004/start-workflow \
-H "Content-Type: application/json" \
-d '{"task": "Lets solve the riddle to open the Doors of Durin and enter Moria."}'
```
```
HTTP/1.1 200 OK
date: Thu, 05 Dec 2024 07:46:19 GMT
server: uvicorn
content-length: 104
content-type: application/json
{"message":"Workflow initiated successfully.","workflow_instance_id":"422ab3c3f58f4221a36b36c05fefb99b"}
```
The workflow will trigger agents in a round-robin sequence to process the message.
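If you prefer to script this step, here is a minimal sketch using Python's `requests` library (an illustrative choice; any HTTP client works), targeting the same `/start-workflow` endpoint on port 8004 shown above:
```python
#!/usr/bin/env python3
# Minimal sketch: start the workflow over HTTP instead of curl.
# Assumes the orchestrator service is running locally on port 8004,
# as configured in dapr.yaml above.
import requests

response = requests.post(
    "http://localhost:8004/start-workflow",
    json={"task": "Lets solve the riddle to open the Doors of Durin and enter Moria."},
    timeout=10,
)
response.raise_for_status()
# The service replies with a confirmation message and a workflow instance id.
print(response.json())
```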
## Monitoring Workflow Execution
1. Check console logs to trace activities in the workflow.
2. Verify Redis entries: Access the Redis Insight interface at `http://localhost:5540/`
3. As mentioned earlier, when we ran `dapr init`, Dapr initialized a `Zipkin` container instance, used for observability and tracing. Open `http://localhost:9411/zipkin/` in your browser to view traces > Find a Trace > Run Query.
4. Select the trace entry with multiple spans labeled `<workflow name>: /taskhubsidecarservice/startinstance`. When you open this entry, you'll see details about how each task or activity in the workflow was executed. If any task failed, the error will also be visible here.
5. Check console logs to validate that the workflow executed successfully.
### Reset Redis Database
1. Access the Redis Insight interface at `http://localhost:5540/`
2. In the search bar type `*` to select all items in the database.
3. Click on `Bulk Actions` > `Delete` > `Delete`
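Alternatively, a minimal sketch that scripts the reset with the `redis` Python package (assuming the default local Redis from `dapr init` at `localhost:6379` with no password, matching the component definitions above):
```python
# Minimal sketch: flush the local Redis database used by the quickstart.
# Assumes Redis listens on localhost:6379 with no password, as in the
# component definitions above. Requires `pip install redis`.
import redis

client = redis.Redis(host="localhost", port=6379)
client.flushall()  # Removes ALL keys; only run against the local dev instance.
print("Redis database cleared.")
```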

View File

@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: agenticworkflowstate
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -1,28 +0,0 @@
# https://docs.dapr.io/developing-applications/local-development/multi-app-dapr-run/multi-app-template/#template-properties
version: 1
common:
resourcesPath: ./components
logLevel: info
appLogDestination: console
daprdLogDestination: console
apps:
- appID: HobbitApp
appDirPath: ./services/hobbit/
appPort: 8001
command: ["python3", "app.py"]
- appID: WizardApp
appDirPath: ./services/wizard/
appPort: 8002
command: ["python3", "app.py"]
- appID: ElfApp
appDirPath: ./services/elf/
appPort: 8003
command: ["python3", "app.py"]
- appID: WorkflowApp
appDirPath: ./services/workflow-roundrobin/
command: ["python3", "app.py"]
appPort: 8004

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
elf_agent = DurableAgent(
role="Elf",
name="Legolas",
goal="Act as a scout, marksman, and protector, using keen senses and deadly accuracy to ensure the success of the journey.",
instructions=[
"Speak like Legolas, with grace, wisdom, and keen observation.",
"Be swift, silent, and precise, moving effortlessly across any terrain.",
"Use superior vision and heightened senses to scout ahead and detect threats.",
"Excel in ranged combat, delivering pinpoint arrow strikes from great distances.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
service_port=8003,
).as_service(8003)
await elf_agent.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
hobbit_agent = DurableAgent(
role="Hobbit",
name="Frodo",
goal="Carry the One Ring to Mount Doom, resisting its corruptive power while navigating danger and uncertainty.",
instructions=[
"Speak like Frodo, with humility, determination, and a growing sense of resolve.",
"Endure hardships and temptations, staying true to the mission even when faced with doubt.",
"Seek guidance and trust allies, but bear the ultimate burden alone when necessary.",
"Move carefully through enemy-infested lands, avoiding unnecessary risks.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
service_port=8001,
).as_service(8001)
await hobbit_agent.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,38 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Define Agent
wizard_agent = DurableAgent(
role="Wizard",
name="Gandalf",
goal="Guide the Fellowship with wisdom and strategy, using magic and insight to ensure the downfall of Sauron.",
instructions=[
"Speak like Gandalf, with wisdom, patience, and a touch of mystery.",
"Provide strategic counsel, always considering the long-term consequences of actions.",
"Use magic sparingly, applying it when necessary to guide or protect.",
"Encourage allies to find strength within themselves rather than relying solely on your power.",
"Respond concisely, accurately, and relevantly, ensuring clarity and strict alignment with the task.",
],
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
service_port=8002,
).as_service(8002)
await wizard_agent.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,29 +0,0 @@
from dapr_agents import LLMOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
agentic_orchestrator = LLMOrchestrator(
name="Orchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
max_iterations=25,
).as_service(port=8004)
await agentic_orchestrator.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,29 +0,0 @@
from dapr_agents import RandomOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
random_workflow_service = RandomOrchestrator(
name="Orchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
max_iterations=3,
).as_service(port=8004)
await random_workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,29 +0,0 @@
from dapr_agents import RoundRobinOrchestrator
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
roundrobin_workflow_service = RoundRobinOrchestrator(
name="Orchestrator",
message_bus_name="messagepubsub",
state_store_name="agenticworkflowstate",
state_key="workflow_state",
agents_registry_store_name="agentsregistrystore",
agents_registry_key="agents_registry",
max_iterations=3,
).as_service(port=8004)
await roundrobin_workflow_service.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,37 +0,0 @@
from dapr_agents import DurableAgent
from dotenv import load_dotenv
import asyncio
import logging
async def main():
try:
# Create the Weather Agent using those tools
weather_agent = DurableAgent(
role="Weather Assistant",
name="Stevie",
goal="Help humans get weather and location info using smart tools.",
instructions=[
"Respond clearly and helpfully to weather-related questions.",
"Use tools when appropriate to fetch or simulate weather data.",
"You may sometimes jump after answering the weather question.",
],
message_bus_name="messagepubsub",
state_store_name="workflowstatestore",
state_key="workflow_state",
agents_registry_store_name="agentstatestore",
agents_registry_key="agents_registry",
).as_service(port=8001)
# Start the FastAPI agent service
await weather_agent.start()
except Exception as e:
print(f"Error starting service: {e}")
if __name__ == "__main__":
load_dotenv()
logging.basicConfig(level=logging.INFO)
asyncio.run(main())

View File

@ -1,57 +0,0 @@
#!/usr/bin/env python3
import requests
import time
import sys
if __name__ == "__main__":
status_url = "http://localhost:8001/status"
healthy = False
for attempt in range(1, 11):
try:
print(f"Attempt {attempt}...")
response = requests.get(status_url, timeout=5)
if response.status_code == 200:
print("Workflow app is healthy!")
healthy = True
break
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
print("Waiting 5 seconds before next health check attempt...")
time.sleep(5)
if not healthy:
print("Workflow app is not healthy!")
sys.exit(1)
workflow_url = "http://localhost:8001/start-workflow"
task_payload = {"task": "What is the weather in New York?"}
for attempt in range(1, 11):
try:
print(f"Attempt {attempt}...")
response = requests.post(workflow_url, json=task_payload, timeout=5)
if response.status_code == 202:
print("Workflow started successfully!")
sys.exit(0)
else:
print(f"Received status code {response.status_code}: {response.text}")
except requests.exceptions.RequestException as e:
print(f"Request failed: {e}")
print("Waiting 1 second before next attempt...")
time.sleep(1)
print("Maximum attempts (10) reached without success.")
print("Failed to get successful response")
sys.exit(1)

View File

@ -1,12 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagepubsub
spec:
type: pubsub.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""

View File

@ -1,19 +1,40 @@
from dapr_agents.agents.agent import Agent
from dapr_agents.agents.durableagent import DurableAgent
from dapr_agents.llm.openai import (
OpenAIChatClient,
OpenAIAudioClient,
OpenAIEmbeddingClient,
)
from dapr_agents.executors import DockerCodeExecutor, LocalCodeExecutor
from dapr_agents.llm.elevenlabs import ElevenLabsSpeechClient
from dapr_agents.llm.huggingface import HFHubChatClient
from dapr_agents.llm.nvidia import NVIDIAChatClient, NVIDIAEmbeddingClient
from dapr_agents.llm.elevenlabs import ElevenLabsSpeechClient
from dapr_agents.llm.openai import (
OpenAIAudioClient,
OpenAIChatClient,
OpenAIEmbeddingClient,
)
from dapr_agents.tool import AgentTool, tool
from dapr_agents.workflow import (
WorkflowApp,
AgenticWorkflow,
LLMOrchestrator,
RandomOrchestrator,
RoundRobinOrchestrator,
WorkflowApp,
)
from dapr_agents.executors import LocalCodeExecutor, DockerCodeExecutor
__all__ = [
"Agent",
"DurableAgent",
"DockerCodeExecutor",
"LocalCodeExecutor",
"ElevenLabsSpeechClient",
"HFHubChatClient",
"NVIDIAChatClient",
"NVIDIAEmbeddingClient",
"OpenAIAudioClient",
"OpenAIChatClient",
"OpenAIEmbeddingClient",
"AgentTool",
"tool",
"AgenticWorkflow",
"LLMOrchestrator",
"RandomOrchestrator",
"RoundRobinOrchestrator",
"WorkflowApp",
]

View File

@ -1,3 +1,5 @@
from .base import AgentBase
from .agent.agent import Agent
from .base import AgentBase
from .durableagent.agent import DurableAgent
__all__ = ["AgentBase", "Agent", "DurableAgent"]

View File

@ -1 +1,3 @@
from .agent import Agent
__all__ = ["Agent"]

View File

@ -1,11 +1,16 @@
from dapr_agents.types import AgentError, AssistantMessage, ChatCompletion, ToolMessage
from dapr_agents.agents.base import AgentBase
from typing import List, Optional, Dict, Any, Union
from pydantic import Field, ConfigDict
import logging
import asyncio
from dapr_agents.types.message import UserMessage
from dapr_agents.types.message import ToolCall
import logging
from typing import Any, Dict, List, Optional, Union
from dapr_agents.agents.base import AgentBase
from dapr_agents.types import (
AgentError,
ToolCall,
ToolExecutionRecord,
ToolMessage,
UserMessage,
LLMChatResponse,
)
logger = logging.getLogger(__name__)
@ -16,235 +21,240 @@ class Agent(AgentBase):
It integrates tools and processes them based on user inputs and task orchestration.
"""
tool_history: List[ToolMessage] = Field(
default_factory=list, description="Executed tool calls during the conversation."
)
tool_choice: Optional[str] = Field(
default=None,
description="Strategy for selecting tools ('auto', 'required', 'none'). Defaults to 'auto' if tools are provided.",
)
model_config = ConfigDict(arbitrary_types_allowed=True)
def model_post_init(self, __context: Any) -> None:
"""
Initialize the agent's settings, such as tool choice and parent setup.
Sets the tool choice strategy based on provided tools.
"""
self.tool_choice = self.tool_choice or ("auto" if self.tools else None)
# Proceed with base model setup
super().model_post_init(__context)
async def run(self, input_data: Optional[Union[str, Dict[str, Any]]] = None) -> Any:
"""Run the agent with the given input with graceful shutdown support."""
"""
Runs the agent with the given input, supporting graceful shutdown.
Uses the _race helper to handle shutdown and cancellation cleanly.
Args:
input_data (Optional[Union[str, Dict[str, Any]]]): Input for the agent, can be a string or dict.
Returns:
Any: The result of agent execution, or None if shutdown is requested.
"""
try:
if self._shutdown_event.is_set():
print("Shutdown requested. Skipping agent execution.")
return None
task = asyncio.create_task(self._run_agent(input_data))
done, pending = await asyncio.wait(
[task, asyncio.create_task(self._shutdown_event.wait())],
return_when=asyncio.FIRST_COMPLETED,
)
for p in pending:
p.cancel()
if self._shutdown_event.is_set():
print("Shutdown requested during execution. Cancelling agent.")
task.cancel()
return None
if task in done:
return await task
return await self._race(self._run_agent(input_data))
except asyncio.CancelledError:
print("Agent execution was cancelled.")
logger.info("Agent execution was cancelled.")
return None
except Exception as e:
print(f"Error during agent execution: {e}")
logger.error(f"Error during agent execution: {e}")
raise
async def _race(self, coro) -> Optional[Any]:
"""
Runs the given coroutine and races it against the agent's shutdown event.
If shutdown is triggered, cancels the task and returns None.
Args:
coro: The coroutine to run (e.g., _run_agent(input_data)).
Returns:
Optional[Any]: The result of the coroutine, or None if shutdown is triggered.
"""
task = asyncio.create_task(coro)
shutdown_task = asyncio.create_task(self._shutdown_event.wait())
done, pending = await asyncio.wait(
[task, shutdown_task],
return_when=asyncio.FIRST_COMPLETED,
)
for p in pending:
p.cancel()
if self._shutdown_event.is_set():
logger.info("Shutdown requested during execution. Cancelling agent.")
task.cancel()
return None
return await task
async def _run_agent(
self, input_data: Optional[Union[str, Dict[str, Any]]] = None
) -> Any:
"""Internal method for running the agent logic (original ToolCallAgent run method)."""
"""
Internal method for running the agent logic.
Formats messages, updates memory, and drives the conversation loop.
Args:
input_data (Optional[Union[str, Dict[str, Any]]]): Input for the agent, can be a string or dict.
Returns:
Any: The result of the agent's conversation loop.
"""
logger.debug(
f"Agent run started with input: {input_data if input_data else 'Using memory context'}"
)
# Format messages; construct_messages already includes chat history.
messages = self.construct_messages(input_data or {})
# Construct messages using only input_data; chat history handled internally
messages: List[Dict[str, Any]] = self.construct_messages(input_data or {})
user_message = self.get_last_user_message(messages)
# Always work with a copy of the user message for safety
user_message_copy: Optional[Dict[str, Any]] = (
dict(user_message) if user_message else None
)
if input_data and user_message:
if input_data and user_message_copy:
# Add the new user message to memory only if input_data is provided and user message exists
user_msg = UserMessage(content=user_message.get("content", ""))
user_msg = UserMessage(content=user_message_copy.get("content", ""))
self.memory.add_message(user_msg)
# Always print the last user message for context, even if no input_data is provided
if user_message:
self.text_formatter.print_message(user_message)
if user_message_copy is not None:
# Ensure keys are str for mypy
self.text_formatter.print_message(
{str(k): v for k, v in user_message_copy.items()}
)
# Process conversation iterations
return await self.process_iterations(messages)
# Process conversation iterations and return the result
return await self.conversation(messages)
async def process_response(self, tool_calls: List[ToolCall]) -> None:
async def execute_tools(self, tool_calls: List[ToolCall]) -> List[ToolMessage]:
"""
Asynchronously executes tool calls and appends tool results to memory.
Executes a batch of tool calls in parallel, bounded by max_concurrent, using asyncio.gather.
Each tool call is executed asynchronously using run_tool, and results are appended to the persistent audit log (tool_history).
If any tool call fails, the error is propagated and other tasks continue unless you set return_exceptions=True.
Args:
tool_calls (List[ToolCall]): Tool calls returned by the LLM.
tool_calls (List[ToolCall]): List of tool calls returned by the LLM to execute in this batch.
Note: Concurrency is bounded internally by a semaphore (max_concurrent = 10).
Returns:
List[ToolMessage]: Results for this batch of tool calls, in the same order as input.
Raises:
AgentError: If a tool execution fails.
AgentError: If any tool execution fails.
"""
for tool_call in tool_calls:
function_name = tool_call.function.name
tool_id = tool_call.id
function_args = (
tool_call.function.arguments_dict
) # Use the property to get dict
# Limiting concurrency to avoid overwhelming downstream systems
max_concurrent = 10
semaphore = asyncio.Semaphore(max_concurrent)
if not function_name:
logger.error(f"Tool call missing function name: {tool_call}")
continue
async def run_and_record(tool_call: ToolCall) -> ToolMessage:
"""
Executes a single tool call, respecting the concurrency limit.
Appends the result to the persistent audit log.
If the function name is missing, returns a ToolMessage with error status and raises AgentError.
"""
async with semaphore:
function_name = tool_call.function.name
tool_id = tool_call.id
function_args = tool_call.function.arguments_dict
try:
logger.info(f"Executing {function_name} with arguments {function_args}")
result = await self.tool_executor.run_tool(
function_name, **function_args
)
tool_message = ToolMessage(
tool_call_id=tool_id, name=function_name, content=str(result)
)
self.text_formatter.print_message(tool_message)
self.tool_history.append(tool_message)
except Exception as e:
logger.error(f"Error executing tool {function_name}: {e}")
raise AgentError(f"Error executing tool '{function_name}': {e}") from e
if not function_name:
error_msg = f"Tool call missing function name: {tool_call}"
logger.error(error_msg)
# Return a ToolExecutionRecord with error status and raise AgentError
tool_execution_record = ToolExecutionRecord(
tool_call_id="<missing>",
tool_name="<missing>",
tool_args={},
execution_result=error_msg,
)
self.tool_history.append(tool_execution_record)
raise AgentError(error_msg)
async def process_iterations(self, messages: List[Dict[str, Any]]) -> Any:
try:
logger.debug(
f"Executing {function_name} with arguments {function_args}"
)
result = await self.run_tool(function_name, **function_args)
result_str = str(result) if result is not None else ""
tool_message = ToolMessage(
tool_call_id=tool_id,
name=function_name,
content=result_str,
)
# Print the tool message for visibility
self.text_formatter.print_message(tool_message)
# Add tool message to memory
self.memory.add_message(tool_message)
# Append tool message to the persistent audit log
tool_execution_record = ToolExecutionRecord(
tool_call_id=tool_id,
tool_name=function_name,
tool_args=function_args,
execution_result=result_str,
)
self.tool_history.append(tool_execution_record)
return tool_message
except Exception as e:
logger.error(f"Error executing tool {function_name}: {e}")
raise AgentError(
f"Error executing tool '{function_name}': {e}"
) from e
# Run all tool calls concurrently, but bounded by max_concurrent
return await asyncio.gather(*(run_and_record(tc) for tc in tool_calls))
async def conversation(self, messages: List[Dict[str, Any]]) -> Any:
"""
Iteratively drives the agent conversation until a final answer or max iterations.
Drives the agent conversation iteratively until a final answer or max iterations is reached.
Handles tool calls, updates memory, and returns the final assistant message.
Tool results are localized per iteration; persistent audit log is kept for all tool executions.
Args:
messages (List[Dict[str, Any]]): Initial conversation messages.
Returns:
Any: The final assistant message.
Any: The final assistant message or None if max iterations reached.
Raises:
AgentError: On chat failure or tool issues.
"""
for iteration in range(self.max_iterations):
logger.info(f"Iteration {iteration + 1}/{self.max_iterations} started.")
# Create a copy of messages for this iteration
current_messages = messages.copy()
final_reply = None
for turn in range(1, self.max_iterations + 1):
logger.info(f"Iteration {turn}/{self.max_iterations} started.")
try:
response = self.llm.generate(
messages=current_messages,
# Generate response using the LLM
response: LLMChatResponse = self.llm.generate(
messages=messages,
tools=self.get_llm_tools(),
**(
{"tool_choice": self.tool_choice}
if self.tool_choice is not None
else {}
),
)
# Handle different response types
if isinstance(response, ChatCompletion):
response_message = response.get_message()
if response_message:
message_dict = {
"role": "assistant",
"content": response_message,
}
self.text_formatter.print_message(message_dict)
if response.get_reason() == "tool_calls":
tool_calls = response.get_tool_calls()
if tool_calls:
# Add the assistant message with tool calls to the conversation
if response_message:
# Extract content from response_message if it's a dict
if isinstance(response_message, dict):
content = response_message.get("content", "")
if content is None:
content = ""
tool_calls_data = response_message.get(
"tool_calls", []
)
else:
content = (
str(response_message)
if response_message is not None
else ""
)
tool_calls_data = []
message_dict = {
"role": "assistant",
"content": content,
"tool_calls": tool_calls_data,
}
messages.append(message_dict)
# Run tools and collect only the results for the current tool calls to prevent LLM errs.
# Context: https://github.com/dapr/dapr-agents/pull/139#discussion_r2176117456
tool_results = []
await self.process_response(tool_calls)
for tool_call in tool_calls:
# Find the corresponding ToolMessage in self.tool_history
tool_msg = next(
(
msg
for msg in self.tool_history
if msg.tool_call_id == tool_call.id
),
None,
)
if tool_msg:
tool_message_dict = {
"role": "tool",
"content": tool_msg.content or "",
"tool_call_id": tool_msg.tool_call_id,
}
tool_results.append(tool_message_dict)
messages.extend(tool_results)
# Continue to next iteration to let LLM process tool results
continue
else:
# Final response - add to memory and return
content = response.get_content()
if content:
self.memory.add_message(AssistantMessage(content=content))
self.tool_history.clear()
return content
# Get the first candidate from the response
response_message = response.get_message()
# Check if the response contains an assistant message
if response_message is None:
raise AgentError("LLM returned no assistant message")
else:
# Handle Dict or Iterator responses (for structured output or streaming)
logger.warning(
f"Received non-ChatCompletion response: {type(response)}"
)
if isinstance(response, dict):
return response.get("content", str(response))
else:
return str(response)
assistant = response_message
self.text_formatter.print_message(assistant)
self.memory.add_message(assistant)
# Handle tool calls response
if assistant is not None and assistant.has_tool_calls():
tool_calls = assistant.get_tool_calls()
if tool_calls:
messages.append(assistant.model_dump())
tool_msgs = await self.execute_tools(tool_calls)
messages.extend([tm.model_dump() for tm in tool_msgs])
if turn == self.max_iterations:
final_reply = assistant
logger.info("Reached max turns after tool calls; stopping.")
break
continue
# No tool calls => done
final_reply = assistant
break
except Exception as e:
logger.error(f"Error during chat generation: {e}")
logger.error(f"Error on turn {turn}: {e}")
raise AgentError(f"Failed during chat generation: {e}") from e
logger.info("Max iterations reached. Agent has stopped.")
return None
# Post-loop
if final_reply is None:
logger.warning("No reply generated; hitting max iterations.")
return None
logger.info(f"Agent conversation completed after {turn} turns.")
return final_reply
async def run_tool(self, tool_name: str, *args, **kwargs) -> Any:
"""
Executes a registered tool by name, automatically handling sync or async tools.
Executes a single registered tool by name, handling both sync and async tools.
Used for atomic tool execution, either directly or as part of a batch in execute_tools.
Args:
tool_name (str): Name of the tool to run.
*args: Positional arguments passed to the tool.
**kwargs: Keyword arguments passed to the tool.
*args: Positional arguments for the tool.
**kwargs: Keyword arguments for the tool.
Returns:
Any: Result from the tool execution.

View File

@ -3,17 +3,18 @@ from dapr_agents.memory import (
ConversationListMemory,
ConversationVectorMemory,
)
from dapr_agents.storage import VectorStoreBase
from dapr_agents.agents.utils.text_printer import ColorTextFormatter
from dapr_agents.types import (
MessageContent,
MessagePlaceHolder,
BaseMessage,
)
from dapr_agents.types import MessagePlaceHolder, BaseMessage, ToolExecutionRecord
from dapr_agents.tool.executor import AgentToolExecutor
from dapr_agents.prompt.base import PromptTemplateBase
from dapr_agents.prompt import ChatPromptTemplate
from dapr_agents.tool.base import AgentTool
import re
from datetime import datetime
import logging
import asyncio
import signal
from abc import ABC, abstractmethod
from typing import (
List,
Optional,
@ -22,25 +23,13 @@ from typing import (
Union,
Callable,
Literal,
ClassVar,
)
from pydantic import BaseModel, Field, PrivateAttr, model_validator, ConfigDict
from abc import ABC, abstractmethod
from datetime import datetime
import logging
import asyncio
import signal
from dapr_agents.llm.openai import OpenAIChatClient
from dapr_agents.llm.huggingface import HFHubChatClient
from dapr_agents.llm.nvidia import NVIDIAChatClient
from dapr_agents.llm.dapr import DaprChatClient
from dapr_agents.llm.chat import ChatClientBase
logger = logging.getLogger(__name__)
# Type alias for all concrete chat client implementations
ChatClientType = Union[
OpenAIChatClient, HFHubChatClient, NVIDIAChatClient, DaprChatClient
]
class AgentBase(BaseModel, ABC):
"""
@ -76,8 +65,8 @@ class AgentBase(BaseModel, ABC):
default=None,
description="A custom system prompt, overriding name, role, goal, and instructions.",
)
llm: ChatClientType = Field(
default_factory=OpenAIChatClient,
llm: Optional[ChatClientBase] = Field(
default=None,
description="Language model client for generating responses.",
)
prompt_template: Optional[PromptTemplateBase] = Field(
@ -88,15 +77,17 @@ class AgentBase(BaseModel, ABC):
default_factory=list,
description="Tools available for the agent to assist with tasks.",
)
tool_choice: Optional[str] = Field(
default=None,
description="Strategy for selecting tools ('auto', 'required', 'none'). Defaults to 'auto' if tools are provided.",
)
tool_history: List[ToolExecutionRecord] = Field(
default_factory=list, description="Executed tool calls during the conversation."
)
# TODO: add a forceFinalAnswer field in case maxIterations is near/reached. Or do we have a conclusion baked in by default? Do we want this to derive a conclusion by default?
max_iterations: int = Field(
default=10, description="Max iterations for conversation cycles."
)
# NOTE for reviewer: am I missing anything else here for vector stores?
vector_store: Optional[VectorStoreBase] = Field(
default=None,
description="Vector store to enable semantic search and retrieval.",
)
memory: MemoryBase = Field(
default_factory=ConversationListMemory,
description="Handles conversation history and context storage.",
@ -109,6 +100,24 @@ class AgentBase(BaseModel, ABC):
description="The format used for rendering the prompt template.",
)
DEFAULT_SYSTEM_PROMPT: ClassVar[str]
"""Default f-string template; placeholders will be swapped to Jinja if needed."""
DEFAULT_SYSTEM_PROMPT = """
# Today's date is: {date}
## Name
Your name is {name}.
## Role
Your role is {role}.
## Goal
{goal}.
## Instructions
{instructions}.
""".strip()
_tool_executor: AgentToolExecutor = PrivateAttr()
_text_formatter: ColorTextFormatter = PrivateAttr(
default_factory=ColorTextFormatter
@ -126,48 +135,42 @@ class AgentBase(BaseModel, ABC):
@model_validator(mode="after")
def validate_llm(cls, values):
"""Validate that LLM is properly configured."""
if hasattr(values, "llm") and values.llm:
try:
# Validate LLM is properly configured by accessing it as this is required to be set.
_ = values.llm
except Exception as e:
raise ValueError(f"Failed to initialize LLM: {e}") from e
if hasattr(values, "llm"):
if values.llm is None:
logger.warning("LLM client is None, some functionality may be limited.")
else:
try:
# Validate LLM is properly configured by accessing it as this is required to be set.
_ = values.llm
except Exception as e:
logger.error(f"Failed to initialize LLM: {e}")
values.llm = None
return values
def model_post_init(self, __context: Any) -> None:
"""
Sets up the prompt template based on system_prompt or attributes like name, role, goal, and instructions.
Confirms the source of prompt_template post-initialization.
Post-initialization hook for AgentBase.
Sets up the prompt template using a centralized helper, ensuring agent and LLM client reference the same template.
Also validates and pre-fills the template, and sets up graceful shutdown.
Args:
__context (Any): Context passed from Pydantic's model initialization.
"""
self._tool_executor = AgentToolExecutor(tools=self.tools)
if self.prompt_template and self.llm.prompt_template:
raise ValueError(
"Conflicting prompt templates: both an agent prompt_template and an LLM prompt_template are provided. "
"Please set only one or ensure synchronization between the two."
)
# Set tool_choice to 'auto' if tools are provided, otherwise None
if self.tool_choice is None:
self.tool_choice = "auto" if self.tools else None
if self.prompt_template:
logger.info(
"Using the provided agent prompt_template. Skipping system prompt construction."
)
self.llm.prompt_template = self.prompt_template
# Initialize LLM if not provided
if self.llm is None:
self.llm = self._create_default_llm()
# If the LLM client already has a prompt template, sync it and prefill/validate as needed
elif self.llm.prompt_template:
logger.info("Using existing LLM prompt_template. Synchronizing with agent.")
self.prompt_template = self.llm.prompt_template
else:
if not self.system_prompt:
logger.info("Constructing system_prompt from agent attributes.")
self.system_prompt = self.construct_system_prompt()
logger.info("Using system_prompt to create the prompt template.")
self.prompt_template = self.construct_prompt_template()
if not self.llm.prompt_template:
# Centralize prompt template selection logic
self.prompt_template = self._initialize_prompt_template()
# Ensure LLM client and agent both reference the same template
if self.llm is not None:
self.llm.prompt_template = self.prompt_template
self._validate_prompt_template()
@@ -179,6 +182,84 @@ class AgentBase(BaseModel, ABC):
super().model_post_init(__context)
def _create_default_llm(self) -> Optional[ChatClientBase]:
"""
Creates a default LLM client when none is provided.
Returns None if the default LLM cannot be created due to missing configuration.
"""
try:
from dapr_agents.llm.openai import OpenAIChatClient
return OpenAIChatClient()
except Exception as e:
logger.warning(
f"Failed to create default OpenAI client: {e}. LLM will be None."
)
return None
def _initialize_prompt_template(self) -> PromptTemplateBase:
"""
Determines which prompt template to use for the agent:
1. If the user supplied one, use it.
2. Else if the LLM client already has one, adopt that.
3. Else generate a system_prompt and ChatPromptTemplate from agent attributes.
Returns:
PromptTemplateBase: The selected or constructed prompt template.
"""
# 1) User provided one?
if self.prompt_template:
logger.debug("🛠️ Using provided agent.prompt_template")
return self.prompt_template
# 2) LLM client has one?
if (
self.llm
and hasattr(self.llm, "prompt_template")
and self.llm.prompt_template
):
logger.debug("🔄 Syncing from llm.prompt_template")
return self.llm.prompt_template
# 3) Build from system_prompt or attributes
if not self.system_prompt:
logger.debug("⚙️ Constructing system_prompt from attributes")
self.system_prompt = self.construct_system_prompt()
logger.debug("⚙️ Building ChatPromptTemplate from system_prompt")
return self.construct_prompt_template()
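The selection order is a simple short-circuit; a minimal sketch with hypothetical names:
def pick_prompt_template(agent_template, llm_template, build_from_attrs):
    # Precedence: 1) agent-supplied, 2) LLM client's, 3) built from attributes.
    return agent_template or llm_template or build_from_attrs()


pick_prompt_template(None, None, lambda: "constructed-from-system-prompt")
# -> 'constructed-from-system-prompt'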
def _collect_template_attrs(self) -> tuple[Dict[str, str], List[str]]:
"""
Collect agent attributes for prompt template pre-filling and warn about unused ones.
- valid: attributes set on self and declared in prompt_template.input_variables.
- unused: attributes set on self but not present in the template.
Returns:
(valid, unused): Tuple of dict of valid attrs and list of unused attr names.
"""
attrs = ["name", "role", "goal", "instructions"]
valid: Dict[str, str] = {}
unused: List[str] = []
if not self.prompt_template or not hasattr(
self.prompt_template, "input_variables"
):
return valid, attrs # No template, all attrs are unused
original = set(self.prompt_template.input_variables)
for attr in attrs:
val = getattr(self, attr, None)
if val is None:
continue
if attr in original:
# Only join instructions if it's a list and the template expects it
if attr == "instructions" and isinstance(val, list):
valid[attr] = "\n".join(val)
else:
valid[attr] = str(val)
else:
unused.append(attr)
return valid, unused
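A standalone mirror of this split over plain dict/set inputs (function name hypothetical):
def collect_template_attrs(agent_attrs, template_vars):
    # Split set attributes into template-declared vs. unused; join
    # list-valued instructions with newlines, stringify everything else.
    valid, unused = {}, []
    for attr, val in agent_attrs.items():
        if val is None:
            continue
        if attr in template_vars:
            valid[attr] = "\n".join(val) if isinstance(val, list) else str(val)
        else:
            unused.append(attr)
    return valid, unused


collect_template_attrs(
    {"name": "Zoe", "role": "researcher", "goal": None, "instructions": ["be brief"]},
    {"name", "instructions"},
)
# -> ({'name': 'Zoe', 'instructions': 'be brief'}, ['role'])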
def _setup_signal_handlers(self):
"""Set up signal handlers for graceful shutdown"""
try:
@@ -195,52 +276,24 @@ class AgentBase(BaseModel, ABC):
def _validate_prompt_template(self) -> None:
"""
Validates that the prompt template is properly constructed and attributes are handled correctly.
This runs after prompt template setup to ensure all attributes are properly handled.
Ensures chat_history is always available, injects any declared attributes,
and warns if the user set attributes that aren't in the template.
"""
if not self.prompt_template:
return
input_variables = ["chat_history"] # Always include chat_history
if self.name:
input_variables.append("name")
if self.role:
input_variables.append("role")
if self.goal:
input_variables.append("goal")
if self.instructions:
input_variables.append("instructions")
# Always make chat_history available
vars_set = set(self.prompt_template.input_variables) | {"chat_history"}
self.prompt_template.input_variables = list(
set(self.prompt_template.input_variables + input_variables)
)
# Inject any attributes the template declares
valid_attrs, unused_attrs = self._collect_template_attrs()
vars_set |= set(valid_attrs.keys())
self.prompt_template.input_variables = list(vars_set)
# Collect attributes set by user
set_attributes = {
"name": self.name,
"role": self.role,
"goal": self.goal,
"instructions": self.instructions,
}
# Use Pydantic's model_fields_set to detect if attributes were user-set
user_set_attributes = {
attr for attr in set_attributes if attr in self.model_fields_set
}
# Check if attributes are in input_variables
ignored_attributes = [
attr
for attr in set_attributes
if attr not in self.prompt_template.input_variables
and set_attributes[attr] is not None
and attr in user_set_attributes
]
if ignored_attributes:
if unused_attrs:
logger.warning(
f"The following agent attributes were explicitly set but are not in the prompt template: {', '.join(ignored_attributes)}. "
"These will be handled during initialization."
"Agent attributes set but not referenced in prompt_template: "
f"{', '.join(unused_attrs)}. Consider adding them to input_variables."
)
@property
@@ -253,51 +306,43 @@ class AgentBase(BaseModel, ABC):
"""Returns the text formatter for the agent."""
return self._text_formatter
@property
def chat_history(self, task: Optional[str] = None) -> List[MessageContent]:
def get_chat_history(self, task: Optional[str] = None) -> List[Dict[str, Any]]:
"""
Retrieves the chat history from memory based on the memory type.
Retrieves the chat history from memory as a list of dictionaries.
Args:
task (Optional[str]): The task or query provided by the user.
task (Optional[str]): The task or query provided by the user (used for vector search).
Returns:
List[MessageContent]: The chat history.
List[Dict[str, Any]]: The chat history as dictionaries.
"""
if (
isinstance(self.memory, ConversationVectorMemory)
and task
and self.vector_store
):
if isinstance(self.memory, ConversationVectorMemory) and task:
if (
hasattr(self.vector_store, "embedding_function")
and self.vector_store.embedding_function
and hasattr(self.vector_store.embedding_function, "embed_documents")
):
query_embeddings = self.vector_store.embedding_function.embed_documents(
[task]
hasattr(self.memory.vector_store, "embedding_function")
and self.memory.vector_store.embedding_function
and hasattr(
self.memory.vector_store.embedding_function, "embed"
)
return self.memory.get_messages(
query_embeddings=query_embeddings
) # returns List[MessageContent]
):
query_embeddings = self.memory.vector_store.embedding_function.embed(
task
)
messages = self.memory.get_messages(query_embeddings=query_embeddings)
else:
return self.memory.get_messages() # returns List[MessageContent]
messages = self.memory.get_messages()
else:
messages = (
self.memory.get_messages()
) # returns List[BaseMessage] or List[Dict]
converted_messages: List[MessageContent] = []
for msg in messages:
if isinstance(msg, MessageContent):
converted_messages.append(msg)
elif isinstance(msg, BaseMessage):
converted_messages.append(MessageContent(**msg.model_dump()))
elif isinstance(msg, dict):
converted_messages.append(MessageContent(**msg))
else:
# Fallback: try to convert to dict and then to MessageContent
converted_messages.append(MessageContent(**dict(msg)))
return converted_messages
messages = self.memory.get_messages()
return messages
@property
def chat_history(self) -> List[Dict[str, Any]]:
"""
Returns the full chat history as a list of dictionaries.
Returns:
List[Dict[str, Any]]: The chat history.
"""
return self.get_chat_history()
@abstractmethod
def run(self, input_data: Union[str, Dict[str, Any]]) -> Any:
@@ -311,88 +356,55 @@ class AgentBase(BaseModel, ABC):
def prefill_agent_attributes(self) -> None:
"""
Pre-fill prompt template with agent attributes if specified in `input_variables`.
Logs any agent attributes set but not used by the template.
Pre-fill prompt_template with agent attributes if specified in `input_variables`.
Uses _collect_template_attrs to avoid duplicate logic and ensure consistency.
"""
if not self.prompt_template:
return
prefill_data = {}
if "name" in self.prompt_template.input_variables and self.name:
prefill_data["name"] = self.name
# Re-use our helper to split valid vs. unused
valid_attrs, unused_attrs = self._collect_template_attrs()
if "role" in self.prompt_template.input_variables:
prefill_data["role"] = self.role or ""
if unused_attrs:
logger.warning(
"Agent attributes set but not used in prompt_template: "
f"{', '.join(unused_attrs)}. Consider adding them to input_variables."
)
if "goal" in self.prompt_template.input_variables:
prefill_data["goal"] = self.goal or ""
if "instructions" in self.prompt_template.input_variables and self.instructions:
prefill_data["instructions"] = "\n".join(self.instructions)
# Collect attributes set but not in input_variables for informational logging
set_attributes = {
"name": self.name,
"role": self.role,
"goal": self.goal,
"instructions": self.instructions,
}
# Use Pydantic's model_fields_set to detect if attributes were user-set
user_set_attributes = {
attr for attr in set_attributes if attr in self.model_fields_set
}
ignored_attributes = [
attr
for attr in set_attributes
if attr not in self.prompt_template.input_variables
and set_attributes[attr] is not None
and attr in user_set_attributes
]
# Apply pre-filled data only for attributes that are in input_variables
if prefill_data:
if valid_attrs:
self.prompt_template = self.prompt_template.pre_fill_variables(
**prefill_data
)
logger.info(
f"Pre-filled prompt template with attributes: {list(prefill_data.keys())}"
)
elif ignored_attributes:
raise ValueError(
f"The following agent attributes were explicitly set by the user but are not considered by the prompt template: {', '.join(ignored_attributes)}. "
"Please ensure that these attributes are included in the prompt template's input variables if they are needed."
**valid_attrs
)
logger.debug(f"Pre-filled template with: {list(valid_attrs.keys())}")
else:
logger.info(
"No agent attributes were pre-filled, as the template did not require any."
)
logger.debug("No prompt_template variables needed pre-filling.")
def construct_system_prompt(self) -> str:
"""
Constructs a system prompt with agent attributes like `name`, `role`, `goal`, and `instructions`.
Sets default values for `role` and `goal` if not provided.
Build the system prompt for the agent using a single template string.
- Fills in the current date.
- Leaves placeholders for name, role, goal, and instructions as variables (instructions only if set).
- Converts placeholders to Jinja2 syntax if requested.
Returns:
str: A system prompt template string.
str: The formatted system prompt string.
"""
# Initialize prompt parts with the current date as the first entry
prompt_parts = [f"# Today's date is: {datetime.now().strftime('%B %d, %Y')}"]
# Only fill in the date; leave all other placeholders as variables
instructions_placeholder = "{instructions}" if self.instructions else ""
filled = self.DEFAULT_SYSTEM_PROMPT.format(
date=datetime.now().strftime("%B %d, %Y"),
name="{name}",
role="{role}",
goal="{goal}",
instructions=instructions_placeholder,
)
# Append name if provided
if self.name:
prompt_parts.append("## Name\nYour name is {{name}}.")
# Append role and goal with default values if not set
prompt_parts.append("## Role\nYour role is {{role}}.")
prompt_parts.append("## Goal\n{{goal}}.")
# Append instructions if provided
if self.instructions:
prompt_parts.append("## Instructions\n{{instructions}}")
return "\n\n".join(prompt_parts)
# If using Jinja2, swap braces for all placeholders
if self.template_format == "jinja2":
# Replace every {foo} with {{foo}}
return re.sub(r"\{(\w+)\}", r"{{\1}}", filled)
else:
return filled
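For example, the closing re.sub swaps f-string braces for Jinja2 ones while leaving already-filled text alone:
import re

filled = "## Name\nYour name is {name}."
print(re.sub(r"\{(\w+)\}", r"{{\1}}", filled))
# ## Name
# Your name is {{name}}.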
def construct_prompt_template(self) -> ChatPromptTemplate:
"""
@@ -418,7 +430,7 @@ class AgentBase(BaseModel, ABC):
self, input_data: Union[str, Dict[str, Any]]
) -> List[Dict[str, Any]]:
"""
Constructs and formats initial messages based on input type, pre-filling chat history as needed.
Constructs and formats initial messages based on input type, passing chat_history as a list, without mutating self.prompt_template.
Args:
input_data (Union[str, Dict[str, Any]]): User input, either as a string or dictionary.
@@ -431,15 +443,12 @@ class AgentBase(BaseModel, ABC):
"Prompt template must be initialized before constructing messages."
)
# Pre-fill chat history in the prompt template
chat_history = self.memory.get_messages()
# Convert List[BaseMessage] to string for the prompt template
chat_history_str = "\n".join([str(msg) for msg in chat_history])
self.pre_fill_prompt_template(chat_history=chat_history_str)
chat_history = self.get_chat_history() # List[Dict[str, Any]]
# Handle string input by adding a user message
if isinstance(input_data, str):
formatted_messages = self.prompt_template.format_prompt()
formatted_messages = self.prompt_template.format_prompt(
chat_history=chat_history
)
if isinstance(formatted_messages, list):
user_message = {"role": "user", "content": input_data}
return formatted_messages + [user_message]
@@ -449,10 +458,11 @@ class AgentBase(BaseModel, ABC):
{"role": "user", "content": input_data},
]
# Handle dictionary input as dynamic variables for the template
elif isinstance(input_data, dict):
# Pass the dictionary directly, assuming it contains keys expected by the prompt template
formatted_messages = self.prompt_template.format_prompt(**input_data)
input_vars = dict(input_data)
if "chat_history" not in input_vars:
input_vars["chat_history"] = chat_history
formatted_messages = self.prompt_template.format_prompt(**input_vars)
if isinstance(formatted_messages, list):
return formatted_messages
else:
@@ -465,21 +475,18 @@ class AgentBase(BaseModel, ABC):
"""Clears all messages stored in the agent's memory."""
self.memory.reset_memory()
def get_last_message(self) -> Optional[MessageContent]:
def get_last_message(self) -> Optional[Dict[str, Any]]:
"""
Retrieves the last message from the chat history.
Returns:
Optional[MessageContent]: The last message in the history, or None if none exist.
Optional[Dict[str, Any]]: The last message in the history as a dictionary, or None if none exist.
"""
chat_history = self.chat_history
chat_history = self.get_chat_history()
if chat_history:
last_msg = chat_history[-1]
# Ensure we return MessageContent type
if isinstance(last_msg, BaseMessage) and not isinstance(
last_msg, MessageContent
):
return MessageContent(**last_msg.model_dump())
if isinstance(last_msg, BaseMessage):
return last_msg.model_dump()
return last_msg
return None
@@ -487,20 +494,39 @@ class AgentBase(BaseModel, ABC):
self, messages: List[Dict[str, Any]]
) -> Optional[Dict[str, Any]]:
"""
Retrieves the last user message in a list of messages.
Retrieves the last user message in a list of messages, returning a copy with trimmed content.
Args:
messages (List[Dict[str, Any]]): List of formatted messages to search.
Returns:
Optional[Dict[str, Any]]: The last user message with trimmed content, or None if no user message exists.
Optional[Dict[str, Any]]: The last user message (copy) with trimmed content, or None if no user message exists.
"""
# Iterate in reverse to find the most recent 'user' role message
for message in reversed(messages):
if message.get("role") == "user":
# Trim the content of the user message
message["content"] = message["content"].strip()
return message
# Return a copy with trimmed content
msg_copy = dict(message)
msg_copy["content"] = msg_copy["content"].strip()
return msg_copy
return None
def get_last_message_if_user(
self, messages: List[Dict[str, Any]]
) -> Optional[Dict[str, Any]]:
"""
Returns the last message only if it is a user message; otherwise, returns None.
Args:
messages (List[Dict[str, Any]]): List of formatted messages to check.
Returns:
Optional[Dict[str, Any]]: The last message (copy) with trimmed content if it is a user message, else None.
"""
if messages and messages[-1].get("role") == "user":
msg_copy = dict(messages[-1])
msg_copy["content"] = msg_copy["content"].strip()
return msg_copy
return None
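A standalone mirror of the copy-and-trim behavior (free function, hypothetical):
def last_message_if_user(messages):
    # Return a trimmed copy of the final message only when it is a user turn;
    # never mutate the caller's list.
    if messages and messages[-1].get("role") == "user":
        msg = dict(messages[-1])
        msg["content"] = msg["content"].strip()
        return msg
    return None


msgs = [{"role": "assistant", "content": "hi"}, {"role": "user", "content": "  ping  "}]
last_message_if_user(msgs)  # {'role': 'user', 'content': 'ping'}
msgs[1]["content"]          # '  ping  ' (original untouched)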
def get_llm_tools(self) -> List[Union[AgentTool, Dict[str, Any]]]:

View File

@@ -1 +1,3 @@
from .agent import DurableAgent
__all__ = ["DurableAgent"]

View File

@@ -1,31 +1,33 @@
import json
import logging
from datetime import datetime
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Union
from dapr_agents.agents.base import AgentBase
from dapr_agents.workflow.agentic import AgenticWorkflow
from pydantic import Field, model_validator
from dapr.ext.workflow import DaprWorkflowContext # type: ignore
from pydantic import Field, model_validator
from dapr_agents.agents.base import AgentBase
from dapr_agents.types import (
AgentError,
AssistantMessage,
LLMChatResponse,
ToolExecutionRecord,
ToolMessage,
UserMessage,
)
from dapr_agents.workflow.agentic import AgenticWorkflow
from dapr_agents.workflow.decorators import message_router, task, workflow
from .schemas import (
AgentTaskResponse,
BroadcastMessage,
TriggerAction,
)
from .state import (
AssistantWorkflowEntry,
AssistantWorkflowMessage,
AssistantWorkflowState,
AssistantWorkflowToolMessage,
DurableAgentMessage,
DurableAgentWorkflowEntry,
DurableAgentWorkflowState,
)
from dapr_agents.workflow.decorators import task, workflow
from dapr_agents.workflow.messaging.decorator import message_router
logger = logging.getLogger(__name__)
@@ -42,19 +44,14 @@ class DurableAgent(AgenticWorkflow, AgentBase):
and refining outputs through iterative feedback loops.
"""
tool_history: List[ToolMessage] = Field(
default_factory=list, description="Executed tool calls during the conversation."
)
tool_choice: Optional[str] = Field(
default=None,
description="Strategy for selecting tools ('auto', 'required', 'none'). Defaults to 'auto' if tools are provided.",
)
agent_topic_name: Optional[str] = Field(
None,
default=None,
description="The topic name dedicated to this specific agent, derived from the agent's name if not provided.",
)
_agent_metadata: Optional[Dict[str, Any]] = None
agent_metadata: Optional[Dict[str, Any]] = Field(
default=None,
description="Metadata about the agent, including name, role, goal, instructions, and topic name.",
)
@model_validator(mode="before")
def set_agent_and_topic_name(cls, values: dict):
@@ -70,7 +67,7 @@ class DurableAgent(AgenticWorkflow, AgentBase):
def model_post_init(self, __context: Any) -> None:
"""Initializes the workflow with agentic execution capabilities."""
self.state = AssistantWorkflowState()
self.state = DurableAgentWorkflowState().model_dump()
# Call AgenticWorkflow's model_post_init first to initialize state store and other dependencies
super().model_post_init(__context)
@@ -78,7 +75,6 @@ class DurableAgent(AgenticWorkflow, AgentBase):
# Name of main Workflow
# TODO: can this be configurable or dynamic? Would that make sense?
self._workflow_name = "ToolCallingWorkflow"
self.tool_choice = self.tool_choice or ("auto" if self.tools else None)
# Register the agentic system
self._agent_metadata = {
@@ -90,26 +86,32 @@ class DurableAgent(AgenticWorkflow, AgentBase):
"pubsub_name": self.message_bus_name,
"orchestrator": False,
}
self.register_agentic_system()
async def run(self, input_data: Optional[Union[str, Dict[str, Any]]] = None) -> Any:
self.register_agentic_system()
if not self.wf_runtime_is_running:
self.start_runtime()
async def run(self, input_data: Union[str, Dict[str, Any]]) -> Any:
"""
Run the durable agent with the given input.
TODO: For DurableAgent, this method should trigger the workflow execution maybe..?
Fire up the workflow, wait for it to complete, then return the final serialized_output.
Args:
input_data: The input data for the agent to process.
input_data (Union[str, Dict[str, Any]]): The input for the workflow. Can be a string (task) or a dict.
Returns:
The result of the workflow execution.
Any: The final output from the workflow execution.
"""
# TODO: For DurableAgent, the run method should trigger the workflow
logger.info(
f"DurableAgent {self.name} run method called with input: {input_data}"
)
# Return a message indicating that starting a durable agent via run() is not yet supported.
return f"DurableAgent {self.name} is designed to run as a workflow service asynchronously. Use .as_service() and/or .start() instead for now. The workflow endpoints can also be used to interact with this agent."
# Prepare input payload for workflow
if isinstance(input_data, dict):
input_payload = input_data
else:
input_payload = {"task": input_data}
# Kick off the workflow and block until it finishes:
return await self.run_and_monitor_workflow_async(
workflow=self._workflow_name,
input=input_payload,
)
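A hedged usage sketch of the new run() entrypoint; construction arguments are elided, the top-level import is assumed to re-export DurableAgent, and a running Dapr sidecar with the configured pub/sub and state store is required:
import asyncio

from dapr_agents import DurableAgent  # assumed re-export


async def main():
    # Real agents also need an LLM client, tools, and messaging configuration.
    agent = DurableAgent(name="WeatherAgent", role="Forecaster")
    # A string is wrapped as {"task": ...}; a dict is passed through unchanged.
    result = await agent.run("What should I pack for Paris this weekend?")
    print(result)  # final serialized_output of ToolCallingWorkflow


asyncio.run(main())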
@message_router
@workflow(name="ToolCallingWorkflow")
@@ -118,375 +120,326 @@ class DurableAgent(AgenticWorkflow, AgentBase):
Executes a tool-calling workflow, determining the task source (either an agent or an external user).
This uses Dapr Workflows to run the agent in a ReAct-style loop until it generates a final answer or reaches max iterations,
calling tools as needed.
Args:
ctx (DaprWorkflowContext): The workflow context for the current execution, providing state and control methods.
message (TriggerAction): The trigger message containing the task, iteration, and metadata for workflow execution.
Returns:
Dict[str, Any]: The final response message when the workflow completes, or None if continuing to the next iteration.
"""
# Step 0: Retrieve task and iteration input
# Handle both TriggerAction objects and dictionaries
# Step 1: pull out task + metadata
if isinstance(message, dict):
task = message.get("task")
iteration = message.get("iteration", 0)
workflow_instance_id = message.get("workflow_instance_id")
task = message.get("task", None)
source_workflow_instance_id = message.get("workflow_instance_id")
metadata = message.get("_message_metadata", {}) or {}
else:
task = message.task
iteration = message.iteration or 0
workflow_instance_id = message.workflow_instance_id
task = getattr(message, "task", None)
source_workflow_instance_id = getattr(message, "workflow_instance_id", None)
metadata = getattr(message, "_message_metadata", {}) or {}
instance_id = ctx.instance_id
source = metadata.get("source")
final_message: Optional[Dict[str, Any]] = None
if not ctx.is_replaying:
logger.info(
f"Workflow iteration {iteration + 1} started (Instance ID: {instance_id})."
)
logger.debug(f"Initial message from {source} -> {self.name}")
# Step 1: Initialize instance entry on first iteration
if iteration == 0:
# Handle metadata extraction for both TriggerAction objects and dictionaries
if isinstance(message, dict):
metadata = message.get("_message_metadata", {})
else:
metadata = getattr(message, "_message_metadata", {})
# Ensure "instances" key exists
if isinstance(self.state, dict) and "instances" not in self.state:
self.state["instances"] = {}
# Extract workflow metadata with proper defaults
source = metadata.get("source") if isinstance(metadata, dict) else None
source_workflow_instance_id = workflow_instance_id
# Create a new workflow entry
workflow_entry = AssistantWorkflowEntry(
input=task or "Triggered without input.",
source=source,
source_workflow_instance_id=source_workflow_instance_id,
output="", # Required
end_time=None, # Required
)
# Store in state, converting to JSON only if necessary
if isinstance(self.state, dict):
self.state["instances"][instance_id] = workflow_entry.model_dump(
mode="json"
)
if not ctx.is_replaying:
logger.info(f"Initial message from {source} -> {self.name}")
# Step 2: Retrieve workflow entry for this instance
if isinstance(self.state, dict):
workflow_entry = self.state["instances"].get(instance_id, {})
# Handle dictionary format
if isinstance(workflow_entry, dict):
source = workflow_entry.get("source")
source_workflow_instance_id = workflow_entry.get(
"source_workflow_instance_id"
)
else:
# Handle object format
source = workflow_entry.source
source_workflow_instance_id = workflow_entry.source_workflow_instance_id
else:
source = None
source_workflow_instance_id = None
# Step 3: Generate Response
response = yield ctx.call_activity(
self.generate_response, input={"instance_id": instance_id, "task": task}
)
response_message = yield ctx.call_activity(
self.get_response_message, input={"response": response}
)
# Step 4: Extract Finish Reason
finish_reason = yield ctx.call_activity(
self.get_finish_reason, input={"response": response}
)
# Step 5: Choose execution path based on LLM response
if finish_reason == "tool_calls":
if not ctx.is_replaying:
logger.info(
"Tool calls detected in LLM response, extracting and preparing for execution.."
)
# Retrieve the list of tool calls extracted from the LLM response
tool_calls = yield ctx.call_activity(
self.get_tool_calls, input={"response": response}
)
# Execute tool calls in parallel
if not ctx.is_replaying:
logger.info(f"Executing {len(tool_calls)} tool call(s)..")
parallel_tasks = [
ctx.call_activity(
self.execute_tool,
input={"instance_id": instance_id, "tool_call": tool_call},
)
for tool_call in tool_calls
]
yield self.when_all(parallel_tasks)
else:
if not ctx.is_replaying:
logger.info("Agent generating response without tool execution..")
# No Tool Calls → Clear tools
self.tool_history.clear()
# Step 6: Determine if Workflow Should Continue
next_iteration_count = iteration + 1
max_iterations_reached = next_iteration_count > self.max_iterations
if finish_reason == "stop" or max_iterations_reached:
# Determine the reason for stopping
if max_iterations_reached:
verdict = "max_iterations_reached"
try:
# Loop up to max_iterations
for turn in range(1, self.max_iterations + 1):
if not ctx.is_replaying:
logger.warning(
f"Workflow {instance_id} reached the max iteration limit ({self.max_iterations}) before finishing naturally."
logger.info(
f"Workflow turn {turn}/{self.max_iterations} (Instance ID: {instance_id})"
)
# Modify the response message to indicate forced stop
response_message[
"content"
] += "\n\nThe workflow was terminated because it reached the maximum iteration limit. The task may not be fully complete."
# Step 2: On turn 1, record the initial entry
if turn == 1:
yield ctx.call_activity(
self.record_initial_entry,
input={
"instance_id": instance_id,
"input": task or "Triggered without input.",
"source": source,
"source_workflow_instance_id": source_workflow_instance_id,
"output": "",
},
)
else:
# TODO: make this a single token, e.g. max_iterations_reached.
verdict = "model hit a natural stop point."
# Step 3: Retrieve workflow entry info for this instance
entry_info: dict = yield ctx.call_activity(
self.get_workflow_entry_info, input={"instance_id": instance_id}
)
source = entry_info.get("source")
source_workflow_instance_id = entry_info.get(
"source_workflow_instance_id"
)
# Step 8: Broadcasting Response to all agents if available
yield ctx.call_activity(
self.broadcast_message_to_agents, input={"message": response_message}
)
# Step 4: Generate Response with LLM
response_message: dict = yield ctx.call_activity(
self.generate_response,
input={"task": task, "instance_id": instance_id},
)
# Step 9: Respond to source agent if available
if source and source_workflow_instance_id:
# Step 5: Add the assistant's response message to the chat history
yield ctx.call_activity(
self.send_response_back,
input={
"response": response_message,
"target_agent": source,
"target_instance_id": source_workflow_instance_id,
},
self.append_assistant_message,
input={"instance_id": instance_id, "message": response_message},
)
# Step 10: Share Final Message
# Step 6: Handle tool calls response
tool_calls = response_message.get("tool_calls") or []
if tool_calls:
if not ctx.is_replaying:
logger.info(
f"Turn {turn}: executing {len(tool_calls)} tool call(s)"
)
# fanout parallel tool executions
parallel = [
ctx.call_activity(self.run_tool, input={"tool_call": tc})
for tc in tool_calls
]
tool_results: List[Dict[str, Any]] = yield self.when_all(parallel)
# Add tool results for the next iteration
for tr in tool_results:
yield ctx.call_activity(
self.append_tool_message,
input={"instance_id": instance_id, "tool_result": tr},
)
# 🔴 If this was the last turn, stop here—even though there were tool calls
if turn == self.max_iterations:
final_message = response_message
# Make sure content exists and is a string
final_message["content"] = final_message.get("content") or ""
final_message[
"content"
] += "\n\n⚠️ Stopped: reached max iterations."
break
# Otherwise, prepare for next turn: clear task so that generate_response() uses memory/history
task = None
continue # bump to next turn
# No tool calls → this is your final answer
final_message = response_message
# 🔴 If it happened to be the last turn, banner it
if turn == self.max_iterations:
# Again, ensure content is never None
final_message["content"] = final_message.get("content") or ""
final_message["content"] += "\n\n⚠️ Stopped: reached max iterations."
break # exit loop with final_message
else:
raise AgentError("Workflow ended without producing a final response")
except Exception as e:
logger.exception("Workflow error", exc_info=e)
final_message = {
"role": "assistant",
"content": f"⚠️ Unexpected error: {e}",
}
# Step 7: Broadcast the final response if a broadcast topic is set
if self.broadcast_topic_name:
yield ctx.call_activity(
self.finish_workflow,
input={"instance_id": instance_id, "message": response_message},
self.broadcast_message_to_agents,
input={"message": final_message},
)
if not ctx.is_replaying:
logger.info(
f"Workflow {instance_id} has been finalized with verdict: {verdict}"
)
# Respond to source agent if available
if source and source_workflow_instance_id:
yield ctx.call_activity(
self.send_response_back,
input={
"response": final_message,
"target_agent": source,
"target_instance_id": source_workflow_instance_id,
},
)
return response_message
# Save final output to workflow state
yield ctx.call_activity(
self.finalize_workflow,
input={
"instance_id": instance_id,
"final_output": final_message["content"],
},
)
# Step 7: Continue Workflow Execution
if isinstance(message, dict):
message.update({"task": None, "iteration": next_iteration_count})
# Set verdict for the workflow instance
if not ctx.is_replaying:
verdict = (
"max_iterations_reached" if turn == self.max_iterations else "completed"
)
logger.info(f"Workflow {instance_id} finalized: {verdict}")
ctx.continue_as_new(message)
# Return the final response message
return final_message
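Stripped of Dapr activities, state writes, and replay guards, the loop above reduces to this skeleton (stubbed helpers, illustrative only):
def react_loop(task, llm_generate, run_tools, max_iterations=3):
    final = None
    for turn in range(1, max_iterations + 1):
        reply = llm_generate(task)            # assistant message as a plain dict
        tool_calls = reply.get("tool_calls") or []
        if tool_calls:
            run_tools(tool_calls)             # results reach the next turn via memory
            if turn == max_iterations:        # forced stop even though tools ran
                final = dict(reply)
                final["content"] = (final.get("content") or "") + "\n\n⚠️ Stopped: reached max iterations."
                break
            task = None                       # later turns read history, not a fresh task
            continue
        final = reply                         # no tool calls -> final answer
        break
    if final is None:
        raise RuntimeError("loop ended without a final response")
    return final


react_loop("2+2?", lambda t: {"role": "assistant", "content": "4"}, lambda calls: None)
# -> {'role': 'assistant', 'content': '4'}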
@task
def record_initial_entry(
self,
instance_id: str,
input: str,
source: Optional[str],
source_workflow_instance_id: Optional[str],
output: str = "",
):
"""
Records the initial workflow entry for a new workflow instance.
Args:
instance_id (str): The workflow instance ID.
input (str): The input task for the workflow.
source (Optional[str]): The source of the workflow trigger.
source_workflow_instance_id (Optional[str]): The workflow instance ID of the source.
output (str): The output for the workflow entry (default: "").
"""
entry = DurableAgentWorkflowEntry(
input=input,
source=source,
source_workflow_instance_id=source_workflow_instance_id,
output=output,
)
self.state.setdefault("instances", {})[instance_id] = entry.model_dump(
mode="json"
)
@task
def get_workflow_entry_info(self, instance_id: str) -> Dict[str, Any]:
"""
Retrieves the 'source' and 'source_workflow_instance_id' for a given workflow instance.
Args:
instance_id (str): The workflow instance ID to look up.
Returns:
Dict[str, Any]: Dictionary containing:
- 'source': The source of the workflow trigger (str or None).
- 'source_workflow_instance_id': The workflow instance ID of the source (str or None).
Raises:
AgentError: If the entry is not found or invalid.
"""
workflow_entry = self.state.get("instances", {}).get(instance_id)
if workflow_entry is not None:
return {
"source": workflow_entry.get("source"),
"source_workflow_instance_id": workflow_entry.get(
"source_workflow_instance_id"
),
}
raise AgentError(f"No workflow entry found for instance_id={instance_id}")
@task
async def generate_response(
self, instance_id: str, task: Optional[Union[str, Dict[str, Any]]] = None
) -> Dict[str, Any]:
"""
Generates a response using the LLM based on the current conversation context.
Ask the LLM for the assistant's next message.
Args:
instance_id (str): The unique identifier of the workflow instance.
task (Optional[Union[str, Dict[str, Any]]]): The task or query provided by the user.
instance_id (str): The workflow instance ID.
task: The user's query for this turn (either a string or a dict),
or None if this is a follow-up iteration.
Returns:
Dict[str, Any]: The LLM response as a dictionary.
A plain dict of the LLM's response (choices, finish_reason, etc.).
Pydantic models are `.model_dump()`-ed; any other object is coerced via `dict()`.
"""
# Construct prompt messages
messages = self.construct_messages(task or {})
# Construct messages using only input_data; chat history handled internally
messages: List[Dict[str, Any]] = self.construct_messages(task or {})
user_message = self.get_last_message_if_user(messages)
# Store message in workflow state and local memory
if task:
task_message = {"role": "user", "content": task}
await self.update_workflow_state(
instance_id=instance_id, message=task_message
# Always work with a copy of the user message for safety
user_message_copy: Optional[Dict[str, Any]] = (
dict(user_message) if user_message else None
)
if task and user_message_copy:
# Add the new user message to memory only if input_data is provided and user message exists
user_msg = UserMessage(content=user_message_copy.get("content", ""))
self.memory.add_message(user_msg)
# Define DurableAgentMessage object for state persistence
msg_object = DurableAgentMessage(**user_message_copy)
inst: dict = self.state["instances"][instance_id]
inst.setdefault("messages", []).append(msg_object.model_dump(mode="json"))
inst["last_message"] = msg_object.model_dump(mode="json")
self.state.setdefault("chat_history", []).append(
msg_object.model_dump(mode="json")
)
# Save the state after appending the user message
self.save_state()
# Always print the last user message for context, even if no input_data is provided
if user_message_copy is not None:
# Ensure keys are str for mypy
self.text_formatter.print_message(
{str(k): v for k, v in user_message_copy.items()}
)
# Convert ToolMessage objects to dictionaries for LLM compatibility
tool_messages = []
for tool_msg in self.tool_history:
if isinstance(tool_msg, ToolMessage):
tool_messages.append(
{
"role": tool_msg.role,
"content": tool_msg.content,
"tool_call_id": tool_msg.tool_call_id,
}
)
else:
# Handle case where tool_msg is already a dict
tool_messages.append(tool_msg)
messages.extend(tool_messages)
# Generate response using the LLM
try:
response = self.llm.generate(
response: LLMChatResponse = self.llm.generate(
messages=messages,
tools=self.get_llm_tools(),
tool_choice=self.tool_choice,
**(
{"tool_choice": self.tool_choice}
if self.tool_choice is not None
else {}
),
)
# Convert ChatCompletion object to dictionary for workflow serialization
if hasattr(response, "model_dump"):
return response.model_dump()
elif isinstance(response, dict):
return response
else:
# Fallback: convert to string and wrap in dict
return {"content": str(response)}
# Get the first candidate from the response
response_message = response.get_message()
# Check if the response contains an assistant message
if response_message is None:
raise AgentError("LLM returned no assistant message")
# Convert the response message to a dict to work with JSON serialization
assistant_message = response_message.model_dump()
return assistant_message
except Exception as e:
logger.error(f"Error during chat generation: {e}")
raise AgentError(f"Failed during chat generation: {e}") from e
@task
def get_response_message(self, response: Dict[str, Any]) -> Dict[str, Any]:
"""
Extracts the response message from the first choice in the LLM response.
Args:
response (Dict[str, Any]): The response dictionary from the LLM, expected to contain a "choices" key.
Returns:
Dict[str, Any]: The extracted response message with the agent's name added.
"""
choices = response.get("choices", [])
response_message = choices[0].get("message", {})
return response_message
@task
def get_finish_reason(self, response: Dict[str, Any]) -> str:
"""
Extracts the finish reason from the LLM response, indicating why generation stopped.
Args:
response (Dict[str, Any]): The response dictionary from the LLM, expected to contain a "choices" key.
Returns:
str: The reason the model stopped generating tokens. Possible values include:
- "stop": Natural stop point or stop sequence encountered.
- "length": Maximum token limit reached.
- "content_filter": Content flagged by filters.
- "tool_calls": The model called a tool.
- "function_call" (deprecated): The model called a function.
- None: If no valid choice exists in the response.
"""
try:
if isinstance(response, dict):
choices = response.get("choices", [])
if choices and len(choices) > 0:
return choices[0].get("finish_reason", "unknown")
return "unknown"
except Exception as e:
logger.error(f"Error extracting finish reason: {e}")
return "unknown"
@task
def get_tool_calls(
self, response: Dict[str, Any]
) -> Optional[List[Dict[str, Any]]]:
"""
Extracts tool calls from the first choice in the LLM response, if available.
Args:
response (Dict[str, Any]): The response dictionary from the LLM, expected to contain "choices"
and potentially tool call information.
Returns:
Optional[List[Dict[str, Any]]]: A list of tool calls if present, otherwise None.
"""
choices = response.get("choices", [])
if not choices:
logger.warning("No choices found in LLM response.")
return None
# Save Tool Call Response Message
response_message = choices[0].get("message", {})
self.tool_history.append(response_message)
# Extract tool calls safely
tool_calls = choices[0].get("message", {}).get("tool_calls")
if not tool_calls:
logger.info("No tool calls found in LLM response.")
return None
return tool_calls
@task
async def execute_tool(self, instance_id: str, tool_call: Dict[str, Any]):
async def run_tool(self, tool_call: Dict[str, Any]) -> Dict[str, Any]:
"""
Executes a tool call by invoking the specified function with the provided arguments.
Args:
instance_id (str): The unique identifier of the workflow instance.
tool_call (Dict[str, Any]): A dictionary containing tool execution details, including the function name and arguments.
Returns:
Dict[str, Any]: A dictionary containing the tool call ID, tool name, parsed arguments, and execution result.
Raises:
AgentError: If the tool call is malformed or execution fails.
"""
function_details = tool_call.get("function", {})
function_name = function_details.get("name")
if not function_name:
raise AgentError("Missing function name in tool execution request.")
# Extract function name and raw args
fn_name = tool_call["function"]["name"]
raw_args = tool_call["function"].get("arguments", "")
# Parse JSON arguments (or empty dict)
try:
function_args = function_details.get("arguments", "")
logger.info(
f"Executing tool '{function_name}' with raw arguments: {function_args}"
)
function_args_as_dict = json.loads(function_args) if function_args else {}
logger.info(
f"Parsed arguments for '{function_name}': {function_args_as_dict}"
)
# Execute tool function
result = await self.tool_executor.run_tool(
function_name, **function_args_as_dict
)
logger.info(
f"Tool '{function_name}' executed successfully with result: {result}"
)
# Construct tool execution message payload
workflow_tool_message = {
"tool_call_id": tool_call.get("id"),
"function_name": function_name,
"function_args": function_args,
"content": str(result),
}
# Update workflow state and agent tool history
await self.update_workflow_state(
instance_id=instance_id, tool_message=workflow_tool_message
)
args = json.loads(raw_args) if raw_args else {}
except json.JSONDecodeError as e:
logger.error(
f"Invalid JSON in tool arguments for function '{function_name}': {function_args}"
)
raise AgentError(
f"Invalid JSON format in arguments for tool '{function_name}': {e}"
)
raise AgentError(f"Invalid JSON in tool args: {e}")
# Run the tool
logger.debug(f"Executing tool '{fn_name}' with args: {args}")
try:
result = await self.tool_executor.run_tool(fn_name, **args)
except Exception as e:
logger.error(f"Error executing tool '{function_name}': {e}", exc_info=True)
raise AgentError(f"Error executing tool '{function_name}': {e}") from e
logger.error(f"Error executing tool '{fn_name}': {e}", exc_info=True)
raise AgentError(f"Error executing tool '{fn_name}': {e}") from e
# Return the plain payload for later persistence
return {
"tool_call_id": tool_call["id"],
"tool_name": fn_name,
"tool_args": args,
"execution_result": str(result) if result is not None else "",
}
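For reference, a hedged example of the expected OpenAI-style tool_call payload and the record run_tool returns (all values illustrative):
tool_call = {
    "id": "call_123",
    "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
}
# await agent.run_tool(tool_call) would then yield something like:
# {
#     "tool_call_id": "call_123",
#     "tool_name": "get_weather",
#     "tool_args": {"city": "Paris"},
#     "execution_result": "Sunny, 24°C",
# }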
@task
async def broadcast_message_to_agents(self, message: Dict[str, Any]):
@@ -529,102 +482,79 @@ class DurableAgent(AgenticWorkflow, AgentBase):
await self.send_message_to_agent(name=target_agent, message=agent_response)
@task
async def finish_workflow(self, instance_id: str, message: Dict[str, Any]):
def append_assistant_message(
self, instance_id: str, message: Dict[str, Any]
) -> None:
"""
Finalizes the workflow by storing the provided message as the final output.
Append an assistant message into the workflow state.
Args:
instance_id (str): The unique identifier of the workflow instance.
summary (Dict[str, Any]): The final summary to be stored in the workflow state.
instance_id (str): The workflow instance ID.
message (Dict[str, Any]): The assistant message to append.
"""
# Store message in workflow state
await self.update_workflow_state(instance_id=instance_id, message=message)
# Store final output
await self.update_workflow_state(
instance_id=instance_id, final_output=message["content"]
message["name"] = self.name
# Convert the message to a DurableAgentMessage object
msg_object = DurableAgentMessage(**message)
# Look up the per-instance record in workflow state
inst: dict = self.state["instances"][instance_id]
inst.setdefault("messages", []).append(msg_object.model_dump(mode="json"))
inst["last_message"] = msg_object.model_dump(mode="json")
self.state.setdefault("chat_history", []).append(
msg_object.model_dump(mode="json")
)
# Add the assistant message to the tool history
self.memory.add_message(AssistantMessage(**message))
# Save the state after appending the assistant message
self.save_state()
# Print the assistant message
self.text_formatter.print_message(message)
async def update_workflow_state(
self,
instance_id: str,
message: Optional[Dict[str, Any]] = None,
tool_message: Optional[Dict[str, Any]] = None,
final_output: Optional[str] = None,
):
@task
def append_tool_message(
self, instance_id: str, tool_result: Dict[str, Any]
) -> None:
"""
Updates the workflow state by appending a new message or setting the final output.
Accepts both dict and AssistantWorkflowState as valid state types.
Append a tool-execution record to both the per-instance history and the agent's tool_history.
"""
# Accept both dict and AssistantWorkflowState
if isinstance(self.state, dict):
if "instances" not in self.state:
self.state["instances"] = {}
workflow_entry = self.state["instances"].get(instance_id)
if not workflow_entry:
raise ValueError(
f"No workflow entry found for instance_id {instance_id} in local state."
)
elif isinstance(self.state, AssistantWorkflowState):
if instance_id not in self.state.instances:
raise ValueError(
f"No workflow entry found for instance_id {instance_id} in AssistantWorkflowState."
)
workflow_entry = self.state.instances[instance_id]
else:
raise ValueError(f"Invalid state type: {type(self.state)}")
# Define a ToolMessage object from the tool result
tool_message = ToolMessage(
tool_call_id=tool_result["tool_call_id"],
name=tool_result["tool_name"],
content=tool_result["execution_result"],
)
# Define DurableAgentMessage object for state persistence
msg_object = DurableAgentMessage(**tool_message.model_dump())
# Define a ToolExecutionRecord object
# to store the tool execution details in the workflow state
tool_history_entry = ToolExecutionRecord(**tool_result)
# Look up the per-instance record in workflow state
inst: dict = self.state["instances"][instance_id]
inst.setdefault("messages", []).append(msg_object.model_dump(mode="json"))
inst.setdefault("tool_history", []).append(
tool_history_entry.model_dump(mode="json")
)
self.state.setdefault("chat_history", []).append(
msg_object.model_dump(mode="json")
)
# Update tool history and memory of agent
self.tool_history.append(tool_history_entry)
# Add the tool message to the agent's memory
self.memory.add_message(tool_message)
# Save the state after appending the tool message
self.save_state()
# Print the tool message
self.text_formatter.print_message(tool_message)
# Store user/assistant messages separately
if message is not None:
serialized_message = AssistantWorkflowMessage(**message).model_dump(
mode="json"
)
if isinstance(workflow_entry, dict):
workflow_entry.setdefault("messages", []).append(serialized_message)
workflow_entry["last_message"] = serialized_message
else:
workflow_entry.messages.append(AssistantWorkflowMessage(**message))
workflow_entry.last_message = AssistantWorkflowMessage(**message)
# Add to memory only if it's a user/assistant message
from dapr_agents.types.message import UserMessage
if message.get("role") == "user":
user_msg = UserMessage(content=message.get("content", ""))
self.memory.add_message(user_msg)
# Store tool execution messages separately in tool_history
if tool_message is not None:
serialized_tool_message = AssistantWorkflowToolMessage(
**tool_message
).model_dump(mode="json")
if isinstance(workflow_entry, dict):
workflow_entry.setdefault("tool_history", []).append(
serialized_tool_message
)
else:
workflow_entry.tool_history.append(
AssistantWorkflowToolMessage(**tool_message)
)
# Also update agent-level tool history (execution tracking)
agent_tool_message = ToolMessage(
tool_call_id=tool_message["tool_call_id"],
name=tool_message["function_name"],
content=tool_message["content"],
)
self.tool_history.append(agent_tool_message)
# Store final output
if final_output is not None:
if isinstance(workflow_entry, dict):
workflow_entry["output"] = final_output
workflow_entry["end_time"] = datetime.now().isoformat()
else:
workflow_entry.output = final_output
workflow_entry.end_time = datetime.now()
# Persist updated state
@task
def finalize_workflow(self, instance_id: str, final_output: str) -> None:
"""
Record the final output and end_time in the workflow state.
"""
end_time = datetime.now(timezone.utc)
end_time_str = end_time.isoformat()
inst: dict = self.state["instances"][instance_id]
inst["output"] = final_output
inst["end_time"] = end_time_str
self.save_state()
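The switch to timezone-aware UTC timestamps yields ISO strings with an explicit offset:
from datetime import datetime, timezone

print(datetime.now(timezone.utc).isoformat())
# e.g. 2025-09-04T20:14:26.123456+00:00 (naive datetime.now() would omit the offset)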
@message_router(broadcast=True)
@@ -640,42 +570,42 @@ class DurableAgent(AgenticWorkflow, AgentBase):
None: The function updates the agent's memory and ignores unwanted messages.
"""
try:
# Extract metadata safely from message attributes
# Extract metadata safely from message["_message_metadata"]
metadata = getattr(message, "_message_metadata", {})
if not isinstance(metadata, dict):
if not isinstance(metadata, dict) or not metadata:
logger.warning(
f"{self.name} received a broadcast message with invalid metadata format. Ignoring."
f"{self.name} received a broadcast message with missing or invalid metadata. Ignoring."
)
return
source = metadata.get("source", "unknown_source")
message_type = metadata.get("type", "unknown_type")
message_content = getattr(message, "content", "No Data")
logger.info(
f"{self.name} received broadcast message of type '{message_type}' from '{source}'."
)
# Ignore messages sent by this agent
if source == self.name:
logger.info(
f"{self.name} ignored its own broadcast message of type '{message_type}'."
)
return
# Log and process the valid broadcast message
logger.debug(
f"{self.name} processing broadcast message from '{source}'. Content: {message_content}"
)
# Store the message in local memory
self.memory.add_message(message)
# Define DurableAgentMessage object for state persistence
msg_object = DurableAgentMessage(**message.model_dump())
# Persist to global chat history
self.state.setdefault("chat_history", [])
self.state["chat_history"].append(msg_object.model_dump(mode="json"))
# Save the state after processing the broadcast message
self.save_state()
except Exception as e:
logger.error(f"Error processing broadcast message: {e}", exc_info=True)
@property
def agent_metadata(self) -> Optional[Dict[str, Any]]:
"""Get the agent metadata."""
return self._agent_metadata

View File

@@ -28,7 +28,6 @@ class TriggerAction(BaseModel):
None,
description="The specific task to execute. If not provided, the agent will act based on its memory or predefined behavior.",
)
iteration: Optional[int] = Field(0, description="")
workflow_instance_id: Optional[str] = Field(
default=None, description="Dapr workflow instance id from source if available"
)

View File

@@ -1,88 +1,63 @@
from pydantic import BaseModel, Field
from typing import List, Optional, Dict
from dapr_agents.types import ToolMessage
from dapr_agents.types import MessageContent, ToolExecutionRecord
from datetime import datetime
import uuid
class AssistantWorkflowMessage(BaseModel):
"""Represents a message exchanged within the workflow."""
class DurableAgentMessage(MessageContent):
id: str = Field(
default_factory=lambda: str(uuid.uuid4()),
description="Unique identifier for the message",
)
role: str = Field(
..., description="The role of the message sender, e.g., 'user' or 'assistant'"
)
content: str = Field(..., description="Content of the message")
timestamp: datetime = Field(
default_factory=datetime.now,
description="Timestamp when the message was created",
)
name: Optional[str] = Field(
default=None,
description="Optional name of the assistant or user sending the message",
)
class AssistantWorkflowToolMessage(ToolMessage):
"""Represents a Tool message exchanged within the workflow."""
id: str = Field(
default_factory=lambda: str(uuid.uuid4()),
description="Unique identifier for the message",
)
function_name: str = Field(
...,
description="Name of tool suggested by the model to run for a specific task.",
)
function_args: Optional[str] = Field(
None,
description="Tool arguments suggested by the model to run for a specific task.",
)
timestamp: datetime = Field(
default_factory=datetime.now,
description="Timestamp when the message was created",
)
class AssistantWorkflowEntry(BaseModel):
class DurableAgentWorkflowEntry(BaseModel):
"""Represents a workflow and its associated data, including metadata on the source of the task request."""
input: str = Field(
..., description="The input or description of the Workflow to be performed"
)
output: Optional[str] = Field(
None, description="The output or result of the Workflow, if completed"
default=None, description="The output or result of the Workflow, if completed"
)
start_time: datetime = Field(
default_factory=datetime.now,
description="Timestamp when the workflow was started",
)
end_time: Optional[datetime] = Field(
None, description="Timestamp when the workflow was completed or failed"
default_factory=datetime.now,
description="Timestamp when the workflow was completed or failed",
)
messages: List[AssistantWorkflowMessage] = Field(
default_factory=list, description="Messages exchanged during the workflow"
messages: List[DurableAgentMessage] = Field(
default_factory=list,
description="Messages exchanged during the workflow (user, assistant, or tool messages).",
)
last_message: Optional[AssistantWorkflowMessage] = Field(
last_message: Optional[DurableAgentMessage] = Field(
default=None, description="Last processed message in the workflow"
)
tool_history: List[AssistantWorkflowToolMessage] = Field(
tool_history: List[ToolExecutionRecord] = Field(
default_factory=list, description="Tool message exchanged during the workflow"
)
source: Optional[str] = Field(None, description="Entity that initiated the task.")
source_workflow_instance_id: Optional[str] = Field(
None,
default=None,
description="The workflow instance ID associated with the original request.",
)
class AssistantWorkflowState(BaseModel):
"""Represents the state of multiple Assistant workflows."""
class DurableAgentWorkflowState(BaseModel):
"""Represents the state of multiple Agent workflows."""
instances: Dict[str, AssistantWorkflowEntry] = Field(
instances: Dict[str, DurableAgentWorkflowEntry] = Field(
default_factory=dict,
description="Workflow entries indexed by their instance_id.",
)
chat_history: List[DurableAgentMessage] = Field(
default_factory=list,
description="Chat history of messages exchanged during the workflow.",
)
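A hedged construction sketch against these models, assuming it runs in the same module where they are defined (values illustrative):
state = DurableAgentWorkflowState()
state.instances["wf-001"] = DurableAgentWorkflowEntry(input="Summarize the Q3 report")
snapshot = state.model_dump(mode="json")   # JSON-safe dict, as persisted via save_state()
snapshot["instances"]["wf-001"]["source"]  # None until a trigger source is recorded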

View File

@@ -1 +1,3 @@
from .otel import DaprAgentsOTel
from .otel import DaprAgentsOtel
__all__ = ["DaprAgentsOtel"]

View File

@@ -16,7 +16,7 @@ from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExp
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
class DaprAgentsOTel:
class DaprAgentsOtel:
"""
OpenTelemetry configuration for Dapr agents.
"""

View File

@@ -55,7 +55,9 @@ class ColorTextFormatter:
lines = text.split("\n")
for i, line in enumerate(lines):
formatted_line = self.format_text(line, color)
print(formatted_line, end="\n" if i < len(lines) - 1 else "")
print(
formatted_line, flush=True, end="\n" if i < len(lines) - 1 else ""
)
print(COLORS["reset"]) # Ensure terminal color is reset at the end

View File

@@ -1,4 +1,14 @@
from .embedder import NVIDIAEmbedder, OpenAIEmbedder, SentenceTransformerEmbedder
from .fetcher import ArxivFetcher
from .reader import PyMuPDFReader, PyPDFReader
from .splitter import TextSplitter
from .embedder import OpenAIEmbedder, SentenceTransformerEmbedder, NVIDIAEmbedder
__all__ = [
"ArxivFetcher",
"PyMuPDFReader",
"PyPDFReader",
"TextSplitter",
"OpenAIEmbedder",
"SentenceTransformerEmbedder",
"NVIDIAEmbedder",
]

View File

@@ -1,3 +1,5 @@
from .nvidia import NVIDIAEmbedder
from .openai import OpenAIEmbedder
from .sentence import SentenceTransformerEmbedder
from .nvidia import NVIDIAEmbedder
__all__ = ["OpenAIEmbedder", "SentenceTransformerEmbedder", "NVIDIAEmbedder"]

View File

@@ -1 +1,3 @@
from .arxiv import ArxivFetcher
__all__ = ["ArxivFetcher"]

View File

@@ -1 +1,3 @@
from .pdf import PyMuPDFReader, PyPDFReader
__all__ = ["PyMuPDFReader", "PyPDFReader"]
