Merge branch 'v1.16' into feat-conversation-api-toolcalling

This commit is contained in:
Mark Fussell 2025-09-09 20:56:54 -07:00 committed by GitHub
commit e43436b0cf
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
114 changed files with 2907 additions and 763 deletions

View File

@ -14,23 +14,27 @@ This folder contains a template and infrastructure as code to recreate and recon
1) Export any environment variables you want to override with your values using `./infra/main.parameters.json` as a reference for the variable names. e.g.
In a new terminal:
Bash/sh/zsh:
```bash
export AZURE_RESOURCE_GROUP=rg-dapr-docs-test
export IDENTITY_RESOURCE_GROUP=rg-my-identities
export AZURE_STATICWEBSITE_NAME=daprdocs-latest
export AZURE_RESOURCE_GROUP=docs-website
export IDENTITY_RESOURCE_GROUP=dapr-identities
export AZURE_STATICWEBSITE_NAME=daprdocs-v1-1
```
Where `daprdocs-v1-1` should be updated with the new preview version.
PowerShell
```PowerShell
setx AZURE_RESOURCE_GROUP "rg-dapr-docs-test"
setx IDENTITY_RESOURCE_GROUP "rg-my-identities"
setx AZURE_STATICWEBSITE_NAME "daprdocs-latest"
setx AZURE_RESOURCE_GROUP "docs-website"
setx IDENTITY_RESOURCE_GROUP "dapr-identities"
setx AZURE_STATICWEBSITE_NAME "daprdocs-v1-1"
```
Where `daprdocs-v1-1` should be updated with the new preview version.
This assumes you have an existing [user-assigned managed identity](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) (see L39 in `./infra/main.bicep` to use or modify the name) in a resource group that you can reference as the runtime identity of this static web app. We recommend storing this identity in a different resource group from your application, to keep the permissions and lifecycles of your identity and your web app separate. We also recommend narrowly limiting who has access to view, contribute to, or own this identity, and applying it only to single resource scopes, not to entire resource groups or subscriptions, to avoid elevation of privileges.
2) Deploy using the Azure Dev CLI
@ -40,7 +44,7 @@ Start by creating a side-by-side azd environment:
azd env new
```
For example, you can name the new environment something like: `dapr-docs-v1-15`.
For example, you can name the new environment something like: `dapr-docs-v1-1`.
Now, deploy the Dapr docs SWA in the new azd environment using the following command:
@ -48,7 +52,7 @@ Now, deploy the Dapr docs SWA in the new azd environment using the following com
azd up
```
You will be prompted for the subscription and location (region) to use. The Resource Group and Static Web App will now be created and usable. Typical deployment times are only 20-60 seconds.
## Configure the Static Web App in portal.azure.com
@ -58,4 +62,4 @@ You will be prompted for the subscription and location (region) to use. The Res
## Configure your CI/CD pipeline
You will need a rotatable token or ideally a managed identity (coming soon) for your pipeline to have Web publishing access grants to the Static Web App. Get the token from the Overview blade -> Manage Access Token command of the SWA, and store it in the vault/secret for the repo matching your GitHub Action (or other CI/CD pipeline)'s workflow file. One example for the current/main release of Dapr docs is [here](https://github.com/dapr/docs/blob/v1.13/.github/workflows/website-root.yml#L57). This is an elevated operation that likely needs an admin or maintainer to perform.
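For example, with the GitHub CLI the token can be stored as a repository secret. This is a minimal sketch; the secret name below is an assumption and should match whatever your workflow file references.
```bash
# Store the SWA deployment token as a repository secret used by the docs workflow.
# The secret name is illustrative; use the name referenced in your workflow file.
gh secret set AZURE_STATIC_WEB_APPS_API_TOKEN --repo dapr/docs --body "<deployment-token>"
```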

View File

@ -39,7 +39,7 @@ jobs:
- name: Build Hugo Website
run: |
git config --global --add safe.directory /github/workspace
hugo
hugo --minify
- name: Deploy Website
id: builddeploy
uses: Azure/static-web-apps-deploy@v1

View File

@ -93,6 +93,22 @@ updatedAt | timestamp | Timestamp of the actor registered/updated.
}
```
## Disabling the Placement service
The Placement service can be disabled with the following setting:
```
global.actors.enabled=false
```
The Placement service is not deployed with this setting in Kubernetes mode. This not only disables actor deployment, but also disables workflows, given that workflows use actors. This setting only applies in Kubernetes mode; in self-hosted mode, initializing Dapr with `--slim` excludes the Placement service from being deployed.
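For example, the setting can be applied when installing or upgrading Dapr with Helm. This is a sketch only; the chart release name and namespace below are assumptions.
```bash
# Upgrade (or install) the Dapr control plane with the Placement service disabled.
# Assumes the dapr/dapr Helm chart released as "dapr" into the dapr-system namespace.
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --set global.actors.enabled=false
```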
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page](https://docs.dapr.io/operations/hosting/kubernetes/).
## Related links
[Learn more about the Placement API.]({{% ref placement_api %}})

View File

@ -5,7 +5,7 @@ linkTitle: "Scheduler"
description: "Overview of the Dapr scheduler service"
---
The Dapr Scheduler service is used to schedule different types of jobs, running in [self-hosted mode]({{% ref self-hosted %}}) or on [Kubernetes]({{% ref kubernetes %}}).
- Jobs created through the Jobs API
- Actor reminder jobs (used by the actor reminders)
- Actor reminder jobs created by the Workflow API (which uses actor reminders)
@ -14,10 +14,13 @@ From Dapr v1.15, the Scheduler service is used by default to schedule actor remi
There is no concept of a leader Scheduler instance. All Scheduler service replicas are considered peers. All receive jobs to be scheduled for execution and the jobs are allocated between the available Scheduler service replicas for load balancing of the trigger events.
The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded etcd database.
The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in the Etcd database.
<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
By default, Etcd is embedded in the Scheduler service, which means that the Scheduler service runs its own instance of Etcd.
See [Scheduler service flags]({{% ref "#flag-tuning" %}}) for more information on how to configure the Scheduler service.
## Actor Reminders
Prior to Dapr v1.15, [actor reminders]({{% ref "actors-timers-reminders#actor-reminders" %}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{% ref "support-preview-features#current-preview-features" %}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
@ -73,6 +76,45 @@ The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Hel
When a Kubernetes namespace is deleted, all the Job and Actor Reminders corresponding to that namespace are deleted.
## Docker Compose Example
Here's how to expose the etcd ports in a Docker Compose configuration for standalone mode.
When running in HA mode, you only need to expose the ports for one scheduler instance to perform backup operations.
```yaml
version: "3.5"
services:
scheduler-0:
image: "docker.io/daprio/scheduler:1.16.0"
command:
- "./scheduler"
- "--etcd-data-dir=/var/run/dapr/scheduler"
- "--id=scheduler-0"
- "--etcd-initial-cluster=scheduler-0=http://scheduler-0:2380,scheduler-1=http://scheduler-1:2380,scheduler-2=http://scheduler-2:2380"
ports:
- 2379:2379
volumes:
- ./dapr_scheduler/0:/var/run/dapr/scheduler
scheduler-1:
image: "docker.io/daprio/scheduler:1.16.0"
command:
- "./scheduler"
- "--etcd-data-dir=/var/run/dapr/scheduler"
- "--id=scheduler-1"
- "--etcd-initial-cluster=scheduler-0=http://scheduler-0:2380,scheduler-1=http://scheduler-1:2380,scheduler-2=http://scheduler-2:2380"
volumes:
- ./dapr_scheduler/1:/var/run/dapr/scheduler
scheduler-2:
image: "docker.io/daprio/scheduler:1.16.0"
command:
- "./scheduler"
- "--etcd-data-dir=/var/run/dapr/scheduler"
- "--id=scheduler-2"
- "--etcd-initial-cluster=scheduler-0=http://scheduler-0:2380,scheduler-1=http://scheduler-1:2380,scheduler-2=http://scheduler-2:2380"
volumes:
- ./dapr_scheduler/2:/var/run/dapr/scheduler
```
## Back Up and Restore Scheduler Data
In production environments, it's recommended to perform periodic backups of this data at an interval that aligns with your recovery point objectives.
@ -89,32 +131,6 @@ Here's how to port forward and connect to the etcd instance:
kubectl port-forward svc/dapr-scheduler-server 2379:2379 -n dapr-system
```
#### Docker Compose Example
Here's how to expose the etcd ports in a Docker Compose configuration for standalone mode:
```yaml
scheduler-1:
image: "diagrid/dapr/scheduler:dev110-linux-arm64"
command: ["./scheduler",
"--etcd-data-dir", "/var/run/dapr/scheduler",
"--replica-count", "3",
"--id","scheduler-1",
"--initial-cluster", "scheduler-1=http://scheduler-1:2380,scheduler-0=http://scheduler-0:2380,scheduler-2=http://scheduler-2:2380",
"--etcd-client-ports", "scheduler-0=2379,scheduler-1=2379,scheduler-2=2379",
"--etcd-client-http-ports", "scheduler-0=2330,scheduler-1=2330,scheduler-2=2330",
"--log-level=debug"
]
ports:
- 2379:2379
volumes:
- ./dapr_scheduler/1:/var/run/dapr/scheduler
networks:
- network
```
When running in HA mode, you only need to expose the ports for one scheduler instance to perform backup operations.
### Performing Backup and Restore
Once you have access to the etcd ports, you can follow the [official etcd backup and restore documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) to perform backup and restore operations. The process involves using standard etcd commands to create snapshots and restore from them.
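As a rough sketch, once a client port is reachable (for example, via the port forward shown earlier), a snapshot can be taken and later restored with `etcdctl`. The endpoint, file name, and data directory below are assumptions.
```bash
# Take a snapshot of the Scheduler's etcd data over the exposed client port.
etcdctl --endpoints=http://localhost:2379 snapshot save scheduler-backup.db

# Inspect the snapshot before relying on it.
etcdctl snapshot status scheduler-backup.db --write-out=table

# Restore the snapshot into a fresh data directory for the Scheduler to mount.
etcdctl snapshot restore scheduler-backup.db --data-dir=./dapr_scheduler/restored
```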
@ -135,6 +151,83 @@ If you are not using any features that require the Scheduler service (Jobs API,
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{% ref kubernetes %}}).
## Flag tuning
A number of Etcd flags are exposed on the Scheduler which can be used to tune it for your deployment use case.
### External Etcd database
Scheduler can be configured to use an external Etcd database instead of the embedded one inside the Scheduler service replicas.
You may want to decouple the storage volume from the Scheduler StatefulSet or container, depending on how the cluster or environment is administered or which storage backend is used.
It can also be desirable to move the persistent storage outside of the Scheduler runtime entirely, or to reuse an existing Etcd cluster provider.
Externalizing the Etcd database also means that the Scheduler replicas can be horizontally scaled at will; note, however, that job triggering is paused during scale events.
Scheduler replica count does not need to match the [Etcd node count constraints](https://etcd.io/docs/v3.3/faq/#what-is-maximum-cluster-size).
To use an external Etcd cluster, set the `--etcd-embed` flag to `false` and provide the `--etcd-client-endpoints` flag with the endpoints of your Etcd cluster.
Optionally also include `--etcd-client-username` and `--etcd-client-password` flags for authentication if the Etcd cluster requires it.
```
--etcd-embed bool When enabled, the Etcd database is embedded in the scheduler server. If false, the scheduler connects to an external Etcd cluster using the --etcd-client-endpoints flag. (default true)
--etcd-client-endpoints stringArray Comma-separated list of etcd client endpoints to connect to. Only used when --etcd-embed is false.
--etcd-client-username string Username for etcd client authentication. Only used when --etcd-embed is false.
--etcd-client-password string Password for etcd client authentication. Only used when --etcd-embed is false.
```
Helm:
```yaml
dapr_scheduler.etcdEmbed=true
dapr_scheduler.etcdClientEndpoints=[]
dapr_scheduler.etcdClientUsername=""
dapr_scheduler.etcdClientPassword=""
```
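A hedged example of switching to an external Etcd cluster at install time with Helm; the release name, namespace, endpoints, and credentials below are placeholders.
```bash
# Point the Scheduler at an external Etcd cluster instead of the embedded one.
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --set dapr_scheduler.etcdEmbed=false \
  --set 'dapr_scheduler.etcdClientEndpoints={http://my-etcd-0:2379,http://my-etcd-1:2379}' \
  --set dapr_scheduler.etcdClientUsername=scheduler \
  --set dapr_scheduler.etcdClientPassword=changeme
```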
### Etcd leadership election tuning
To speed up leader election for replacement nodes in the event of a failure, the following flag may be used to accelerate the election process.
```
--etcd-initial-election-tick-advance Whether to fast-forward initial election ticks on boot for faster election. When it is true, then local member fast-forwards election ticks to speed up “initial” leader election trigger. This benefits the case of larger election ticks. Disabling this would slow down initial bootstrap process for cross datacenter deployments. Make your own tradeoffs by configuring this flag at the cost of slow initial bootstrap.
```
Helm:
```yaml
dapr_scheduler.etcdInitialElectionTickAdvance=true
```
### Storage tuning
The following options can be used to tune the embedded Etcd storage to the needs of your deployment.
A deeper understanding of what these flags do can be found in the [Etcd documentation](https://etcd.io/docs/v3.5/op-guide/configuration/).
{{% alert title="Note" color="primary" %}}
Changing these flags can greatly change the performance and behaviour of the Scheduler, so caution is advised when modifying them from the default set by Dapr.
Changing these settings should always be done first in a testing environment, and monitored closely, before applying to production.
{{% /alert %}}
```
--etcd-backend-batch-interval string Maximum time before committing the backend transaction. (default "50ms")
--etcd-backend-batch-limit int Maximum operations before committing the backend transaction. (default 5000)
--etcd-compaction-mode string Compaction mode for etcd. Can be 'periodic' or 'revision' (default "periodic")
--etcd-compaction-retention string Compaction retention for etcd. Can express time or number of revisions, depending on the value of 'etcd-compaction-mode' (default "10m")
--etcd-experimental-bootstrap-defrag-threshold-megabytes uint Minimum number of megabytes needed to be freed for etcd to consider running defrag during bootstrap. Needs to be set to non-zero value to take effect. (default 100)
--etcd-max-snapshots uint Maximum number of snapshot files to retain (0 is unlimited). (default 10)
--etcd-max-wals uint Maximum number of write-ahead logs to retain (0 is unlimited). (default 10)
--etcd-snapshot-count uint Number of committed transactions to trigger a snapshot to disk. (default 10000)
```
Helm:
```yaml
dapr_scheduler.etcdBackendBatchInterval="50ms"
dapr_scheduler.etcdBackendBatchLimit=5000
dapr_scheduler.etcdCompactionMode="periodic"
dapr_scheduler.etcdCompactionRetention="10m"
dapr_scheduler.etcdDefragThresholdMB=100
dapr_scheduler.etcdMaxSnapshots=10
```
## Related links
[Learn more about the Jobs API.]({{% ref jobs_api %}})

View File

@ -21,6 +21,25 @@ Dapr provides end-to-end security with the service invocation API, with the abil
<img src="/images/security-end-to-end-communication.png" width=1000>
## Application Identity
In Dapr, Application Identity is built around the concept of an App ID.
The App ID is the single atomic unit of identity in Dapr:
- Every Dapr-enabled application has an App ID. Multiple replicas of the application share the same App ID.
- All routing, service discovery, security policies, and access control in Dapr are derived from this App ID.
- Service-to-service communication in Dapr uses the App ID instead of relying on IP addresses or hostnames, enabling stable and portable addressing across environments.
For example, when one service calls another using Dapr's service invocation API, it calls the target by its App ID rather than its network location.
This abstraction ensures that security policies, mutual TLS (mTLS) certificates, and access controls consistently apply at the application identity level.
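For instance, a call through the Dapr sidecar's HTTP API addresses the target by its App ID; the app ID `checkout`, method `orders`, and payload below are illustrative.
```bash
# Invoke the "orders" method on the application whose App ID is "checkout"
# via the local Dapr sidecar. No IP address or hostname of the target is needed.
curl -X POST http://localhost:3500/v1.0/invoke/checkout/method/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": 100}'
```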
## Namespaces and Scoping
While App IDs uniquely identify applications, namespaces provide an additional layer of scoping and isolation, especially in multi-tenant or large environments.
- Namespaces allow operators to deploy Dapr applications in logically separated groups.
- Two applications can have the same App ID in different namespaces without conflicting because security, routing, and discovery are namespace-aware.
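To illustrate, service invocation can target an App ID in another namespace by qualifying the App ID with the namespace; the `checkout` app and `production` namespace below are hypothetical.
```bash
# Call the "checkout" app deployed in the "production" namespace
# by appending the namespace to the App ID.
curl -X POST http://localhost:3500/v1.0/invoke/checkout.production/method/orders \
  -H "Content-Type: application/json" \
  -d '{"orderId": 100}'
```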
## Service invocation scoping access policy
Dapr applications can be scoped to namespaces for deployment and security. You can call between services deployed to different namespaces. Read the [Service invocation across namespaces]({{% ref "service-invocation-namespaces" %}}) article for more details.
@ -134,6 +153,12 @@ With Dapr OAuth 2.0 middleware, you can enable OAuth authorization on Dapr endpo
You can adopt common network security technologies, such as network security groups (NSGs), demilitarized zones (DMZs), and firewalls, to provide layers of protection over your networked resources. For example, unless configured to talk to an external binding target, Dapr sidecars don't open connections to the internet and most binding implementations only use outbound connections. You can design your firewall rules to allow outbound connections only through designated ports.
## Run as non-root in Kubernetes
When running in Kubernetes, Dapr services ensure each process is running as non-root. This is done by checking that the UID & GID of the process are `65532`, and exiting with a fatal error if they are not as expected. If you must run with a non-default UID & GID in Kubernetes, set the following environment variable to skip this check.
```bash
DAPR_UNSAFE_SKIP_CONTAINER_UID_GID_CHECK="true"
```
# Security policies
Dapr has an extensive set of security policies you can apply to your applications. You can scope what they are able to do, either through a policy setting in the sidecar configuration, or with the component specification.

View File

@ -1,16 +1,16 @@
---
type: docs
title: "Contributing to Dapr agents"
linkTitle: "Dapr agents"
title: "Contributing to Dapr Agents"
linkTitle: "Dapr Agents"
weight: 85
description: Guidelines for contributing to Dapr agents
description: Guidelines for contributing to Dapr Agents
---
When contributing to Dapr agents, the following rules and best-practices should be followed.
When contributing to Dapr Agents, the following rules and best-practices should be followed.
## Examples
The examples directory contains code samples for users to run to try out specific functionality of the various Dapr agents packages and extensions. When writing new and updated samples keep in mind:
The examples directory contains code samples for users to run to try out specific functionality of the various Dapr Agents packages and extensions. When writing new and updated samples keep in mind:
- All examples should be runnable on Windows, Linux, and macOS. While Python code is consistent among operating systems, any pre/post example commands should provide options through [codetabs]({{< ref "contributing-docs.md#tabbed-content" >}})
- Contain steps to download/install any required prerequisites. Someone coming in with a fresh OS install should be able to start on the example and complete it without an error. Links to external download pages are fine.

View File

@ -296,6 +296,7 @@ Next, create a new website for the future Dapr release. To do this, you'll need
- Configure DNS via request from CNCF.
#### Prerequisites
- Docs maintainer status in the `dapr/docs` repo.
- Access to the active Dapr Azure Subscription with Contributor or Owner access to create resources.
- [Azure Developer CLI](https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd?tabpane=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) installed on your machine.
@ -310,7 +311,7 @@ Deploy a new Azure Static Web App for the future Dapr release. For this example,
```bash
cd .github/iac/swa
```
1. Log into Azure Developer CLI (`azd`) using the Dapr Azure subscription.
```bash
@ -319,47 +320,49 @@ Deploy a new Azure Static Web App for the future Dapr release. For this example,
1. In the browser prompt, verify you're logging in as Dapr and complete the login.
1. In a new terminal, replace the following values with the website values you prefer.
1. In the same terminal, set these environment variables:
```bash
export AZURE_RESOURCE_GROUP=rg-dapr-docs-test
export IDENTITY_RESOURCE_GROUP=rg-my-identities
export AZURE_STATICWEBSITE_NAME=daprdocs-latest
export AZURE_RESOURCE_GROUP=docs-website
export IDENTITY_RESOURCE_GROUP=dapr-identities
export AZURE_STATICWEBSITE_NAME=daprdocs-v1-1
```
Where `daprdocs-v1-1` should be updated with the new preview version.
1. Create a new [`azd` environment](https://learn.microsoft.com/azure/developer/azure-developer-cli/faq#what-is-an-environment-name).
```bash
azd env new
```
1. When prompted, enter a new environment name. For this example, you'd name the environment something like: `dapr-docs-v1-1`.
1. Once the environment is created, deploy the Dapr docs SWA into the new environment using the following command:
```bash
azd up
```
1. When prompted, select an Azure subscription and location. Match these to the Dapr Azure subscription.
1. When prompted, select an Azure subscription (Dapr Tests) and deployment location (West US 2).
#### Configure the SWA in the Azure portal
Head over to the Dapr subscription in the [Azure portal](https://portal.azure.com) and verify that your new Dapr docs site has been deployed.
Optionally, grant the correct minimal permissions for inbound publishing and outbound access to dependencies using the **Static Web App** > **Access control (IAM)** blade in the portal.
#### Configure DNS
1. In the Azure portal, from the new SWA you just created, navigate to **Custom domains** from the left side menu.
1. Copy the "CNAME" value of the web app.
1. Using your own account, [submit a CNCF ticket](https://jira.linuxfoundation.org/secure/Dashboard.jspa) to create a new domain name mapped to the CNAME value you copied. For this example, to create a new domain for Dapr v1.1, you'd request to map to `v1-1.docs.dapr.io`.
Request resolution may take some time.
1. Once the new domain has been confirmed, return to the static web app in the portal.
1. Navigate to the **Custom domains** blade and select **+ Add**.
1. Select **Custom domain on other DNS**.
1. Enter `v1-1.docs.dapr.io` under **Domain name**. Click **Next**.
1. Keep **Hostname record type** as `CNAME`, and copy the value of **Value**.
1. Click **Add**.

View File

@ -29,9 +29,10 @@ Supported formats:
If `period` is omitted, the callback will be invoked only once.
Supported formats:
- time.Duration format, e.g. `2h30m`
- time.Duration format (sub-second precision is supported when using duration values), e.g. `2h30m`, `500ms`
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`, `R5/PT1M30S`
Note: Actual trigger resolution may vary by runtime and environment.
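As a sketch of the two formats, here is a reminder created through the actors HTTP API; the actor type `MyActor`, actor ID `actor-1`, and reminder name `checkStatus` are illustrative.
```bash
# Fire 9 seconds after creation, then repeat every 2h30m (time.Duration format).
curl -X POST http://localhost:3500/v1.0/actors/MyActor/actor-1/reminders/checkStatus \
  -H "Content-Type: application/json" \
  -d '{"dueTime": "0h0m9s0ms", "period": "2h30m"}'

# Repeat 5 times at 1m30s intervals (ISO 8601 duration format).
curl -X POST http://localhost:3500/v1.0/actors/MyActor/actor-1/reminders/checkStatus \
  -H "Content-Type: application/json" \
  -d '{"dueTime": "0h0m9s0ms", "period": "R5/PT1M30S"}'
```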
---
`ttl` is an optional parameter that sets the time at which, or the time interval after which, the timer/reminder expires and is deleted. If `ttl` is omitted, no restrictions are applied.

View File

@ -112,8 +112,6 @@ The code examples below leverage Dapr SDKs to invoke the output bindings endpoin
Here's an example of using a console app with top-level statements in .NET 6+:
```csharp
using System.Text;
using System.Threading.Tasks;

View File

@ -121,8 +121,6 @@ Below are code examples that leverage Dapr SDKs to demonstrate an input binding.
The following example demonstrates how to configure an input binding using ASP.NET Core controllers.
```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
@ -152,6 +150,15 @@ app.MapPost("checkout", ([FromBody] int orderId) =>
});
```
The following example demonstrates how to configure the same input binding using a minimal API approach:
```csharp
app.MapPost("checkout", ([FromBody] int orderId) =>
{
Console.WriteLine($"Received Message: {orderId}");
return $"CID{orderId}"
});
```
{{% /tab %}}
{{% tab "Java" %}}

View File

@ -75,7 +75,7 @@ Want to put the Dapr conversation API to the test? Walk through the following qu
| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Conversation quickstart]({{% ref conversation-quickstart %}}) | Learn how to interact with Large Language Models (LLMs) using the conversation API. |
### Start using the conversation API directly in your app

View File

@ -10,8 +10,7 @@ Now that you've learned about the [jobs building block]({{% ref jobs-overview %}
into the features and concepts included with Dapr Jobs and the various SDKs. Dapr Jobs:
- Provides a robust and scalable API for scheduling operations to be triggered in the future.
- Exposes several capabilities which are common across all supported languages.
- Supports sub-second precision when using duration values (for example `500ms`). Actual trigger resolution may vary by runtime; Cron-based schedules are at the seconds level only.
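A hedged sketch of scheduling a job with a sub-second `dueTime` through the jobs HTTP API (alpha); the job name and payload are illustrative and the API surface may change between releases.
```bash
# Schedule a one-shot job named "prod-db-backup" to trigger 500ms from now.
# dueTime accepts duration values, which is where sub-second precision applies.
curl -X POST http://localhost:3500/v1.0-alpha1/jobs/prod-db-backup \
  -H "Content-Type: application/json" \
  -d '{"dueTime": "500ms", "data": {"task": "db-backup"}}'
```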
## Job identity

View File

@ -120,6 +120,12 @@ Even if the message fails to deliver, or your application crashes, Dapr attempts
All Dapr pub/sub components support the at-least-once guarantee.
### Subscription startup reliability
Dapr automatically retries failed subscription startups to improve reliability during deployment scenarios. This ensures your pub/sub applications remain resilient even when facing temporary connectivity or permission issues.
When Dapr encounters errors starting subscriptions, it shows an error message in the logs and continues to try to start the subscription.
### Consumer groups and competing consumers pattern
Dapr handles the burden of dealing with consumer groups and the competing consumers pattern. In the competing consumers pattern, multiple application instances using a single consumer group compete for the message. Dapr enforces the competing consumer pattern when replicas use the same `app-id` without explicit consumer group overrides.
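As an illustration, in Kubernetes simply scaling out an application is enough to get competing consumers, because every replica shares the same `app-id`; the deployment name below is hypothetical.
```bash
# All replicas of this deployment share the same app-id, so they form one
# consumer group and each published message is delivered to only one replica.
kubectl scale deployment order-processor --replicas=3
```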

View File

@ -137,7 +137,7 @@ app.MapGet("/dapr/subscribe", () =>
route = "/messages",
metadata = new Dictionary<string, string>
{
{ "isRawPayload", "true" },
{ "rawPayload", "true" },
{ "content-type", "application/json" }
}
}

View File

@ -199,8 +199,6 @@ Messages are pulled by the application from Dapr. This means no endpoint is need
Any number of pubsubs and topics can be subscribed to at once.
As messages are sent to the given message handler code, there is no concept of routes or bulk subscriptions.
> **Note:** Only a single pubsub/topic pair per application may be subscribed at a time.
The example below shows the different ways to stream subscribe to a topic.
{{< tabpane text=true >}}

View File

@ -123,6 +123,24 @@ spec:
key: tls.key
```
### Server-Sent Events
SSE enables real-time communication with streaming servers and MCP servers.
HTTP endpoints support [Server-Sent Events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events).
To use SSE, set the `Accept` header to `text/event-stream` in the `HTTPEndpoint` resource or in the service invocation request.
```yaml
apiVersion: dapr.io/v1alpha1
kind: HTTPEndpoint
metadata:
name: "mcp-server"
spec:
baseUrl: https://my-mcp-server:443
headers:
- name: "Accept"
value: "test/event-stream"
```
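A hedged example of invoking the endpoint above through the sidecar while requesting SSE per call rather than in the resource; the method path `events` is an assumption.
```bash
# Invoke the HTTPEndpoint named "mcp-server" through the local Dapr sidecar,
# asking for a Server-Sent Events stream on this particular request.
curl -N http://localhost:3500/v1.0/invoke/mcp-server/method/events \
  -H "Accept: text/event-stream"
```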
## Related Links
- [HTTPEndpoint reference]({{% ref httpendpoints-schema %}})

View File

@ -95,7 +95,7 @@ You can group write, update, and delete operations into a request, which are the
### Actor state
Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{% ref state_api.md %}}) and the [actors API reference]({{% ref actors_api.md %}}) to learn more about state stores for actors.
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the value of the property `actorStateStore` to `true` in the state store component's metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. If your state store is backed by a distributed database, you must make sure that it provides strong consistency. Read the [state API reference]({{% ref state_api.md %}}) and the [actors API reference]({{% ref actors_api.md %}}) to learn more about state stores for actors.
#### Time to Live (TTL) on actor state
You should always set the TTL metadata field (`ttlInSeconds`), or the equivalent API call in your chosen SDK, when saving actor state to ensure that state is eventually removed. Read [actors overview]({{% ref actors-overview.md %}}) for more information.
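As a sketch, here is actor state saved with a TTL through the actors HTTP API; the actor type, actor ID, and key are illustrative.
```bash
# Upsert a piece of actor state that is automatically removed after one hour.
curl -X POST http://localhost:3500/v1.0/actors/MyActor/actor-1/state \
  -H "Content-Type: application/json" \
  -d '[
    {
      "operation": "upsert",
      "request": {
        "key": "completed-orders",
        "value": 42,
        "metadata": { "ttlInSeconds": "3600" }
      }
    }
  ]'
```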

View File

@ -867,7 +867,8 @@ public class DemoWorkflow extends Workflow {
- The `TestWorkflow` method
- Creating the workflow with input and output.
- API calls. In the example below, these calls start the workflow and invoke the workflow activities.
```go
package main
@ -877,8 +878,11 @@ import (
"log"
"time"
"github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/workflow"
"github.com/dapr/durabletask-go/api"
"github.com/dapr/durabletask-go/backend"
"github.com/dapr/durabletask-go/client"
"github.com/dapr/durabletask-go/task"
dapr "github.com/dapr/go-sdk/client"
)
var stage = 0
@ -888,110 +892,68 @@ const (
)
func main() {
w, err := workflow.NewWorker()
if err != nil {
log.Fatal(err)
}
registry := task.NewTaskRegistry()
fmt.Println("Worker initialized")
if err := w.RegisterWorkflow(TestWorkflow); err != nil {
if err := registry.AddOrchestrator(TestWorkflow); err != nil {
log.Fatal(err)
}
fmt.Println("TestWorkflow registered")
if err := w.RegisterActivity(TestActivity); err != nil {
if err := registry.AddActivity(TestActivity); err != nil {
log.Fatal(err)
}
fmt.Println("TestActivity registered")
// Start workflow runner
if err := w.Start(); err != nil {
log.Fatal(err)
daprClient, err := dapr.NewClient()
if err != nil {
log.Fatalf("failed to create Dapr client: %v", err)
}
client := client.NewTaskHubGrpcClient(daprClient.GrpcClientConn(), backend.DefaultLogger())
if err := client.StartWorkItemListener(context.TODO(), registry); err != nil {
log.Fatalf("failed to start work item listener: %v", err)
}
fmt.Println("runner started")
daprClient, err := client.NewClient()
if err != nil {
log.Fatalf("failed to intialise client: %v", err)
}
defer daprClient.Close()
ctx := context.Background()
// Start workflow test
respStart, err := daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
WorkflowName: "TestWorkflow",
Options: nil,
Input: 1,
SendRawInput: false,
})
id, err := client.ScheduleNewOrchestration(ctx, "TestWorkflow", api.WithInput(1))
if err != nil {
log.Fatalf("failed to start workflow: %v", err)
}
fmt.Printf("workflow started with id: %v\n", respStart.InstanceID)
fmt.Printf("workflow started with id: %v\n", id)
// Pause workflow test
err = daprClient.PauseWorkflow(ctx, &client.PauseWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
err = client.SuspendOrchestration(ctx, id, "")
if err != nil {
log.Fatalf("failed to pause workflow: %v", err)
}
respGet, err := daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
respGet, err := client.FetchOrchestrationMetadata(ctx, id)
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
if respGet.RuntimeStatus != workflow.StatusSuspended.String() {
log.Fatalf("workflow not paused: %v", respGet.RuntimeStatus)
}
fmt.Printf("workflow paused\n")
fmt.Printf("workflow paused: %s\n", respGet.RuntimeStatus)
// Resume workflow test
err = daprClient.ResumeWorkflow(ctx, &client.ResumeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
err = client.ResumeOrchestration(ctx, id, "")
if err != nil {
log.Fatalf("failed to resume workflow: %v", err)
}
fmt.Printf("workflow running: %s\n", respGet.RuntimeStatus)
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
respGet, err = client.FetchOrchestrationMetadata(ctx, id)
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
if respGet.RuntimeStatus != workflow.StatusRunning.String() {
log.Fatalf("workflow not running")
}
fmt.Println("workflow resumed")
fmt.Printf("workflow resumed: %s\n", respGet.RuntimeStatus)
fmt.Printf("stage: %d\n", stage)
// Raise Event Test
err = daprClient.RaiseEventWorkflow(ctx, &client.RaiseEventWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
EventName: "testEvent",
EventData: "testData",
SendRawData: false,
})
err = client.RaiseEvent(ctx, id, "testEvent", api.WithEventPayload("testData"))
if err != nil {
fmt.Printf("failed to raise event: %v", err)
}
@ -1002,10 +964,7 @@ func main() {
fmt.Printf("stage: %d\n", stage)
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
respGet, err = client.FetchOrchestrationMetadata(ctx, id)
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
@ -1013,166 +972,36 @@ func main() {
fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus)
// Purge workflow test
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
err = client.PurgeOrchestrationState(ctx, id)
if err != nil {
log.Fatalf("failed to purge workflow: %v", err)
}
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil && respGet != nil {
log.Fatal("failed to purge workflow")
}
fmt.Println("workflow purged")
fmt.Printf("stage: %d\n", stage)
// Terminate workflow test
respStart, err = daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
WorkflowName: "TestWorkflow",
Options: nil,
Input: 1,
SendRawInput: false,
})
if err != nil {
log.Fatalf("failed to start workflow: %v", err)
}
fmt.Printf("workflow started with id: %s\n", respStart.InstanceID)
err = daprClient.TerminateWorkflow(ctx, &client.TerminateWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to terminate workflow: %v", err)
}
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
if respGet.RuntimeStatus != workflow.StatusTerminated.String() {
log.Fatal("failed to terminate workflow")
}
fmt.Println("workflow terminated")
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
if err == nil || respGet != nil {
log.Fatalf("failed to purge workflow: %v", err)
}
fmt.Println("workflow purged")
stage = 0
fmt.Println("workflow client test")
wfClient, err := workflow.NewClient()
if err != nil {
log.Fatalf("[wfclient] faield to initialize: %v", err)
}
id, err := wfClient.ScheduleNewWorkflow(ctx, "TestWorkflow", workflow.WithInstanceID("a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9"), workflow.WithInput(1))
if err != nil {
log.Fatalf("[wfclient] failed to start workflow: %v", err)
}
fmt.Printf("[wfclient] started workflow with id: %s\n", id)
metadata, err := wfClient.FetchWorkflowMetadata(ctx, id)
if err != nil {
log.Fatalf("[wfclient] failed to get worfklow: %v", err)
}
fmt.Printf("[wfclient] workflow status: %v\n", metadata.RuntimeStatus.String())
if stage != 1 {
log.Fatalf("Workflow assertion failed while validating the wfclient. Stage 1 expected, current: %d", stage)
}
fmt.Printf("[wfclient] stage: %d\n", stage)
// raise event
if err := wfClient.RaiseEvent(ctx, id, "testEvent", workflow.WithEventPayload("testData")); err != nil {
log.Fatalf("[wfclient] failed to raise event: %v", err)
}
fmt.Println("[wfclient] event raised")
// Sleep to allow the workflow to advance
time.Sleep(time.Second)
if stage != 2 {
log.Fatalf("Workflow assertion failed while validating the wfclient. Stage 2 expected, current: %d", stage)
}
fmt.Printf("[wfclient] stage: %d\n", stage)
// stop workflow
if err := wfClient.TerminateWorkflow(ctx, id); err != nil {
log.Fatalf("[wfclient] failed to terminate workflow: %v", err)
}
fmt.Println("[wfclient] workflow terminated")
if err := wfClient.PurgeWorkflow(ctx, id); err != nil {
log.Fatalf("[wfclient] failed to purge workflow: %v", err)
}
fmt.Println("[wfclient] workflow purged")
// stop workflow runtime
if err := w.Shutdown(); err != nil {
log.Fatalf("failed to shutdown runtime: %v", err)
}
fmt.Println("workflow worker successfully shutdown")
}
func TestWorkflow(ctx *workflow.WorkflowContext) (any, error) {
func TestWorkflow(ctx *task.OrchestrationContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return nil, err
}
var output string
if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
if err := ctx.CallActivity(TestActivity, task.WithActivityInput(input)).Await(&output); err != nil {
return nil, err
}
err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&output)
err := ctx.WaitForSingleEvent("testEvent", time.Second*60).Await(&output)
if err != nil {
return nil, err
}
if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil {
if err := ctx.CallActivity(TestActivity, task.WithActivityInput(input)).Await(&output); err != nil {
return nil, err
}
return output, nil
}
func TestActivity(ctx workflow.ActivityContext) (any, error) {
func TestActivity(ctx task.ActivityContext) (any, error) {
var input int
if err := ctx.GetInput(&input); err != nil {
return "", err

View File

@ -381,8 +381,13 @@ Learn more about these HTTP calls in the [workflow API reference guide]({{% ref
{{< /tabpane >}}
## Next steps
Now that you've learned how to manage workflows, learn how to execute workflows across multiple applications.
{{< button text="Multi Application Workflows >>" page="workflow-multi-app.md" >}}
## Related links
- [Try out the Workflow quickstart]({{% ref workflow-quickstart.md %}})
- Try out the full SDK examples:
- [Python example](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py)

View File

@ -33,50 +33,66 @@ The workflow app executes the appropriate workflow code and then sends a gRPC re
<img src="/images/workflow-overview/workflow-engine-protocol.png" width=500 alt="Dapr Workflow Engine Protocol" />
All interactions happen over a single gRPC channel and are initiated by the application, which means the application doesn't need to open any inbound ports. The details of these interactions are internally handled by the language-specific Dapr Workflow authoring SDK.
All interactions happen over a single gRPC channel and are initiated by the application, which means the application doesn't need to open any inbound ports.
The details of these interactions are internally handled by the language-specific Dapr Workflow authoring SDK.
### Differences between workflow and actor sidecar interactions
### Differences between workflow and application actor interactions
If you're familiar with Dapr actors, you may notice a few differences in terms of how sidecar interactions works for workflows compared to actors.
If you're familiar with Dapr actors, you may notice a few differences in how sidecar interactions work for workflows compared to application-defined actors.
| Actors | Workflows |
| ------ | --------- |
| Actors can interact with the sidecar using either HTTP or gRPC. | Workflows only use gRPC. Due to the workflow gRPC protocol's complexity, an SDK is _required_ when implementing workflows. |
| Actors created by the application can interact with the sidecar using either HTTP or gRPC. | Workflows only use gRPC. Due to the workflow gRPC protocol's complexity, an SDK is _required_ when implementing workflows. |
| Actor operations are pushed to application code from the sidecar. This requires the application to listen on a particular _app port_. | For workflows, operations are _pulled_ from the sidecar by the application using a streaming protocol. The application doesn't need to listen on any ports to run workflows. |
| Actors explicitly register themselves with the sidecar. | Workflows do not register themselves with the sidecar. The embedded engine doesn't keep track of workflow types. This responsibility is instead delegated to the workflow application and its SDK. |
## Workflow distributed tracing
The `durabletask-go` core used by the workflow engine writes distributed traces using Open Telemetry SDKs. These traces are captured automatically by the Dapr sidecar and exported to the configured Open Telemetry provider, such as Zipkin.
The [`durabletask-go`](https://github.com/dapr/durabletask-go) core used by the workflow engine writes distributed traces using Open Telemetry SDKs.
These traces are captured automatically by the Dapr sidecar and exported to the configured Open Telemetry provider, such as Zipkin.
Each workflow instance managed by the engine is represented as one or more spans. There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers.
Each workflow instance managed by the engine is represented as one or more spans.
There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers.
> Workflow activity code currently **does not** have access to the trace context.
## Internal workflow actors
## Workflow actors
There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine:
Upon the workflow client connecting to the sidecar, there are two types of actors that are registered in support of the workflow engine:
- `dapr.internal.{namespace}.{appID}.workflow`
- `dapr.internal.{namespace}.{appID}.activity`
The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured. The `{appID}` value is the app's ID. For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`.
The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured.
The `{appID}` value is the app's ID.
For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`.
The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario:
The following diagram demonstrates how workflow actors operate in a Kubernetes scenario:
<img src="/images/workflow-overview/workflow-execution.png" alt="Diagram demonstrating internally registered actors across a cluster" />
Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.
Just like user-defined actors, workflow actors are distributed across the cluster by the hashing lookup table provided by the actor placement service.
They also maintain their own state and make use of reminders.
However, unlike actors that live in application code, these workflow actors are embedded into the Dapr sidecar.
Application code is completely unaware that these actors exist.
{{% alert title="Note" color="primary" %}}
The internal workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK. If an app never registers a workflow, then the internal workflow actors are never registered.
The workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK.
If an app never registers a workflow, then the internal workflow actors are never registered.
{{% /alert %}}
### Workflow actors
There are 2 different types of actors used with workflows: workflow actors and activity actors. Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
There are 2 different types of actors used with workflows: workflow actors and activity actors.
Workflow actors are responsible for managing the state and placement of all workflows running in the app.
A new instance of the workflow actor is activated for every workflow instance that gets scheduled.
The ID of the workflow actor is the ID of the workflow.
This workflow actor stores the state of the workflow as it progresses, and determines the node on which the workflow code executes via the actor lookup table.
Each workflow actor saves its state using the following keys in the configured state store:
As workflows are based on actors, all workflow and activity work is randomly distributed across all replicas of the application implementing workflows.
There is no locality or relationship between where a workflow is started and where each work item is executed.
Each workflow actor saves its state using the following keys in the configured actor state store:
| Key | Description |
| --- | ----------- |
@ -86,7 +102,9 @@ Each workflow actor saves its state using the following keys in the configured s
| `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
{{% alert title="Warning" color="warning" %}}
Workflow actor state remains in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. To address this either purge workflows using their ID or directly delete entries in the workflow DB store.
Workflow actor state remains in the state store even after a workflow has completed.
Creating a large number of workflows could result in unbounded storage usage.
To address this either purge workflows using their ID or directly delete entries in the workflow DB store.
{{% /alert %}}
The following diagram illustrates the typical lifecycle of a workflow actor.
@ -103,13 +121,13 @@ To summarize:
### Activity actors
Activity actors are responsible for managing the state and placement of all workflow activity invocations. A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow. The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371` and is the third activity to be scheduled by the workflow, it's ID will be `876bf371::2` where `2` is the sequence number.
Activity actors are responsible for managing the state and placement of all workflow activity invocations.
A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow.
The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0), as well as the "generation" (incremented each time the workflow instance is rerun using `continue as new`).
For example, if a workflow has an ID of `876bf371` and the activity is the third to be scheduled by the workflow, its ID will be `876bf371::2::1`, where `2` is the sequence number and `1` is the generation.
If the activity is scheduled again after a `continue as new`, the ID will be `876bf371::2::2`.
Each activity actor stores a single key into the state store:
| Key | Description |
| --- | ----------- |
| `activityState` | The key contains the activity invocation payload, which includes the serialized activity input data. This key is deleted automatically after the activity invocation has completed. |
No state is stored by activity actors, and instead all resulting data is sent back to the parent workflow actor.
The following diagram illustrates the typical lifecycle of an activity actor.
@ -118,39 +136,49 @@ The following diagram illustrates the typical lifecycle of an activity actor.
Activity actors are short-lived:
1. Activity actors are activated when a workflow actor schedules an activity task.
1. Activity actors then immediately call into the workflow application to invoke the associated activity code.
1. Once the activity code has finished running and has returned its result, the activity actor sends a message to the parent workflow actor with the execution results.
1. The activity actor then deactivates itself.
1. Once the results are sent, the workflow is triggered to move forward to its next step.
### Reminder usage and execution guarantees
The Dapr Workflow ensures workflow fault-tolerance by using [actor reminders]({{% ref "../actors/actors-timers-reminders.md##actor-reminders" %}}) to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor will create a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder will reactivate the corresponding actor and the execution will be retried.
The Dapr Workflow ensures workflow fault-tolerance by using [actor reminders]({{% ref "../actors/actors-timers-reminders.md##actor-reminders" %}}) to recover from transient system failures.
Prior to invoking application workflow code, the workflow or activity actor will create a new reminder.
These reminders are made "one shot", meaning that they will expire after successful triggering.
If the application code executes without interruption, the reminder is triggered and expired.
However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder reactivates the corresponding actor and the execution is retried indefinitely.
<img src="/images/workflow-overview/workflow-actor-reminder-flow.png" width=600 alt="Diagram showing the process of invoking workflow actors"/>
{{% alert title="Important" color="warning" %}}
Too many active reminders in a cluster may result in performance issues. If your application is already using actors and reminders heavily, be mindful of the additional load that Dapr Workflows may add to your system.
{{% /alert %}}
### State store usage
Dapr Workflows use actors internally to drive the execution of workflows. Like any actors, these internal workflow actors store their state in the configured state store. Any state store that supports actors implicitly supports Dapr Workflow.
Dapr Workflows use actors internally to drive the execution of workflows.
Like any actors, these workflow actors store their state in the configured actor state store.
Any state store that supports actors implicitly supports Dapr Workflow.
As discussed in the [workflow actors]({{% ref "workflow-architecture.md#workflow-actors" %}}) section, workflows save their state incrementally by appending to a history log. The history log for a workflow is distributed across multiple state store keys so that each "checkpoint" only needs to append the newest entries.
As discussed in the [workflow actors]({{% ref "workflow-architecture.md#workflow-actors" %}}) section, workflows save their state incrementally by appending to a history log.
The history log for a workflow is distributed across multiple state store keys so that each "checkpoint" only needs to append the newest entries.
The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state. [Sequential workflows]({{% ref "workflow-overview.md#task-chaining" %}}) will therefore make smaller batch updates to the state store, while [fan-out/fan-in workflows]({{% ref "workflow-overview.md#fan-outfan-in" %}}) will require larger batches. The size of the batch is also impacted by the size of inputs and outputs when workflows [invoke activities]({{% ref "workflow-features-concepts.md#workflow-activities" %}}) or [child workflows]({{% ref "workflow-features-concepts.md#child-workflows" %}}).
The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state.
[Sequential workflows]({{% ref "workflow-overview.md#task-chaining" %}}) will therefore make smaller batch updates to the state store, while [fan-out/fan-in workflows]({{% ref "workflow-overview.md#fan-outfan-in" %}}) will require larger batches.
The size of the batch is also impacted by the size of inputs and outputs when workflows [invoke activities]({{% ref "workflow-features-concepts.md#workflow-activities" %}}) or [child workflows]({{% ref "workflow-features-concepts.md#child-workflows" %}}).
<img src="/images/workflow-overview/workflow-state-store-interactions.png" width=600 alt="Diagram of workflow actor state store interactions"/>
Different state store implementations may implicitly put restrictions on the types of workflows you can author.
For example, the Azure Cosmos DB state store limits item sizes to 2 MB of UTF-8 encoded JSON ([source](https://learn.microsoft.com/azure/cosmos-db/concepts-limits#per-item-limits)).
The input or output payload of an activity or child workflow is stored as a single record in the state store, so an item limit of 2 MB means that workflow and activity inputs and outputs can't exceed 2 MB of JSON-serialized data.
Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.
Workflow state can be purged from a state store, including all its history.
Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances.
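For example, a minimal Python sketch of purging a finished instance, assuming the Python SDK's `DaprWorkflowClient.purge_workflow` method and a hypothetical instance ID:

```python
from dapr.ext.workflow import DaprWorkflowClient

# Hypothetical ID of a workflow instance that has already completed, failed, or been terminated.
instance_id = "order-12345"

client = DaprWorkflowClient()
# Removes the instance's history log and metadata from the configured state store.
client.purge_workflow(instance_id)
```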
## Workflow scalability
Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors.
The placement service:
- Doesn't distinguish between workflow actors and actors you define in your application
- Will load balance workflows using the same algorithms that it uses for actors
The expected scalability of a workflow is determined by the following factors:
- The scalability of the state store configured for actors
- The scalability of the actor placement service and the reminder subsystem
The implementation details of the workflow code in the target application also play a role in the scalability of individual workflow instances.
Each workflow instance executes on a single node at a time, but a workflow can schedule activities and child workflows which run on other nodes.
Workflows can also schedule these activities and child workflows to run in parallel, allowing a single workflow to potentially distribute compute tasks across all available nodes in the cluster.
<img src="/images/workflow-overview/workflow-actor-scale-out.png" width=800 alt="Diagram of workflow and activity actors scaled out across multiple Dapr instances"/>
{{% alert title="Important" color="warning" %}}
By default, there are no global limits imposed on workflow and activity concurrency.
A runaway workflow could therefore potentially consume all resources in a cluster if it attempts to schedule too many tasks in parallel.
Use care when authoring Dapr Workflows that schedule large batches of work in parallel.
{{% /alert %}}
You can configure the maximum concurrent workflows and activities that can be executed at any one time with the following configuration.
These limits are imposed on a _per-sidecar_ basis, meaning that if you have 10 replicas of your workflow app, the effective limit is 10 times the configured value.
These limits do not distinguish between different workflow or activity definitions.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  workflow:
    maxConcurrentWorkflowInvocations: 100 # Default is infinite
    maxConcurrentActivityInvocations: 1000 # Default is infinite
```
{{% alert title="Important" color="warning" %}}
The Dapr Workflow engine requires that all instances of a workflow app register the exact same set of workflows and activities.
In other words, it's not possible to scale certain workflows or activities independently.
All workflows and activities within an app must be scaled together.
{{% /alert %}}
Workflows don't control the specifics of how load is distributed across the cluster.
For example, if a workflow schedules 10 activity tasks to run in parallel, all 10 tasks may run on as many as 10 different compute nodes or as few as a single compute node.
The actual scale behavior is determined by the actor placement service, which manages the distribution of the actors that represent each of the workflow's tasks.
## Workflow latency
In order to provide guarantees around durability and resiliency, Dapr Workflows frequently write to the state store and rely on reminders to drive execution.
Dapr Workflows therefore may not be appropriate for latency-sensitive workloads.
Expected sources of high latency include:
- Latency from the state store when persisting workflow state.
- Latency from the state store when rehydrating workflows with large histories.
See the [Reminder usage and execution guarantees section]({{% ref "workflow-architecture.md#reminder-usage-and-execution-guarantees" %}}) for more details on how the design of workflow actors may impact execution latency.
## Increasing scheduling throughput
By default, when a client schedules a workflow, the workflow engine waits for the workflow to be fully started before returning a response to the client.
Waiting for the workflow to start before returning can decrease the scheduling throughput of workflows.
When scheduling a workflow with a start time, the workflow engine does not wait for the workflow to start before returning a response to the client.
To increase scheduling throughput, consider adding a start time of "now" when scheduling a workflow.
An example of scheduling a workflow with a start time of "now" in the Go SDK is shown below:
```go
client.ScheduleNewWorkflow(ctx, "MyCoolWorkflow", workflow.WithStartTime(time.Now()))
```
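A rough equivalent in the Python SDK might look like the following sketch, assuming `schedule_new_workflow` accepts a `start_at` argument and `my_cool_workflow` is a workflow you have registered:

```python
from datetime import datetime, timezone

import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.workflow(name="my_cool_workflow")
def my_cool_workflow(ctx: wf.DaprWorkflowContext, _input):
    return "done"

client = wf.DaprWorkflowClient()
# An explicit start time of "now" lets the engine return immediately
# instead of waiting for the workflow to be fully started.
instance_id = client.schedule_new_workflow(
    workflow=my_cool_workflow,
    start_at=datetime.now(timezone.utc),
)
```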
## Workflows clustered deployment when using Dapr Shared
{{% alert title="Note" color="primary" %}}
The following feature is only available when the [Workflows Clustered Deployment preview feature is enabled]({{% ref "preview-features.md" %}}).
{{% /alert %}}
When using [Dapr Shared]({{% ref "kubernetes-dapr-shared" %}}), there may be multiple daprd sidecars running behind a single load balancer or service.
As a result, the sidecar instance from which a worker receives work may not be the same instance that receives the work's result.
To handle this scenario, Dapr creates a third actor type, `dapr.internal.{namespace}.{appID}.executor`, which routes worker results back to the correct workflow actor and ensures correct operation.
## Next steps
{{< button text="Author workflows >>" page="howto-author-workflow.md" >}}
description: "Learn more about the Dapr Workflow features and concepts"
---
Now that you've learned about the [workflow building block]({{% ref workflow-overview.md %}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages.
{{% alert title="Note" color="primary" %}}
For more information on how workflow state is managed, see the [workflow architecture guide]({{% ref workflow-architecture.md %}}).
## Workflows
Dapr Workflows are functions you write that define a series of tasks to be executed in a particular order.
The Dapr Workflow engine takes care of scheduling and execution of the tasks, including managing failures and retries.
If the app hosting your workflows is scaled out across multiple machines, the workflow engine load balances the execution of workflows and their tasks across multiple machines.
There are several different kinds of tasks that a workflow can schedule, including:
- [Activities]({{% ref "workflow-features-concepts.md#workflow-activities" %}}) for executing custom logic
Dapr Workflows maintain their execution state by using a technique known as [event sourcing](https://learn.microsoft.com/azure/architecture/patterns/event-sourcing). Instead of storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow SDK, these history events are stored automatically whenever the workflow "awaits" for the result of a scheduled task.
When a workflow "awaits" a scheduled task, it unloads itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second workflow function execution is known as a _replay_.
When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that already completed, instead of scheduling that task again, the workflow engine:
You can use the following two techniques to write workflows that may need to schedule extreme numbers of tasks:
1. **Use the _continue-as-new_ API**:
Each workflow SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows", like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small.
> The _continue-as-new_ API truncates the existing history, replacing it with a new history.
1. **Use child workflows**:
Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it's scheduled by a parent workflow. Child workflows have:
- Their own history
- The benefit of distributing workflow function execution across multiple machines.
If a workflow needs to schedule thousands of tasks or more, it's recommended that those tasks be distributed across child workflows so that no single workflow's history size grows too large.
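As a rough Python sketch of both techniques (hypothetical workflow and activity names; the exact method names may differ slightly between SDK versions):

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name="process_batch")
def process_batch(ctx: wf.WorkflowActivityContext, batch):
    return f"processed {batch}"

# Technique 1: an "eternal" workflow that restarts itself with a fresh, truncated history.
@wfr.workflow(name="batch_monitor")
def batch_monitor(ctx: wf.DaprWorkflowContext, batch_number: int):
    yield ctx.call_activity(process_batch, input=batch_number)
    ctx.continue_as_new(batch_number + 1)

# Technique 2: a parent that spreads large amounts of work across child workflows,
# each of which keeps its own, smaller history.
@wfr.workflow(name="child_batch")
def child_batch(ctx: wf.DaprWorkflowContext, chunk):
    result = yield ctx.call_activity(process_batch, input=chunk)
    return result

@wfr.workflow(name="parent")
def parent(ctx: wf.DaprWorkflowContext, chunks: list):
    results = yield wf.when_all(
        [ctx.call_child_workflow(child_batch, input=chunk) for chunk in chunks]
    )
    return results
```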
### Updating workflow code
Learn more about [external system interaction.]({{% ref "workflow-patterns.md#external-system-interaction" %}})
## Workflow backend
Dapr Workflow relies on the Durable Task Framework for Go (a.k.a. [durabletask-go](https://github.com/dapr/durabletask-go)) as the core engine for executing workflows. This engine is designed to support multiple backend implementations. For example, the [durabletask-go](https://github.com/dapr/durabletask-go) repo includes a SQLite implementation and the Dapr repo includes an Actors implementation.
By default, Dapr Workflow supports the Actors backend, which is stable and scalable. However, you can choose a different backend supported in Dapr Workflow. For example, [SQLite](https://github.com/dapr/durabletask-go/tree/main/backend/sqlite) (TBD, future release) could be an option as a backend for local development and testing.
The backend implementation is largely decoupled from the workflow core engine or the programming model that you see. The backend primarily impacts:
- How workflow state is stored
- How workflow execution is coordinated across replicas
In that sense, it's similar to Dapr's state store abstraction, except designed specifically for workflow. All APIs and programming model features are the same, regardless of which backend is used.
## Purging
Workflow state can be purged from a state store, removing all of its history and all metadata related to a specific workflow instance. The purge capability is used for workflows that have run to a `COMPLETED`, `FAILED`, or `TERMINATED` state.
Learn more in [the workflow API reference guide]({{% ref workflow_api.md %}}).
## Limitations
### Workflow determinism and code restraints
To take advantage of the workflow replay technique, your workflow code needs to be deterministic. For your workflow code to be deterministic, you may need to work around some limitations.
#### Workflow functions must call deterministic APIs.
APIs that generate random numbers, random UUIDs, or the current date are _non-deterministic_. To work around this limitation, you can:
- Use these APIs in activity functions, or
- (Preferred) Use built-in equivalent APIs offered by the SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner.
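As a minimal Python sketch of this rule, assuming the Python SDK's `current_utc_datetime` property on the workflow context:

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.workflow(name="deterministic_time")
def deterministic_time(ctx: wf.DaprWorkflowContext, _input):
    # Don't: datetime.now() would return a different value on every replay.
    # Do: the context property returns the timestamp recorded in the workflow history.
    scheduled_at = ctx.current_utc_datetime
    return scheduled_at.isoformat()
```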
#### Workflow functions must only interact _indirectly_ with external state.
External data includes any data that isn't stored in the workflow state. Workflows must not interact with global variables, environment variables, the file system, or make network calls.
Instead, workflows should interact with external state _indirectly_ using workflow inputs, activity tasks, and through external event handling.
For example, instead of reading external state or making a network call directly inside the workflow function, move that interaction into an activity.
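As a rough Python-only sketch of this rule, with a hypothetical `make_http_call` activity standing in for any network or file-system access:

```python
import dapr.ext.workflow as wf
import requests

wfr = wf.WorkflowRuntime()

@wfr.activity(name="make_http_call")
def make_http_call(ctx: wf.WorkflowActivityContext, url: str) -> int:
    # Network I/O belongs in an activity, not in the workflow function.
    return requests.get(url).status_code

@wfr.workflow(name="check_endpoint")
def check_endpoint(ctx: wf.DaprWorkflowContext, url: str):
    # Don't: calling requests.get(url) here would make the workflow non-deterministic.
    # Do: delegate the call to an activity and await its recorded result.
    status = yield ctx.call_activity(make_http_call, input=url)
    return status
```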
#### Workflow functions must execute only on the workflow dispatch thread.
The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never:
- Schedule background threads, or
- Use APIs that schedule a callback function to run on another thread.
Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.
For example, instead of spawning a background thread inside the workflow function, delegate the work to activity tasks.
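As a rough Python sketch of delegating background work to activities instead of threads (hypothetical `resize_image` activity):

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name="resize_image")
def resize_image(ctx: wf.WorkflowActivityContext, image_id: str) -> str:
    # Heavy work runs here, outside the workflow dispatch thread.
    return f"resized-{image_id}"

@wfr.workflow(name="process_images")
def process_images(ctx: wf.DaprWorkflowContext, image_ids: list):
    # Don't: start threading.Thread(...) or schedule callbacks inside the workflow function.
    # Do: schedule activities and let the engine run them serially or concurrently.
    tasks = [ctx.call_activity(resize_image, input=image_id) for image_id in image_ids]
    results = yield wf.when_all(tasks)
    return results
```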
Make sure updates you make to the workflow code maintain its determinism. A couple examples of code updates that can break workflow determinism:
- **Changing workflow function signatures**:
Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
- **Changing the number or order of workflow tasks**:
Changing the number or order of workflow tasks causes a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.
To work around these constraints:
- Instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates.
- Upstream code that creates workflows should only be updated to create instances of the new workflows.
- Leave the old code around to ensure that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, then the old workflow code can be safely deleted.
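A minimal Python sketch of this approach, using hypothetical workflow and activity names:

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name="reserve_inventory")
def reserve_inventory(ctx: wf.WorkflowActivityContext, order):
    return "reserved"

@wfr.activity(name="process_payment")
def process_payment(ctx: wf.WorkflowActivityContext, order):
    return "paid"

@wfr.activity(name="fraud_check")
def fraud_check(ctx: wf.WorkflowActivityContext, order):
    return "ok"

# The original definition stays registered so in-flight instances can finish replaying.
@wfr.workflow(name="order_processing")
def order_processing(ctx: wf.DaprWorkflowContext, order):
    yield ctx.call_activity(reserve_inventory, input=order)
    yield ctx.call_activity(process_payment, input=order)
    return "done"

# New instances are created against a new definition that contains the updated logic.
@wfr.workflow(name="order_processing_v2")
def order_processing_v2(ctx: wf.DaprWorkflowContext, order):
    yield ctx.call_activity(reserve_inventory, input=order)
    yield ctx.call_activity(fraud_check, input=order)  # new step
    yield ctx.call_activity(process_payment, input=order)
    return "done"
```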
## Next steps
{{< button text="Workflow patterns >>" page="workflow-patterns.md" >}}
---
type: docs
title: Multi Application Workflows
linkTitle: Multi Application Workflows
weight: 7000
description: "Executing workflows across multiple applications"
---
It is often the case that a single workflow spans multiple applications, microservices, or programming languages.
In these cases, an activity or a child workflow is executed on a different application than the one hosting the parent workflow.
Some scenarios where this is useful include:
- A Machine Learning (ML) training activity must be executed on GPU-enabled machines, while the rest of the workflow runs on CPU-only orchestration machines.
- Activities need access to sensitive data or credentials that are only available to particular identities or locales.
- Different parts of the workflow need to be executed in different trust zones or networks.
- Different parts of the workflow need to be executed in different geographic regions due to data residency requirements.
- An involved business process spans multiple teams or departments, each owning their own application.
- Implementation of a workflow spans different programming languages based on team expertise or existing codebases.
- Different team boundaries or microservice ownership.
## Multi-application workflows
Like all building blocks in Dapr, workflow execution routing is based on the [App ID of the hosting Dapr application]({{% ref "security-concept.md#application-identity" %}}).
By default, the full workflow execution is hosted on the app ID that started the workflow.
This workflow will be executed across all replicas of that app ID, not just the single replica which scheduled the workflow.
It is possible to execute activities or child workflows on different app IDs by specifying the target app ID parameter inside the workflow execution code.
Upon execution, the target app ID will execute the activity or child workflow, and return the result to the parent workflow of the originating app ID.
The entire Workflow execution may be distributed across multiple app IDs with no limit, with each activity or child workflow specifying the target app ID.
The final history of the workflow is saved by the app ID that hosts the top-level parent (root) workflow.
{{% alert title="Restrictions" color="primary" %}}
Like other building blocks and resources in Dapr, workflows are scoped to a single namespace.
This means that all app IDs involved in a multi-application workflow must be in the same namespace.
Similarly, all app IDs must use the same actor state store.
Finally, the target app ID must have the activity or child workflow defined, otherwise the parent workflow will retry indefinitely.
{{% /alert %}}
{{% alert title="Important Limitations" color="warning" %}}
- **SDKs supporting multi-application workflows** - Multi-application workflows are used via the SDKs. Currently, the Java SDK (calling activities) and the Go SDK (calling both activities and child workflows) are supported. Support in the remaining SDKs (Python, .NET, JavaScript) is planned for future releases.
{{% /alert %}}
## Error handling
When calling multi-application activities or child workflows:
- If the target application does not exist, the call will be retried using the provided retry policy.
- If the target application exists but doesn't contain the specified activity or workflow, the call will return an error.
- Standard workflow retry policies apply to multi-application calls.
It is paramount that the teams owning the different app IDs coordinate to ensure that the activities and child workflows are defined and available when needed.
## Multi-application activity example
<img src="/images/workflow-overview/workflow-multi-app-callactivity.png" width=800 alt="Diagram showing multi-application call activity workflow pattern">
The following example shows how to execute the activity `ActivityA` on the target app `App2`.
{{< tabpane text=true >}}
{{% tab "Go" %}}
```go
func TestWorkflow(ctx *task.OrchestrationContext) (any, error) {
var output string
err := ctx.CallActivity("ActivityA",
task.WithActivityInput("my-input"),
task.WithActivityAppID("App2"), // Here we set the target app ID which will execute this activity.
).Await(&output)
if err != nil {
return nil, err
}
return output, nil
}
```
{{% /tab %}}
{{% tab "Java" %}}
```java
public class CrossAppWorkflow implements Workflow {
@Override
public WorkflowStub create() {
return ctx -> {
String output = ctx.callActivity(
"ActivityA",
"my-input",
new WorkflowTaskOptions("App2"), // Here we set the target app ID which will execute this activity.
String.class
).await();
ctx.complete(output);
};
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Multi-application child workflow example
<img src="/images/workflow-overview/workflow-multi-app-child-workflow.png" width=800 alt="Diagram showing multi-application child workflow pattern">
The following example shows how to execute the child workflow `Workflow2` on the target app `App2`.
{{< tabpane text=true >}}
{{% tab "Go" %}}
```go
func TestWorkflow(ctx *task.OrchestrationContext) (any, error) {
var output string
err := ctx.CallSubOrchestrator("Workflow2",
task.WithSubOrchestratorInput("my-input"),
task.WithSubOrchestratorAppID("App2"), // Here we set the target app ID which will execute this child workflow.
).Await(&output)
if err != nil {
return nil, err
}
return output, nil
}
```
{{% /tab %}}
{{< /tabpane >}}
## Related links
- [Try out Dapr Workflows using the quickstart]({{% ref workflow-quickstart.md %}})
- [Workflow overview]({{% ref workflow-overview.md %}})
- [Workflow API reference]({{% ref workflow_api.md %}})
- Try out the following examples:
- [Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow)
- [JavaScript](https://github.com/dapr/js-sdk/tree/main/examples/workflow)
- [.NET](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- [Java](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows)
- [Go](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md)
description: "Overview of Dapr Workflow"
---
Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way.
Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices.
Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings.
The durable, resilient Dapr Workflow capability:
- Offers a built-in workflow runtime for driving Dapr Workflow execution.
- Provides SDKs for authoring workflows in code, using any language.
- Provides HTTP and gRPC APIs for managing workflows (start, query, pause/resume, raise event, terminate, purge).
- Integrates with any other workflow runtime via workflow components.
<img src="/images/workflow-overview/workflow-overview.png" width=800 alt="Diagram showing basics of Dapr Workflow">
### Workflows and activities
With Dapr Workflow, you can write activities and then orchestrate those activities in a workflow.
Workflow activities are:
- The basic unit of work in a workflow
- Used for calling other (Dapr) services, interacting with state stores, and pub/sub brokers.
- Used for calling external third party services.
[Learn more about workflow activities.]({{% ref "workflow-features-concepts.md#workflow-activities" %}})
### Child workflows
In addition to activities, you can write workflows to schedule other workflows as child workflows.
A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it, except for the fact that terminating the parent workflow terminates all of the child workflows created by it.
Child workflows also support automatic retry policies.
[Learn more about child workflows.]({{% ref "workflow-features-concepts.md#child-workflows" %}})
### Multi-application workflows
Multi-application workflows enable you to orchestrate complex business processes that span multiple applications. This allows a workflow to call activities or start child workflows in different applications, distributing the workflow execution while maintaining the security, reliability, and durability guarantees of Dapr's workflow engine.
[Learn more about multi-application workflows.]({{% ref "workflow-multi-app.md" %}})
### Timers and reminders
As with Dapr actors, you can schedule reminder-like durable delays for any time range.
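For example, a minimal Python sketch of a durable 24-hour delay, assuming the Python SDK's `create_timer` method and a hypothetical `send_reminder` activity:

```python
from datetime import timedelta

import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name="send_reminder")
def send_reminder(ctx: wf.WorkflowActivityContext, message: str):
    print(message)

@wfr.workflow(name="delayed_reminder")
def delayed_reminder(ctx: wf.DaprWorkflowContext, _input):
    # The delay is backed by a reminder, so it survives process restarts.
    yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(hours=24))
    yield ctx.call_activity(send_reminder, input="24 hours have passed")
```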
### Workflow HTTP calls to manage a workflow
When you create an application with workflow code and run it with Dapr, you can call specific workflows that reside in the application.
Each individual workflow can be:
- Started or terminated through a POST request
- Triggered to deliver a named event through a POST request
## Workflow patterns
Dapr Workflow simplifies complex, stateful coordination requirements in microservice architectures.
The following sections describe several application patterns that can benefit from Dapr Workflow.
Learn more about [different types of workflow patterns]({{% ref workflow-patterns.md %}})
## Workflow SDKs
The Dapr Workflow _authoring SDKs_ are language-specific SDKs that contain types and functions to implement workflow logic.
The workflow logic lives in your application and is orchestrated by the Dapr Workflow engine running in the Dapr sidecar via a gRPC stream.
### Supported SDKs
{{< tabpane text=true >}}
{{% tab "Python" %}}
<!--python-->
```python
import dapr.ext.workflow as wf
@ -65,7 +64,7 @@ def step3(ctx, activity_input):
def error_handler(ctx, error):
print(f'Executing error handler: {error}.')
# Apply some compensating work
```
> **Note** Workflow retry policies will be available in a future version of the Python SDK.
@ -73,7 +72,6 @@ def error_handler(ctx, error):
{{% /tab %}}
{{% tab "JavaScript" %}}
<!--javascript-->
```javascript
import { DaprWorkflowClient, WorkflowActivityContext, WorkflowContext, WorkflowRuntime, TWorkflow } from "@dapr/dapr";
@ -141,13 +139,13 @@ async function start() {
start().catch((e) => {
console.error(e);
process.exit(1);
});
```
{{% /tab %}}
{{% tab ".NET" %}}
<!--dotnet-->
```csharp
// Exponential backoff retry policy that survives long outages
@ -180,7 +178,6 @@ catch (TaskFailedException) // Task failures are surfaced as TaskFailedException
{{% /tab %}}
{{% tab "Java" %}}
<!--java-->
```java
public class ChainWorkflow extends Workflow {
@ -235,7 +232,6 @@ public class ChainWorkflow extends Workflow {
{{% /tab %}}
{{% tab "Go" %}}
<!--go-->
```go
func TaskChainWorkflow(ctx *workflow.WorkflowContext) (any, error) {
@ -314,7 +310,6 @@ Dapr Workflows provides a way to express the fan-out/fan-in pattern as a simple
{{< tabpane text=true >}}
{{% tab "Python" %}}
<!--python-->
```python
import time
@ -354,7 +349,6 @@ def process_results(ctx, final_result: int):
{{% /tab %}}
{{% tab "JavaScript" %}}
<!--javascript-->
```javascript
import {
@ -462,7 +456,6 @@ start().catch((e) => {
{{% /tab %}}
{{% tab ".NET" %}}
<!--dotnet-->
```csharp
// Get a list of N work items to process in parallel.
@ -487,7 +480,6 @@ await context.CallActivityAsync("PostResults", sum);
{{% /tab %}}
{{% tab "Java" %}}
<!--java-->
```java
public class FaninoutWorkflow extends Workflow {
@ -513,7 +505,6 @@ public class FaninoutWorkflow extends Workflow {
{{% /tab %}}
{{% tab "Go" %}}
<!--go-->
```go
func BatchProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) {
@ -593,7 +584,7 @@ It's possible to go further and limit the degree of concurrency using simple, la
{{< tabpane text=true >}}
{{% tab ".NET" %}}
<!-- .NET -->
```csharp
//Revisiting the earlier example...
@ -624,13 +615,12 @@ await context.CallActivityAsync("PostResults", sum);
{{< /tabpane >}}
You can process workflow activities in parallel while putting an upper cap on concurrency by using the following extension methods on the `WorkflowContext`:
{{< tabpane text=true >}}
{{% tab header=".NET" %}}
<!-- .NET -->
```csharp
//Revisiting the earlier example...
// Get a list of work items to process
@ -742,7 +732,6 @@ Dapr Workflow supports this pattern natively by allowing you to implement _etern
{{< tabpane text=true >}}
{{% tab "Python" %}}
<!--python-->
```python
from dataclasses import dataclass
@ -789,7 +778,6 @@ def send_alert(ctx, message: str):
{{% /tab %}}
{{% tab "JavaScript" %}}
<!--javascript-->
```javascript
const statusMonitorWorkflow: TWorkflow = async function* (ctx: WorkflowContext): any {
@ -817,7 +805,6 @@ const statusMonitorWorkflow: TWorkflow = async function* (ctx: WorkflowContext):
{{% /tab %}}
{{% tab ".NET" %}}
<!--dotnet-->
```csharp
public override async Task<object> RunAsync(WorkflowContext context, MyEntityState myEntityState)
@ -858,7 +845,6 @@ public override async Task<object> RunAsync(WorkflowContext context, MyEntitySta
{{% /tab %}}
{{% tab "Java" %}}
<!--java-->
```java
public class MonitorWorkflow extends Workflow {
@ -900,7 +886,6 @@ public class MonitorWorkflow extends Workflow {
{{% /tab %}}
{{% tab "Go" %}}
<!--go-->
```go
type JobStatus struct {
@ -983,7 +968,6 @@ The following example code shows how this pattern can be implemented using Dapr
{{< tabpane text=true >}}
{{% tab "Python" %}}
<!--python-->
```python
from dataclasses import dataclass
@ -1042,7 +1026,6 @@ def place_order(_, order: Order) -> None:
{{% /tab %}}
{{% tab "JavaScript" %}}
<!--javascript-->
```javascript
import {
@ -1182,7 +1165,6 @@ start().catch((e) => {
{{% /tab %}}
{{% tab ".NET" %}}
<!--dotnet-->
```csharp
public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
@ -1226,7 +1208,6 @@ public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderP
{{% /tab %}}
{{% tab "Java" %}}
<!--java-->
```java
public class ExternalSystemInteractionWorkflow extends Workflow {
@ -1263,7 +1244,6 @@ public class ExternalSystemInteractionWorkflow extends Workflow {
{{% /tab %}}
{{% tab "Go" %}}
<!--go-->
```go
type Order struct {
@ -1326,7 +1306,6 @@ The code that delivers the event to resume the workflow execution is external to
{{< tabpane text=true >}}
{{% tab "Python" %}}
<!--python-->
```python
from dapr.clients import DaprClient
@ -1343,7 +1322,6 @@ with DaprClient() as d:
{{% /tab %}}
{{% tab "JavaScript" %}}
<!--javascript-->
```javascript
import { DaprClient } from "@dapr/dapr";
@ -1356,7 +1334,6 @@ import { DaprClient } from "@dapr/dapr";
{{% /tab %}}
{{% tab ".NET" %}}
<!--dotnet-->
```csharp
// Raise the workflow event to the waiting workflow
@ -1370,7 +1347,6 @@ await daprClient.RaiseWorkflowEventAsync(
{{% /tab %}}
{{% tab "Java" %}}
<!--java-->
```java
System.out.println("**SendExternalMessage: RestartEvent**");
@ -1380,7 +1356,6 @@ client.raiseEvent(restartingInstanceId, "RestartEvent", "RestartEventPayload");
{{% /tab %}}
{{% tab "Go" %}}
<!--go-->
```go
func raiseEvent() {
@ -1409,6 +1384,209 @@ func raiseEvent() {
External events don't have to be directly triggered by humans. They can also be triggered by other systems. For example, a workflow may need to pause and wait for a payment to be received. In this case, a payment system might publish an event to a pub/sub topic on receipt of a payment, and a listener on that topic can raise an event to the workflow using the raise event workflow API.
## Compensation
The compensation pattern (also known as the saga pattern) provides a mechanism for rolling back or undoing operations that have already been executed when a workflow fails partway through. This pattern is particularly important for long-running workflows that span multiple microservices where traditional database transactions are not feasible.
In distributed microservice architectures, you often need to coordinate operations across multiple services. When these operations cannot be wrapped in a single transaction, the compensation pattern provides a way to maintain consistency by defining compensating actions for each step in the workflow.
The compensation pattern addresses several critical challenges:
- **Distributed Transaction Management**: When a workflow spans multiple microservices, each with their own data stores, traditional ACID transactions are not possible. The compensation pattern provides transactional consistency by ensuring operations are either all completed successfully or all undone through compensation.
- **Partial Failure Recovery**: If a workflow fails after some steps have completed successfully, the compensation pattern allows you to undo those completed steps gracefully.
- **Business Process Integrity**: Ensures that business processes can be properly rolled back in case of failures, maintaining the integrity of your business operations.
- **Long-Running Processes**: For workflows that may run for hours, days, or longer, traditional locking mechanisms are impractical. Compensation provides a way to handle failures in these scenarios.
Common use cases for the compensation pattern include:
- **E-commerce Order Processing**: Reserve inventory, charge payment, and ship orders. If shipping fails, you need to release the inventory and refund the payment.
- **Financial Transactions**: In a money transfer, if crediting the destination account fails, you need to rollback the debit from the source account.
- **Resource Provisioning**: When provisioning cloud resources across multiple providers, if one step fails, you need to clean up all previously provisioned resources.
- **Multi-Step Business Processes**: Any business process that involves multiple irreversible steps that may need to be undone in case of later failures.
Dapr Workflow provides support for the compensation pattern, allowing you to register compensation activities for each step and execute them in reverse order when needed.
Here's an example workflow for an e-commerce process:
1. A workflow is triggered when an order is received.
1. A reservation is made for the order in the inventory.
1. The payment is processed.
1. The order is shipped.
1. If any of the above actions results in an error, the actions are compensated with another action:
- The shipment is cancelled.
- The payment is refunded.
- The inventory reservation is released.
The following diagram illustrates this flow.
<img src="/images/workflow-overview/workflows-compensation.png" width=600 alt="Diagram showing how the compensation pattern."/>
{{< tabpane text=true >}}
{{% tab "Java" %}}
```java
public class PaymentProcessingWorkflow implements Workflow {
@Override
public WorkflowStub create() {
return ctx -> {
ctx.getLogger().info("Starting Workflow: " + ctx.getName());
var orderId = ctx.getInput(String.class);
List<String> compensations = new ArrayList<>();
try {
// Step 1: Reserve inventory
String reservationId = ctx.callActivity(ReserveInventoryActivity.class.getName(), orderId, String.class).await();
ctx.getLogger().info("Inventory reserved: {}", reservationId);
compensations.add("ReleaseInventory");
// Step 2: Process payment
String paymentId = ctx.callActivity(ProcessPaymentActivity.class.getName(), orderId, String.class).await();
ctx.getLogger().info("Payment processed: {}", paymentId);
compensations.add("RefundPayment");
// Step 3: Ship order
String shipmentId = ctx.callActivity(ShipOrderActivity.class.getName(), orderId, String.class).await();
ctx.getLogger().info("Order shipped: {}", shipmentId);
compensations.add("CancelShipment");
} catch (TaskFailedException e) {
ctx.getLogger().error("Activity failed: {}", e.getMessage());
// Execute compensations in reverse order
Collections.reverse(compensations);
for (String compensation : compensations) {
try {
switch (compensation) {
case "CancelShipment":
String shipmentCancelResult = ctx.callActivity(
CancelShipmentActivity.class.getName(),
orderId,
String.class).await();
ctx.getLogger().info("Shipment cancellation completed: {}", shipmentCancelResult);
break;
case "RefundPayment":
String refundResult = ctx.callActivity(
RefundPaymentActivity.class.getName(),
orderId,
String.class).await();
ctx.getLogger().info("Payment refund completed: {}", refundResult);
break;
case "ReleaseInventory":
String releaseResult = ctx.callActivity(
ReleaseInventoryActivity.class.getName(),
orderId,
String.class).await();
ctx.getLogger().info("Inventory release completed: {}", releaseResult);
break;
}
} catch (TaskFailedException ex) {
ctx.getLogger().error("Compensation activity failed: {}", ex.getMessage());
}
}
ctx.complete("Order processing failed, compensation applied");
}
// Step 4: Send confirmation
ctx.callActivity(SendConfirmationActivity.class.getName(), orderId, Void.class).await();
ctx.getLogger().info("Confirmation sent for order: {}", orderId);
ctx.complete("Order processed successfully: " + orderId);
};
}
}
// Example activities
class ReserveInventoryActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String orderId = ctx.getInput(String.class);
// Logic to reserve inventory
String reservationId = "reservation_" + orderId;
System.out.println("Reserved inventory for order: " + orderId);
return reservationId;
}
}
class ReleaseInventoryActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String reservationId = ctx.getInput(String.class);
// Logic to release inventory reservation
System.out.println("Released inventory reservation: " + reservationId);
return "Released: " + reservationId;
}
}
class ProcessPaymentActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String orderId = ctx.getInput(String.class);
// Logic to process payment
String paymentId = "payment_" + orderId;
System.out.println("Processed payment for order: " + orderId);
return paymentId;
}
}
class RefundPaymentActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String paymentId = ctx.getInput(String.class);
// Logic to refund payment
System.out.println("Refunded payment: " + paymentId);
return "Refunded: " + paymentId;
}
}
class ShipOrderActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String orderId = ctx.getInput(String.class);
// Logic to ship order
String shipmentId = "shipment_" + orderId;
System.out.println("Shipped order: " + orderId);
return shipmentId;
}
}
class CancelShipmentActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String shipmentId = ctx.getInput(String.class);
// Logic to cancel shipment
System.out.println("Canceled shipment: " + shipmentId);
return "Canceled: " + shipmentId;
}
}
class SendConfirmationActivity implements WorkflowActivity {
@Override
public Object run(WorkflowActivityContext ctx) {
String orderId = ctx.getInput(String.class);
// Logic to send confirmation
System.out.println("Sent confirmation for order: " + orderId);
return null;
}
}
```
{{% /tab %}}
{{< /tabpane >}}
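For comparison, a compact Python sketch of the same compensation flow is shown below (hypothetical activity names; error handling simplified):

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name="reserve_inventory")
def reserve_inventory(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"reservation_{order_id}"

@wfr.activity(name="release_inventory")
def release_inventory(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"released_{order_id}"

@wfr.activity(name="process_payment")
def process_payment(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"payment_{order_id}"

@wfr.activity(name="refund_payment")
def refund_payment(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"refunded_{order_id}"

@wfr.activity(name="ship_order")
def ship_order(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"shipment_{order_id}"

@wfr.activity(name="cancel_shipment")
def cancel_shipment(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"canceled_{order_id}"

@wfr.activity(name="send_confirmation")
def send_confirmation(ctx: wf.WorkflowActivityContext, order_id: str):
    return f"confirmed_{order_id}"

@wfr.workflow(name="order_with_compensation")
def order_with_compensation(ctx: wf.DaprWorkflowContext, order_id: str):
    compensations = []
    try:
        yield ctx.call_activity(reserve_inventory, input=order_id)
        compensations.append(release_inventory)

        yield ctx.call_activity(process_payment, input=order_id)
        compensations.append(refund_payment)

        yield ctx.call_activity(ship_order, input=order_id)
        compensations.append(cancel_shipment)
    except Exception:
        # Undo the completed steps in reverse order.
        for compensate in reversed(compensations):
            yield ctx.call_activity(compensate, input=order_id)
        return "Order processing failed, compensation applied"

    yield ctx.call_activity(send_confirmation, input=order_id)
    return f"Order processed successfully: {order_id}"
```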
The key benefits of using Dapr Workflow's compensation pattern include:
- **Compensation Control**: You have full control over when and how compensation activities are executed.
- **Flexible Configuration**: You can implement custom logic for determining which compensations to run.
- **Error Handling**: Handle compensation failures according to your specific business requirements.
- **Simple Implementation**: No additional framework dependencies - just standard workflow activities and exception handling.
The compensation pattern ensures that your distributed workflows can maintain consistency and recover gracefully from failures, making it an essential tool for building reliable microservice architectures.
## Next steps
{{< button text="Workflow architecture >>" page="workflow-architecture.md" >}}
- [Try out Dapr Workflows using the quickstart]({{% ref workflow-quickstart.md %}})
- [Workflow overview]({{% ref workflow-overview.md %}})
- [Workflow API reference]({{% ref workflow_api.md %}})
- Try out the following examples:
- [Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow)
- [JavaScript](https://github.com/dapr/js-sdk/tree/main/examples/workflow)
- [.NET](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
title: "Dapr Agents"
linkTitle: "Dapr Agents"
weight: 25
description: "A framework for building production-grade resilient AI agent systems at scale"
description: "A framework for building durable and resilient AI agent systems at scale"
---
### What is Dapr Agents?
description: "Learn about the core concepts of Dapr Agents"
---
Dapr Agents provides a structured way to build and orchestrate applications that use LLMs without getting bogged down in infrastructure details. The primary goal is to enable AI development by abstracting away the complexities of working with LLMs, tools, memory management, and distributed systems, allowing developers to focus on the business logic of their AI applications. Agents in this framework are the fundamental building blocks.
## Agents
Dapr Agents provides a unified interface to connect with LLM inference APIs. This abstraction allows developers to seamlessly integrate their agents with cutting-edge language models for reasoning and decision-making. The framework includes multiple LLM clients for different providers and modalities:
- `DaprChatClient`: Unified API for LLM interactions via Dapr's Conversation API with built-in security (scopes, secrets, PII obfuscation), resiliency (timeouts, retries, circuit breakers), and observability via OpenTelemetry & Prometheus
- `OpenAIChatClient`: Full spectrum support for OpenAI models including chat, embeddings, and audio
- `HFHubChatClient`: For Hugging Face models supporting both chat and embeddings
- `NVIDIAChatClient`: For NVIDIA AI Foundation models supporting local inference and chat
- `ElevenLabs`: Support for speech and voice capabilities
### Prompt Flexibility
Dapr Agents supports flexible prompt templates to shape agent behavior and reasoning. Users can define placeholders within prompts, enabling dynamic input of context for inference calls. By leveraging prompt formatting with [Jinja templates](https://jinja.palletsprojects.com/en/stable/templates/) and Python f-string formatting, users can include loops, conditions, and variables, providing precise control over the structure and content of prompts. This flexibility ensures that LLM responses are tailored to the task at hand, offering modularity and adaptability for diverse use cases.
### Structured Outputs
### MCP Support
Dapr Agents includes built-in support for the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/), enabling agents to dynamically discover and invoke external tools through a standardized interface. Using the provided MCPClient, agents can connect to MCP servers via three transport options: stdio for local development, sse for remote or distributed environments, and via streamable HTTP transport.
```python
client = MCPClient()
```

```bash
curl -i -X POST http://localhost:8001/start-workflow \
  -H "Content-Type: application/json" \
  -d '{"task": "I want to find flights to Paris"}'
```
Unlike conversational agents that provide immediate synchronous responses, durable agents operate as headless services that are triggered asynchronously. You trigger it, receive a workflow instance ID, and can track progress over time. This enables long-running, fault-tolerant operations that can span multiple systems and survive restarts, making them ideal for complex multi-step processes in environments requiring high levels of durability and resiliency.
## Multi-agent Systems (MAS)
### Workflow Patterns
Workflows enable the implementation of various agentic patterns through structured orchestration, including Prompt Chaining, Routing, Parallelization, Orchestrator-Workers, Evaluator-Optimizer, Human-in-the-loop, and others. For detailed implementations and examples of these patterns, see the [Patterns documentation]({{< ref dapr-agents-patterns.md >}}).
### Workflows vs. Durable Agents
Both DurableAgent and workflow-based agent orchestration use Dapr workflows behind the scenes.
| Control | Developer-defined process flow | Agent determines next steps |
| Predictability | Higher | Lower |
| Flexibility | Fixed overall structure, flexible within steps | Completely flexible |
| Reliability | Very high (workflow engine guarantees) | Very high (underlying agent implementation guarantees) |
| Complexity | Structured workflow patterns | Dynamic, flexible execution paths |
| Use Cases | Business processes, regulated domains | Open-ended research, creative tasks |
The key difference lies in control flow determination: with DurableAgent, the underlying workflow is created dynamically by the LLM's planning decisions, executing entirely within a single agent context. In contrast, with deterministic workflows, the developer explicitly defines the coordination between one or more LLM interactions, providing structured orchestration across multiple tasks or agents.
## Event-Driven Orchestration
Event-driven agent orchestration enables multiple specialized agents to collaborate through asynchronous [Pub/Sub messaging](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/). This approach provides powerful collaborative problem-solving, parallel processing, and division of responsibilities among specialized agents through independent scaling, resilience via service isolation, and clear separation of responsibilities.
### Core Participants
Create a `requirements.txt` file with the necessary dependencies:
```txt
dapr-agents
python-dotenv
```
Create and activate a virtual environment, then install the dependencies:
### 6. Enable Redis Insights (Optional)
Dapr uses [Redis]({{% ref setup-redis.md %}}) by default for state management and pub/sub messaging, which are fundamental to Dapr Agents' agentic workflows. To inspect the Redis instance, a great UI tool to use is Redis Insight, and you can use it to inspect the agent memory populated earlier. To run Redis Insight:
```bash
docker run --rm -d --name redisinsight -p 5540:5540 redis/redisinsight:latest
```
![Agent Overview](/images/dapr-agents/concepts-agents-overview.png)
Dapr Agents is a developer framework for building durable and resilient AI agent systems powered by Large Language Models (LLMs). Built on the battle-tested Dapr project, it enables developers to create autonomous systems that reason through problems, make dynamic decisions, and collaborate seamlessly. It includes built-in observability and stateful workflow execution to ensure agentic workflows complete successfully, regardless of complexity. Whether you're developing single-agent applications or complex multi-agent workflows, Dapr Agents provides the infrastructure for intelligent, adaptive systems that scale across environments.
## Core Capabilities
- **Scale and Efficiency**: Run thousands of agents efficiently on a single core. Dapr distributes single and multi-agent apps transparently across fleets of machines and handles their lifecycle.
- **Workflow Resilience**: Automatically retries agentic workflows and ensures task completion.
- **Workflow Resilience**: Automatically retry agentic workflows to ensure task completion.
- **Data-Driven Agents**: Directly integrate with databases, documents, and unstructured data by connecting to dozens of different data sources.
- **Multi-Agent Systems**: Secure and observable by default, enabling collaboration between agents.
- **Kubernetes-Native**: Easily deploy and manage agents in Kubernetes environments.
- **Platform-Ready**: Access scopes and declarative resources enable platform teams to integrate Dapr agents into their systems.
- **Platform-Ready**: Access scopes and declarative resources enable platform teams to integrate Dapr Agents into their systems.
- **Vendor-Neutral & Open Source**: Avoid vendor lock-in and gain flexibility across cloud and on-premises deployments.
## Key Features
@ -26,17 +26,17 @@ Dapr Agents is a developer framework for building production-grade, resilient AI
Dapr Agents provides specialized modules designed for creating intelligent, autonomous systems. Each module is designed to work independently, allowing you to use any combination that fits your application needs.
| Building Block | Description |
| Feature | Description |
|----------------------------------------------------------------------------------------------|-------------|
| [**LLM Integration**]({{% ref "dapr-agents-core-concepts.md#1-llm-integration" %}}) | Uses Dapr [Conversation API]({{% ref conversation-overview.md %}}) to abstract LLM inference APIs for chat completion, or provides native clients for other LLM integrations such as embeddings, audio, etc.
| [**Structured Outputs**]({{% ref "dapr-agents-core-concepts.md#2-structured-outputs" %}}) | Leverage capabilities like OpenAI's Function Calling to generate predictable, reliable results following JSON Schema and OpenAPI standards for tool integration.
| [**Tool Selection**]({{% ref "dapr-agents-core-concepts.md#3-tool-selection" %}}) | Dynamic tool selection based on requirements, best action, and execution through [Function Calling](https://platform.openai.com/docs/guides/function-calling) capabilities.
| [**MCP Support**]({{% ref "dapr-agents-core-concepts.md#4-mcp-support" %}}) | Built-in support for [Model Context Protocol](https://modelcontextprotocol.io/) enabling agents to dynamically discover and invoke external tools through standardized interfaces.
| [**Memory Management**]({{% ref "dapr-agents-core-concepts.md#5-memory" %}}) | Retain context across interactions with options from simple in-memory lists to vector databases, integrating with [Dapr state stores]({{% ref state-management-overview.md %}}) for scalable, persistent memory.
| [**Durable Agents**]({{% ref "dapr-agents-core-concepts.md#durableagent" %}}) | Workflow-backed agents that provide fault-tolerant execution with persistent state management and automatic retry mechanisms for long-running processes.
| [**Headless Agents**]({{% ref "dapr-agents-core-concepts.md#7-agent-services" %}}) | Expose agents over REST for long-running tasks, enabling programmatic access and integration without requiring user interfaces or human intervention.
| [**Event-Driven Communication**]({{% ref "dapr-agents-core-concepts.md#8-message-driven-communication" %}}) | Enable agent collaboration through [Pub/Sub messaging]({{% ref pubsub-overview.md %}}) for event-driven communication, task distribution, and real-time coordination in distributed systems.
| [**Agent Orchestration**]({{% ref "dapr-agents-core-concepts.md#9-workflow-orchestration" %}}) | Deterministic agent orchestration using [Dapr Workflows]({{% ref workflow-overview.md %}}) with higher-level tasks that interact with LLMs for complex multi-step processes.
| [**LLM Integration**]({{% ref "dapr-agents-core-concepts.md#llm-integration" %}}) | Uses Dapr [Conversation API]({{% ref conversation-overview.md %}}) to abstract LLM inference APIs for chat completion, or provides native clients for other LLM integrations such as embeddings, audio, etc.
| [**Structured Outputs**]({{% ref "dapr-agents-core-concepts.md#structured-outputs" %}}) | Leverage capabilities like OpenAI's Function Calling to generate predictable, reliable results following JSON Schema and OpenAPI standards for tool integration.
| [**Tool Selection**]({{% ref "dapr-agents-core-concepts.md#tool-calling" %}}) | Dynamic tool selection based on requirements, best action, and execution through [Function Calling](https://platform.openai.com/docs/guides/function-calling) capabilities.
| [**MCP Support**]({{% ref "dapr-agents-core-concepts.md#mcp-support" %}}) | Built-in support for [Model Context Protocol](https://modelcontextprotocol.io/) enabling agents to dynamically discover and invoke external tools through standardized interfaces.
| [**Memory Management**]({{% ref "dapr-agents-core-concepts.md#memory" %}}) | Retain context across interactions with options from simple in-memory lists to vector databases, integrating with [Dapr state stores]({{% ref state-management-overview.md %}}) for scalable, persistent memory.
| [**Durable Agents**]({{% ref "dapr-agents-core-concepts.md#durable-agents" %}}) | Workflow-backed agents that provide fault-tolerant execution with persistent state management and automatic retry mechanisms for long-running processes.
| [**Headless Agents**]({{% ref "dapr-agents-core-concepts.md#agent-services" %}}) | Expose agents over REST for long-running tasks, enabling programmatic access and integration without requiring user interfaces or human intervention.
| [**Event-Driven Communication**]({{% ref "dapr-agents-core-concepts.md#event-driven-orchestration" %}}) | Enable agent collaboration through [Pub/Sub messaging]({{% ref pubsub-overview.md %}}) for event-driven communication, task distribution, and real-time coordination in distributed systems.
| [**Agent Orchestration**]({{% ref "dapr-agents-core-concepts.md#deterministic-workflows" %}}) | Deterministic agent orchestration using [Dapr Workflows]({{% ref workflow-overview.md %}}) with higher-level tasks that interact with LLMs for complex multi-step processes.
## Agentic Patterns
@ -49,12 +49,13 @@ These patterns exist along a spectrum of autonomy, from predictable workflow-bas
| Pattern | Description |
|----------------------------------------------------------------------------------------|-------------|
| [**Augmented LLM**]({{% ref "dapr-agents-patterns.md#augmented-llm" %}}) | Enhances a language model with external capabilities like memory and tools, providing a foundation for AI-driven applications.
| [**Prompt Chaining**]({{% ref "dapr-agents-patterns.md#prompt-chaining" %}}) | Decomposes complex tasks into a sequence of steps where each LLM call processes the output of the previous one.
| [**Routing**]({{% ref "dapr-agents-patterns.md#routing" %}}) | Classifies inputs and directs them to specialized follow-up tasks, enabling separation of concerns and expert specialization.
| [**Parallelization**]({{% ref "dapr-agents-patterns.md#parallelization" %}}) | Processes multiple dimensions of a problem simultaneously with outputs aggregated programmatically for improved efficiency.
| [**Orchestrator-Workers**]({{% ref "dapr-agents-patterns.md#orchestrator-workers" %}}) | Features a central orchestrator LLM that dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes results.
| [**Evaluator-Optimizer**]({{% ref "dapr-agents-patterns.md#evaluator-optimizer" %}}) | Implements a dual-LLM process where one model generates responses while another provides evaluation and feedback in an iterative loop.
| [**Durable Agent**]({{% ref "dapr-agents-patterns.md#durable-agent" %}}) | Extends the Augmented LLM by adding durability and persistence to agent interactions using Dapr's state stores.
| [**Prompt Chaining**]({{% ref "dapr-agents-patterns.md#prompt-chaining" %}}) | Decomposes complex tasks into a sequence of steps where each LLM call processes the output of the previous one.
| [**Evaluator-Optimizer**]({{% ref "dapr-agents-patterns.md#evaluator-optimizer" %}}) | Implements a dual-LLM process where one model generates responses while another provides evaluation and feedback in an iterative loop.
| [**Parallelization**]({{% ref "dapr-agents-patterns.md#parallelization" %}}) | Processes multiple dimensions of a problem simultaneously with outputs aggregated programmatically for improved efficiency.
| [**Routing**]({{% ref "dapr-agents-patterns.md#routing" %}}) | Classifies inputs and directs them to specialized follow-up tasks, enabling separation of concerns and expert specialization.
| [**Orchestrator-Workers**]({{% ref "dapr-agents-patterns.md#orchestrator-workers" %}}) | Features a central orchestrator LLM that dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes results.
## Developer Experience
@ -63,24 +64,24 @@ Dapr Agents is a Python framework built on top of the [Python Dapr SDK]({{% ref
### Getting Started
Get started with Dapr Agents by following the instructions on the [Getting Started page]({{% ref "dapr-agents-getting-started.md" %}}).
Get started with Dapr Agents by following the instructions on the [Getting Started page]({{% ref dapr-agents-getting-started.md %}}).
### Framework Integrations
Dapr Agents integrates with popular Python frameworks and tools. For detailed integration guides and examples, see the [integrations page]({{% ref "dapr-agents-integrations.md" %}}).
Dapr Agents integrates with popular Python frameworks and tools. For detailed integration guides and examples, see the [integrations page]({{% ref "developing-applications/dapr-agents/dapr-agents-integrations.md" %}}).
## Operational Support
Dapr Agents inherits Dapr's enterprise-grade operational capabilities, providing comprehensive support for production deployments of agentic systems.
Dapr Agents inherits Dapr's enterprise-grade operational capabilities, providing comprehensive support for durable and reliable deployments of agentic systems.
### Built-in Operational Features
- **[Observability]({{% ref observability-concept.md %}})** - Distributed tracing, metrics collection, and logging for agent interactions and workflow execution
- **[Security]({{% ref security-concept.md %}})** - mTLS encryption, access control, and secrets management for secure agent communication
- **[Resiliency]({{% ref resiliency-concept.md %}})** - Automatic retries, circuit breakers, and timeout policies for fault-tolerant agent operations
- **[Infrastructure Abstraction]({{% ref components-concept.md %}})** - Dapr components abstract LLM providers, memory stores, storage and messaging backends, enabling seamless transitions between development and production environments
- **[Infrastructure Abstraction]({{% ref components-concept.md %}})** - Dapr components abstract LLM providers, memory stores, storage and messaging backends, enabling seamless transitions between different environments
These capabilities enable teams to monitor agent performance, secure multi-agent communications, and ensure reliable execution of complex agentic workflows in production environments.
These capabilities enable teams to monitor agent performance, secure multi-agent communications, and ensure reliable execution of complex agentic workflows.
## Contributing

View File

@ -346,7 +346,7 @@ Enterprise applications often need durable execution and reliability that go bey
<img src="/images/dapr-agents/agents-stateful-llm.png" width=600 alt="Diagram showing how the durable agent pattern works">
This pattern doesn't just persist message history: it dynamically creates workflows with durable activities for each interaction, where LLM calls and tool executions are stored reliably in Dapr's state stores. This makes it ideal for production environments where reliability is critical.
This pattern doesn't just persist message history: it dynamically creates workflows with durable activities for each interaction, where LLM calls and tool executions are stored reliably in Dapr's state stores. This makes it ideal for environments where reliability and durability are critical.
The Durable Agent also enables the "headless agents" approach, where autonomous systems operate without direct user interaction. Dapr's Durable Agent exposes REST and Pub/Sub APIs, making it ideal for long-running operations that are triggered by other applications or external events.

View File

@ -10,7 +10,7 @@ description: "Get started with Dapr Agents through practical step-by-step exampl
#### Before you begin
- [Set up your local Dapr environment]({{% ref "install-dapr-cli.md" %}}).
- [Set up your local Dapr environment]({{% ref install-dapr-cli.md %}}).
## Quickstarts
@ -21,8 +21,8 @@ description: "Get started with Dapr Agents through practical step-by-step exampl
| [LLM Call with Dapr Chat Client](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_dapr)<br>Explore interaction with Language Models through Dapr Agents' `DaprChatClient`, featuring basic text generation with plain text prompts and templates. | - **Text Completion**: Generating responses to prompts <br> - **Swapping LLM providers**: Switching LLM backends without application code change <br> - **Resilience**: Setting timeout, retry and circuit-breaking <br> - **PII Obfuscation**: Automatically detect and mask sensitive user information |
| [LLM Call with OpenAI Client](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_open_ai)<br>Leverage native LLM client libraries with Dapr Agents using the OpenAI Client for chat completion, audio processing, and embeddings. | - **Text Completion**: Generating responses to prompts <br> - **Structured Outputs**: Converting LLM responses to Pydantic objects <br><br> *Note: Other quickstarts for specific clients are available for [Elevenlabs](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_elevenlabs), [Hugging Face](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_hugging_face), and [Nvidia](https://github.com/dapr/dapr-agents/tree/main/quickstarts/02_llm_call_nvidia).* |
| [Agent Tool Call](https://github.com/dapr/dapr-agents/tree/main/quickstarts/03-agent-tool-call)<br>Build your first AI agent with custom tools by creating a practical weather assistant that fetches information and performs actions. | - **Tool Definition**: Creating reusable tools with the `@tool` decorator <br> - **Agent Configuration**: Setting up agents with roles, goals, and tools <br> - **Function Calling**: Enabling LLMs to execute Python functions |
| [Agentic Workflow](https://github.com/dapr/dapr-agents/tree/main/quickstarts/04-agentic-workflow)<br>Dive into stateful workflows with Dapr Agents by orchestrating sequential and parallel tasks through powerful workflow capabilities. | - **LLM-powered Tasks**: Using language models in workflows <br> - **Task Chaining**: Creating resilient multi-step processes executing in sequence <br> - **Fan-out/Fan-in**: Executing activities in parallel; then synchronizing these activities until all preceding activities have completed |
| [Multi-Agent Workflows](https://github.com/dapr/dapr-agents/tree/main/quickstarts/05-multi-agent-workflow-dapr-workflows)<br>Explore advanced event-driven workflows featuring a Lord of the Rings themed multi-agent system where autonomous agents collaborate to solve problems. | - **Multi-agent Systems**: Creating a network of specialized agents <br> - **Event-driven Architecture**: Implementing pub/sub messaging between agents <br> - **Workflow Orchestration**: Coordinating agents through different selection strategies|
| [Multi-Agent Workflow on Kubernetes](https://github.com/dapr/dapr-agents/tree/main/quickstarts/07-k8s-multi-agent-workflow)<br>Run multi-agent workflows in Kubernetes, demonstrating deployment and orchestration of event-driven agent systems in a containerized environment. | - **Kubernetes Deployment**: Running agents on Kubernetes <br> - **Container Orchestration**: Managing agent lifecycles with K8s <br> - **Service Communication**: Inter-agent communication in K8s |
| [Agentic Workflow](https://github.com/dapr/dapr-agents/tree/main/quickstarts/04-llm-based-workflows)<br>Dive into stateful workflows with Dapr Agents by orchestrating sequential and parallel tasks through powerful workflow capabilities. | - **LLM-powered Tasks**: Using language models in workflows <br> - **Task Chaining**: Creating resilient multi-step processes executing in sequence <br> - **Fan-out/Fan-in**: Executing activities in parallel; then synchronizing these activities until all preceding activities have completed |
| [Multi-Agent Workflows](https://github.com/dapr/dapr-agents/tree/main/quickstarts/05-multi-agent-workflows)<br>Explore advanced event-driven workflows featuring a Lord of the Rings themed multi-agent system where autonomous agents collaborate to solve problems. | - **Multi-agent Systems**: Creating a network of specialized agents <br> - **Event-driven Architecture**: Implementing pub/sub messaging between agents <br> - **Workflow Orchestration**: Coordinating agents through different selection strategies|
| [Multi-Agent Workflow on Kubernetes](https://github.com/dapr/dapr-agents/tree/main/quickstarts/05-multi-agent-workflow-k8s)<br>Run multi-agent workflows in Kubernetes, demonstrating deployment and orchestration of event-driven agent systems in a containerized environment. | - **Kubernetes Deployment**: Running agents on Kubernetes <br> - **Container Orchestration**: Managing agent lifecycles with K8s <br> - **Service Communication**: Inter-agent communication in K8s |
| [Document Agent with Chainlit](https://github.com/dapr/dapr-agents/tree/main/quickstarts/06-document-agent-chainlit)<br>Create a conversational agent with an operational UI that can upload and learn from unstructured documents while retaining long-term memory. | - **Conversational Document Agent**: Upload and converse over unstructured documents <br> - **Cloud Agnostic Storage**: Upload files to multiple storage providers <br> - **Conversation Memory Storage**: Persists conversation history using external storage. |
| [Data Agent with MCP and Chainlit](https://github.com/dapr/dapr-agents/tree/main/quickstarts/08-data-agent-mcp-chainlit)<br>Build a conversational agent over a Postgres database using the Model Context Protocol (MCP) with a ChatGPT-like interface. | - **Database Querying**: Natural language queries to relational databases <br> - **MCP Integration**: Connecting to databases without DB-specific code <br> - **Data Analysis**: Complex data analysis through conversation |
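To give a flavor of the Agent Tool Call quickstart above, the sketch below shows a tool wired into an agent with a role and goal. The import path and constructor arguments are assumptions based on the quickstart descriptions, so treat the linked repository as the source of truth:
```python
from dapr_agents import Agent, tool  # import path assumed from the quickstart descriptions

@tool
def get_weather(city: str) -> str:
    """Return a (stubbed) weather report for the given city."""
    return f"It is sunny in {city}."

# Role, goal, and tools roughly mirror the weather assistant quickstart;
# parameter names here are illustrative rather than authoritative.
weather_agent = Agent(
    name="WeatherAssistant",
    role="Weather Assistant",
    goal="Answer questions about the weather using the available tools.",
    tools=[get_weather],
)
```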

View File

@ -6,8 +6,8 @@ weight: 30
description: "Understanding the benefits and use cases for Dapr Agents"
---
Dapr Agents is an open-source framework for building and orchestrating LLM-based autonomous agents that leverages Dapr's proven distributed systems foundation. Unlike other agentic frameworks that require developers to build infrastructure from scratch, Dapr Agents enables teams to focus on agent intelligence by providing enterprise-grade scalability, state management, and messaging capabilities out of the box. This approach eliminates the complexity of recreating distributed system fundamentals while delivering production-ready agentic workflows.
Dapr Agents is an open-source framework for building and orchestrating LLM-based autonomous agents that leverages Dapr's proven distributed systems foundation. Unlike other agentic frameworks that require developers to build infrastructure from scratch, Dapr Agents enables teams to focus on agent intelligence by providing enterprise-grade scalability, state management, and messaging capabilities out of the box. This approach eliminates the complexity of recreating distributed system fundamentals while delivering agentic workflows powered by Dapr.
### Challenges with Existing Frameworks
Many agentic frameworks today attempt to redefine how microservices are built and orchestrated by developing their own platforms for core distributed system capabilities. While these efforts showcase innovation, they often lead to steep learning curves, fragmented systems, and unnecessary complexity when scaling or adapting to new environments.
@ -46,10 +46,10 @@ Dapr Agents places durability at the core of its architecture, leveraging [Dapr
* **Durable Agent Execution**: DurableAgents are fundamentally workflow-backed, ensuring all LLM calls and tool executions remain durable, auditable, and resumable. Workflow checkpointing guarantees agents can recover from any point of failure while maintaining state consistency.
* **Deterministic Multi-Agent Orchestration**: Workflows provide centralized control over task dependencies and coordination between multiple agents. Dapr's code-first workflow engine enables reliable orchestration of complex business processes while preserving agent autonomy where appropriate.
By integrating workflows as the foundational layer, Dapr Agents enables systems that combine the reliability of deterministic execution with the intelligence of LLM-powered agents, ensuring production-grade reliability and scalability.
By integrating workflows as the foundational layer, Dapr Agents enables systems that combine the reliability of deterministic execution with the intelligence of LLM-powered agents, ensuring reliability and scalability.
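For illustration, a durable agent is declared much like a regular agent but runs on top of the workflow engine described above; the `DurableAgent` class and its constructor parameters below are assumptions based on this page, so check the Dapr Agents reference for the exact API:
```python
from dapr_agents import DurableAgent, tool  # names assumed from the concepts described above

@tool
def search_documents(query: str) -> str:
    """Stubbed document search used by the agent."""
    return f"Top results for: {query}"

# Every LLM call and tool execution of this agent is checkpointed by the
# underlying Dapr workflow, so a crashed process can resume where it left off.
research_agent = DurableAgent(
    name="ResearchAgent",  # placeholder values throughout
    role="Research Assistant",
    goal="Investigate topics and report findings reliably.",
    tools=[search_documents],
)
```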
{{% alert title="Note" color="info" %}}
Workflows in Dapr Agents provide the foundation for building production-ready agentic systems that combine reliable execution with LLM-powered intelligence.
Workflows in Dapr Agents provide the foundation for building durable agentic systems that combine reliable execution with LLM-powered intelligence.
{{% /alert %}}
### Modular Component Model
@ -61,7 +61,7 @@ Dapr Agents utilizes [Dapr's pluggable component framework]({{% ref components-c
* **Seamless Transitions**: Develop locally with default configurations and deploy effortlessly to cloud environments by simply updating component definitions.
{{% alert title="Note" color="info" %}}
Developers can easily switch between different components (e.g., Redis to DynamoDB, OpenAI, Anthropic) based on their deployment environment, ensuring portability and adaptability.
Developers can easily switch between different components (e.g., Redis to DynamoDB, OpenAI to Anthropic) based on their deployment environment, ensuring portability and adaptability.
{{% /alert %}}
### Message-Driven Communication

View File

@ -138,35 +138,9 @@ On Windows, the environment variable needs to be set before starting the `dapr`
### Authenticate to AWS if using AWS SSO based profiles
If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials and "legacy" credentials, such as:
- [AwsHelper](https://pypi.org/project/awshelper/)
- [aws-sso-util](https://github.com/benkehoe/aws-sso-util)
If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), the AWS SDK for Go (both v1 and v2) provides native support for AWS SSO credential providers. This means you can use AWS SSO profiles directly without additional utilities.
{{< tabpane text=true >}}
<!-- linux -->
{{% tab "Linux/MacOS" %}}
If using AwsHelper, start Dapr like this:
```bash
AWS_PROFILE=myprofile awshelper dapr run...
```
or
```bash
AWS_PROFILE=myprofile awshelper daprd...
```
{{% /tab %}}
<!-- windows -->
{{% tab "Windows" %}}
On Windows, the environment variable needs to be set before starting the `awshelper` command; doing it inline (like in Linux/MacOS) is not supported.
{{% /tab %}}
{{< /tabpane >}}
For more information about AWS SSO support in the AWS SDK for Go, see the [AWS blog post](https://aws.amazon.com/blogs/developer/aws-sso-support-in-the-aws-sdk-for-go/).
## Next steps

View File

@ -9,38 +9,40 @@ aliases:
weight: 10000
---
Most Azure components for Dapr support authenticating with Microsoft Entra ID. Thanks to this:
- Administrators can leverage all the benefits of fine-tuned permissions with Azure Role-Based Access Control (RBAC).
- Applications running on Azure services such as Azure Container Apps, Azure Kubernetes Service, Azure VMs, or any other Azure platform services can leverage [Managed Identities (MI)](https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) and [Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview). These offer the ability to authenticate your applications without having to manage sensitive credentials.
## About authentication with Microsoft Entra ID
Microsoft Entra ID is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services.
Microsoft Entra ID is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services. It's built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Service Bus, Azure Key Vault, Azure Cosmos DB, Azure Database for Postgres, Azure SQL, etc.
Microsoft Entra ID is built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Service Bus, Azure Key Vault, Azure Cosmos DB, Azure Database for Postgres, Azure SQL, etc.
## Options to authenticate
> In Azure terminology, an application is also called a "Service Principal".
Applications can authenticate with Microsoft Entra ID and obtain an access token to make requests to Azure services through several methods:
Some Azure components offer alternative authentication methods, such as systems based on "shared keys" or "access tokens". Although these are valid and supported by Dapr, you should authenticate your Dapr components using Microsoft Entra ID whenever possible to take advantage of many benefits, including:
- [Workload identity federation]({{< ref howto-wif.md >}}) - The recommended way to configure your Microsoft Entra ID tenant to trust an external identity provider. This includes service accounts from Kubernetes or AKS clusters. [Learn more about workload identity federation](https://learn.microsoft.com/entra/workload-id/workload-identities-overview).
- [System and user assigned managed identities]({{< ref howto-mi.md >}}) - Less granular than workload identity federation, but retains some of the benefits. [Learn more about system and user assigned managed identities](https://learn.microsoft.com/azure/aks/use-managed-identity).
- [Client ID and secret]({{< ref howto-aad.md >}}) - Not recommended as it requires you to maintain and associate credentials at the application level.
- Pod Identities - [Deprecated approach for authenticating applications running on Kubernetes pods](https://learn.microsoft.com/azure/aks/use-azure-ad-pod-identity) at a pod level. This should no longer be used.
- [Managed Identities and Workload Identity](#managed-identities-and-workload-identity)
- [Role-Based Access Control](#role-based-access-control)
- [Auditing](#auditing)
- [(Optional) Authentication using certificates](#optional-authentication-using-certificates)
If you are just getting started, it is recommended to use workload identity federation.
### Managed Identities and Workload Identity
## Managed identities and workload identity federation
With Managed Identities (MI), your application can authenticate with Microsoft Entra ID and obtain an access token to make requests to Azure services. When your application is running on a supported Azure service (such as Azure VMs, Azure Container Apps, Azure Web Apps, etc), an identity for your application can be assigned at the infrastructure level.
With Managed Identities (MI), your application can authenticate with Microsoft Entra ID and obtain an access token to make requests to Azure services. When your application is running on a supported Azure service (such as Azure VMs, Azure Container Apps, Azure Web Apps, etc), an identity for your application can be assigned at the infrastructure level. You can also setup Microsoft Entra ID to federate trust to your Dapr application identity directly by using a [Federated Identity Credential](https://learn.microsoft.com/graph/api/resources/federatedidentitycredentials-overview?view=graph-rest-1.0). This allows you to configure access to your Microsoft resources even when not running on Microsoft infrastructure. To see how to configure Dapr to use a federated identity, see the section on [Authenticating with a Federated Identity Credential](#authenticating-with-a-federated-identity-credential).
This is done through [system or user assigned managed identities]({{< ref howto-mi.md >}}), or [workload identity federation]({{< ref howto-wif.md >}}).
Once using MI, your code doesn't have to deal with credentials, which:
Once using managed identities, your code doesn't have to deal with credentials, which:
- Removes the challenge of managing credentials safely
- Allows greater separation of concerns between development and operations teams
- Reduces the number of people with access to credentials
- Simplifies operational aspects, especially when multiple environments are used
Applications running on Azure Kubernetes Service can similarly leverage [Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) to automatically provide an identity to individual pods.
While some Dapr Azure components offer alternative authentication methods, such as systems based on "shared keys" or "access tokens", you should always try to authenticate your Dapr components using Microsoft Entra ID whenever possible. This offers many benefits, including:
- [Role-Based Access Control](#role-based-access-control)
- [Auditing](#auditing)
- [(Optional) Authentication using certificates](#optional-authentication-using-certificates)
It's recommended that applications running on Azure Kubernetes Service leverage [workload identity federation](https://learn.microsoft.com/entra/workload-id/workload-identity-federation) to automatically provide an identity to individual pods.
### Role-Based Access Control
@ -112,6 +114,104 @@ When running on Kubernetes, you can also use references to Kubernetes secrets fo
When running on Azure Kubernetes Service (AKS), you can authenticate components using Workload Identity. Refer to the Azure AKS documentation on [enabling Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) for your Kubernetes resources.
#### Authenticating with a Federated Identity Credential
You can use a [Federated Identity Credential](https://learn.microsoft.com/graph/api/resources/federatedidentitycredentials-overview?view=graph-rest-1.0) in Microsoft Entra ID to federate trust directly to your Dapr installation regardless of where it is running. This allows you to easily configure access rules against your Dapr application's [SPIFFE](https://spiffe.io/) ID consistently across different clouds.
In order to federate trust, you must be running Dapr Sentry with JWT issuing and OIDC discovery enabled. These can be configured using the following Dapr Sentry helm values:
```yaml
jwt:
# Enable JWT token issuance by Sentry
enabled: true
# Issuer value for JWT tokens
issuer: "<your-issuer-domain>"
oidc:
enabled: true
server:
# Port for the OIDC HTTP server
port: 9080
tls:
# Enable TLS for the OIDC HTTP server
enabled: true
# TLS certificate file for the OIDC HTTP server
certFile: "<path-to-tls-cert.pem>"
# TLS key file for the OIDC HTTP server
keyFile: "<path-to-tls-key.pem>"
```
{{% alert title="Warning" color="warning" %}}
The `issuer` value must match exactly the value you provide when creating the Federated Identity Credential in Microsoft Entra ID.
{{% /alert %}}
Providing these settings exposes the following endpoints on your Dapr Sentry installation on the provided OIDC HTTP port:
```
/.well-known/openid-configuration
/jwks.json
```
You also need to provide the Dapr runtime configuration to request a JWT token with the Azure audience `api://AzureADTokenExchange`.
When running in standalone mode, this can be provided using the flag `--sentry-request-jwt-audiences=api://AzureADTokenExchange`.
When running in Kubernetes, this can be provided by decorating the application Kubernetes manifest with the annotation `"dapr.io/sentry-request-jwt-audiences": "api://AzureADTokenExchange"`.
This ensures the Sentry service issues a JWT token with the correct audience, which is required for Microsoft Entra ID to validate the token.
In order for Microsoft Entra ID to access the OIDC endpoints, you must expose them on a public address. The domain you serve these endpoints from must match the issuer you provided when configuring Dapr Sentry.
You can now create your federated credential in Microsoft Entra ID.
```shell
cat > creds.json <<EOF
{
  "name": "DaprAppIDSpiffe",
  "issuer": "https://<your-issuer-domain>",
  "subject": "spiffe://public/ns/<dapr-app-id-namespace>/<dapr-app-id>",
  "audiences": ["api://AzureADTokenExchange"],
  "description": "Credential for Dapr App ID"
}
EOF
export APP_ID=$(az ad app create --display-name my-dapr-app --enable-access-token-issuance --enable-id-token-issuance | jq -r .id)
# Capture the service principal object ID for the role assignment below
export SP_ID=$(az ad sp create --id $APP_ID | jq -r .id)
az ad app federated-credential create --id $APP_ID --parameters ./creds.json
```
Now that you have a federated credential for your Microsoft Entra ID application registration, you can assign the desired roles to its service principal.
An example of assigning the "Storage Blob Data Owner" role is below.
```shell
az role assignment create --assignee-object-id $SP_ID --assignee-principal-type ServicePrincipal --role "Storage Blob Data Owner" --scope "/subscriptions/$SUBSCRIPTION/resourceGroups/$GROUP/providers/Microsoft.Storage/storageAccounts/$ACCOUNT_NAME"
```
To configure a Dapr Component to access an Azure resource using the federated credential, you first need to fetch your `clientId` and `tenantId`:
```shell
CLIENT_ID=$(az ad app show --id $APP_ID --query appId --output tsv)
TENANT_ID=$(az account show --query tenantId --output tsv)
```
Then you can create your Azure Dapr Component and simply provide these values:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azureblob
spec:
  type: state.azure.blobstorage
  version: v2
  initTimeout: 10s # Increase the init timeout to allow enough time for Azure to perform the token exchange
  metadata:
  - name: clientId
    value: $CLIENT_ID
  - name: tenantId
    value: $TENANT_ID
  - name: accountName
    value: $ACCOUNT_NAME
  - name: containerName
    value: $CONTAINER_NAME
```
The Dapr runtime uses these details to authenticate with Microsoft Entra ID, using the Dapr Sentry issued JWT token to exchange for an access token to access the Azure resource.
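Once the component is applied, application code talks to it through the regular Dapr state API and never handles a credential. A short Python sketch using the Dapr SDK against the `azureblob` component defined above (the key and value are arbitrary examples):
```python
from dapr.clients import DaprClient

# The sidecar performs the Entra ID token exchange; no credentials appear here.
with DaprClient() as client:
    client.save_state(store_name="azureblob", key="greeting", value="Hello from Dapr")
    item = client.get_state(store_name="azureblob", key="greeting")
    print(item.data)
```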
#### Authenticating using Azure CLI credentials (development-only)
> **Important:** This authentication method is recommended for **development only**.

View File

@ -0,0 +1,111 @@
---
type: docs
title: "How to: Use workload identity federation"
linkTitle: "How to: Use workload identity federation"
weight: 20000
description: "Learn how to configure Dapr to use workload identity federation on Azure."
---
This guide will help you configure your Kubernetes cluster to run Dapr with Azure workload identity federation.
## What is it?
[Workload identity federation](https://learn.microsoft.com/entra/workload-id/workload-identities-overview)
is a way for your applications to authenticate to Azure without having to store or manage credentials as part of
your releases.
By using workload identity federation, any Dapr components running on Kubernetes and AKS that target Azure can authenticate transparently
with no extra configuration.
## Guide
We'll show how to configure an Azure Key Vault resource against your AKS cluster. You can adapt this guide for different
Dapr Azure components by substituting component definitions as necessary.
For this How To, we'll use this [Dapr AKS secrets sample app](https://github.com/dapr/samples/dapr-aks-workload-identity-federation).
### Prerequisites
- AKS cluster with workload identity enabled
- Microsoft Entra ID tenant
### 1 - Enable workload identity federation
Follow [the Azure documentation for enabling workload identity federation on your AKS cluster](https://learn.microsoft.com/azure/aks/workload-identity-deploy-cluster#deploy-your-application4).
The guide walks through configuring your Microsoft Entra ID tenant to trust an identity that originates from your AKS cluster's issuer.
It also guides you in setting up a [Kubernetes service account](https://kubernetes.io/docs/concepts/security/service-accounts/) which
is associated with an Azure managed identity you create.
Once completed, return here to continue with step 2.
### 2 - Add a secret to Azure Key Vault
In the Azure Key Vault you created, add a secret called `dapr` with the value `Hello Dapr!`.
### 3 - Configure the Azure Key Vault dapr component
By this point, you should have a Kubernetes service account with a name similar to `workload-identity-sa0a1b2c`.
Apply the following to your Kubernetes cluster, remembering to update `your-key-vault` with the name of your key vault:
```yaml
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: demo-secret-store # Be sure not to change this, as our app will be looking for it.
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: your-key-vault # Replace
```
You'll notice that we have not provided any details specific to authentication in the component definition. This is intentional, as Dapr is able to leverage the Kubernetes service account to transparently authenticate to Azure.
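For reference, the sample app's logic amounts to asking the sidecar for the secret. A rough Python equivalent using the Dapr SDK is shown below (the actual sample may be written in a different language):
```python
from dapr.clients import DaprClient

# Reads the "dapr" secret from the "demo-secret-store" component defined above.
# Authentication to Azure Key Vault happens in the sidecar via workload identity.
with DaprClient() as client:
    response = client.get_secret(store_name="demo-secret-store", key="dapr")
    print(f"Fetched Secret: {response.secret['dapr']}")
```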
### 4 - Deploy the test application
Go to the [workload identity federation sample application](https://github.com/dapr/samples/dapr-aks-workload-identity-federation) and build the container image.
Make sure the image is pushed to a registry that your AKS cluster can reach and has permission to pull from.
Next, create a deployment for our sample AKS secrets app container along with a Dapr sidecar.
Remember to update `dapr-wif-k8s-service-account` with your service account name and `dapraksworkloadidentityfederation` with an image your cluster can resolve:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-dapr-wif-secrets
  labels:
    app: aks-dapr-wif-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-dapr-wif-secrets
  template:
    metadata:
      labels:
        app: aks-dapr-wif-secrets
        azure.workload.identity/use: "true" # Important
      annotations:
        dapr.io/enabled: "true" # Enable Dapr
        dapr.io/app-id: "aks-dapr-wif-secrets"
    spec:
      serviceAccountName: dapr-wif-k8s-service-account # Remember to replace
      containers:
      - name: workload-id-demo
        image: dapraksworkloadidentityfederation # Remember to replace
        imagePullPolicy: Always
```
Once the application is up and running, it should output the following:
```
Fetched Secret: Hello Dapr!
```

View File

@ -6,8 +6,12 @@ weight: 10000
description: "How to develop and run Dapr applications with the Dapr extension"
---
{{% alert title="Deprecation notice" color="primary" %}}
The extension was previously supported by Microsoft, but is now deprecated. The extension will remain available in the Visual Studio Code marketplace, but it will no longer receive updates or support.
Dapr offers a *preview* [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) for local development, which provides a variety of features for managing and debugging Dapr applications in all supported Dapr languages: .NET, Go, PHP, Python, and Java.
{{% /alert %}}
The *deprecated* [Dapr Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-dapr) for local development provides a variety of features for managing and debugging Dapr applications in all supported Dapr languages: .NET, Go, PHP, Python, and Java.
<a href="vscode:extension/ms-azuretools.vscode-dapr" class="btn btn-primary" role="button">Open in VSCode</a>

View File

@ -8,6 +8,11 @@ aliases:
- /developing-applications/ides/vscode/vscode-manual-configuration/
---
{{% alert title="Deprecation notice" color="primary" %}}
The extension was previously supported by Microsoft, but is now deprecated. The extension will remain available in the Visual Studio Code marketplace, but it will no longer receive updates or support.
{{% /alert %}}
## Manual debugging
When developing Dapr applications, you typically use the Dapr CLI to start your daprized service similar to this:

View File

@ -6,6 +6,11 @@ weight: 30000
description: "How to setup a containerized development environment with Dapr"
---
{{% alert title="Deprecation notice" color="primary" %}}
The extension was previously supported by Microsoft, but is now deprecated. The extension will remain available in the Visual Studio Code marketplace, but it will no longer receive updates or support.
{{% /alert %}}
The Visual Studio Code [Dev Containers extension](https://code.visualstudio.com/docs/remote/containers) lets you use a self-contained Docker container as a complete development environment, without installing any additional packages, libraries, or utilities in your local filesystem.
Dapr has pre-built Dev Containers for C# and JavaScript/TypeScript; you can pick the one of your choice for a ready-made environment. Note these pre-built containers automatically update to the latest Dapr release.

View File

@ -81,21 +81,14 @@ Content-Length: 12
client.saveState("MyStateStore", "MyKey", "My Message").block();
```
In this example, `My Message` is saved. It is not quoted because Dapr's API internally parses the JSON request object before saving it.
{{% /tab %}}
{{< /tabpane >}}
In this example, `My Message` is saved. It is not quoted because Dapr's API internally parses the JSON request object before saving it.
```JSON
[
{
"key": "MyKey",
"value": "My Message"
}
]
```
In this example, `My Message` is saved. It is not quoted because Dapr's API internally serializes the string before
serving it.
## PubSub
@ -109,7 +102,7 @@ object before saving it.
await client.PublishEventAsync("MyPubSubName", "TopicName", "My Message");
```
The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. The Dapr SDK also provides a built-in deserializer for `CloudEvent` object.
The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as string. The Dapr SDK also provides a built-in deserializer for the `CloudEvent` object.
```csharp
public async Task<IActionResult> HandleMessage(string message)

View File

@ -1756,8 +1756,11 @@ import (
"log"
"time"
"github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/workflow"
"github.com/dapr/durabletask-go/api"
"github.com/dapr/durabletask-go/backend"
"github.com/dapr/durabletask-go/client"
"github.com/dapr/durabletask-go/task"
dapr "github.com/dapr/go-sdk/client"
)
var (
@ -1771,41 +1774,35 @@ func main() {
fmt.Println("*** Welcome to the Dapr Workflow console app sample!")
fmt.Println("*** Using this app, you can place orders that start workflows.")
w, err := workflow.NewWorker()
if err != nil {
log.Fatalf("failed to start worker: %v", err)
}
registry := task.NewTaskRegistry()
if err := w.RegisterWorkflow(OrderProcessingWorkflow); err != nil {
if err := registry.AddOrchestrator(OrderProcessingWorkflow); err != nil {
log.Fatal(err)
}
if err := w.RegisterActivity(NotifyActivity); err != nil {
if err := registry.AddActivity(NotifyActivity); err != nil {
log.Fatal(err)
}
if err := w.RegisterActivity(RequestApprovalActivity); err != nil {
if err := registry.AddActivity(RequestApprovalActivity); err != nil {
log.Fatal(err)
}
if err := w.RegisterActivity(VerifyInventoryActivity); err != nil {
if err := registry.AddActivity(VerifyInventoryActivity); err != nil {
log.Fatal(err)
}
if err := w.RegisterActivity(ProcessPaymentActivity); err != nil {
if err := registry.AddActivity(ProcessPaymentActivity); err != nil {
log.Fatal(err)
}
if err := w.RegisterActivity(UpdateInventoryActivity); err != nil {
if err := registry.AddActivity(UpdateInventoryActivity); err != nil {
log.Fatal(err)
}
if err := w.Start(); err != nil {
log.Fatal(err)
daprClient, err := dapr.NewClient()
if err != nil {
log.Fatalf("failed to create Dapr client: %v", err)
}
daprClient, err := client.NewClient()
if err != nil {
log.Fatalf("failed to initialise dapr client: %v", err)
}
wfClient, err := workflow.NewClient(workflow.WithDaprClient(daprClient))
if err != nil {
log.Fatalf("failed to initialise workflow client: %v", err)
client := client.NewTaskHubGrpcClient(daprClient.GrpcClientConn(), backend.DefaultLogger())
if err := client.StartWorkItemListener(context.TODO(), registry); err != nil {
log.Fatalf("failed to start work item listener: %v", err)
}
inventory := []InventoryItem{
@ -1830,19 +1827,21 @@ func main() {
TotalCost: totalCost,
}
id, err := wfClient.ScheduleNewWorkflow(context.Background(), workflowName, workflow.WithInput(orderPayload))
id, err := client.ScheduleNewOrchestration(context.TODO(), workflowName,
api.WithInput(orderPayload),
)
if err != nil {
log.Fatalf("failed to start workflow: %v", err)
}
waitCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
_, err = wfClient.WaitForWorkflowCompletion(waitCtx, id)
cancel()
defer cancel()
_, err = client.WaitForOrchestrationCompletion(waitCtx, id)
if err != nil {
log.Fatalf("failed to wait for workflow: %v", err)
}
respFetch, err := wfClient.FetchWorkflowMetadata(context.Background(), id, workflow.WithFetchPayloads(true))
respFetch, err := client.FetchOrchestrationMetadata(context.Background(), id, api.WithFetchPayloads(true))
if err != nil {
log.Fatalf("failed to get workflow: %v", err)
}
@ -1852,7 +1851,7 @@ func main() {
fmt.Println("Purchase of item is complete")
}
func restockInventory(daprClient client.Client, inventory []InventoryItem) error {
func restockInventory(daprClient dapr.Client, inventory []InventoryItem) error {
for _, item := range inventory {
itemSerialized, err := json.Marshal(item)
if err != nil {
@ -1865,7 +1864,6 @@ func restockInventory(daprClient client.Client, inventory []InventoryItem) error
}
return nil
}
```
#### `order-processor/workflow.go`
@ -1881,18 +1879,18 @@ import (
"log"
"time"
"github.com/dapr/durabletask-go/task"
"github.com/dapr/go-sdk/client"
"github.com/dapr/go-sdk/workflow"
)
// OrderProcessingWorkflow is the main workflow for orchestrating activities in the order process.
func OrderProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) {
orderID := ctx.InstanceID()
func OrderProcessingWorkflow(ctx *task.OrchestrationContext) (any, error) {
orderID := ctx.ID
var orderPayload OrderPayload
if err := ctx.GetInput(&orderPayload); err != nil {
return nil, err
}
err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{
err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(Notification{
Message: fmt.Sprintf("Received order %s for %d %s - $%d", orderID, orderPayload.Quantity, orderPayload.ItemName, orderPayload.TotalCost),
})).Await(nil)
if err != nil {
@ -1900,8 +1898,8 @@ func OrderProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) {
}
var verifyInventoryResult InventoryResult
if err := ctx.CallActivity(VerifyInventoryActivity, workflow.ActivityInput(InventoryRequest{
RequestID: orderID,
if err := ctx.CallActivity(VerifyInventoryActivity, task.WithActivityInput(InventoryRequest{
RequestID: string(orderID),
ItemName: orderPayload.ItemName,
Quantity: orderPayload.Quantity,
})).Await(&verifyInventoryResult); err != nil {
@ -1910,64 +1908,64 @@ func OrderProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) {
if !verifyInventoryResult.Success {
notification := Notification{Message: fmt.Sprintf("Insufficient inventory for %s", orderPayload.ItemName)}
err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(notification)).Await(nil)
err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(notification)).Await(nil)
return OrderResult{Processed: false}, err
}
if orderPayload.TotalCost > 5000 {
var approvalRequired ApprovalRequired
if err := ctx.CallActivity(RequestApprovalActivity, workflow.ActivityInput(orderPayload)).Await(&approvalRequired); err != nil {
if err := ctx.CallActivity(RequestApprovalActivity, task.WithActivityInput(orderPayload)).Await(&approvalRequired); err != nil {
return OrderResult{Processed: false}, err
}
if err := ctx.WaitForExternalEvent("manager_approval", time.Second*200).Await(nil); err != nil {
if err := ctx.WaitForSingleEvent("manager_approval", time.Second*200).Await(nil); err != nil {
return OrderResult{Processed: false}, err
}
// TODO: Confirm timeout flow - this will be in the form of an error.
if approvalRequired.Approval {
if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Payment for order %s has been approved!", orderID)})).Await(nil); err != nil {
if err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(Notification{Message: fmt.Sprintf("Payment for order %s has been approved!", orderID)})).Await(nil); err != nil {
log.Printf("failed to notify of a successful order: %v\n", err)
}
} else {
if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Payment for order %s has been rejected!", orderID)})).Await(nil); err != nil {
if err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(Notification{Message: fmt.Sprintf("Payment for order %s has been rejected!", orderID)})).Await(nil); err != nil {
log.Printf("failed to notify of an unsuccessful order :%v\n", err)
}
return OrderResult{Processed: false}, err
}
}
err = ctx.CallActivity(ProcessPaymentActivity, workflow.ActivityInput(PaymentRequest{
RequestID: orderID,
err = ctx.CallActivity(ProcessPaymentActivity, task.WithActivityInput(PaymentRequest{
RequestID: string(orderID),
ItemBeingPurchased: orderPayload.ItemName,
Amount: orderPayload.TotalCost,
Quantity: orderPayload.Quantity,
})).Await(nil)
if err != nil {
if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Order %s failed!", orderID)})).Await(nil); err != nil {
if err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(Notification{Message: fmt.Sprintf("Order %s failed!", orderID)})).Await(nil); err != nil {
log.Printf("failed to notify of a failed order: %v", err)
}
return OrderResult{Processed: false}, err
}
err = ctx.CallActivity(UpdateInventoryActivity, workflow.ActivityInput(PaymentRequest{
RequestID: orderID,
err = ctx.CallActivity(UpdateInventoryActivity, task.WithActivityInput(PaymentRequest{
RequestID: string(orderID),
ItemBeingPurchased: orderPayload.ItemName,
Amount: orderPayload.TotalCost,
Quantity: orderPayload.Quantity,
})).Await(nil)
if err != nil {
if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Order %s failed!", orderID)})).Await(nil); err != nil {
if err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(Notification{Message: fmt.Sprintf("Order %s failed!", orderID)})).Await(nil); err != nil {
log.Printf("failed to notify of a failed order: %v", err)
}
return OrderResult{Processed: false}, err
}
if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput(Notification{Message: fmt.Sprintf("Order %s has completed!", orderID)})).Await(nil); err != nil {
if err := ctx.CallActivity(NotifyActivity, task.WithActivityInput(Notification{Message: fmt.Sprintf("Order %s has completed!", orderID)})).Await(nil); err != nil {
log.Printf("failed to notify of a successful order: %v", err)
}
return OrderResult{Processed: true}, err
}
// NotifyActivity outputs a notification message
func NotifyActivity(ctx workflow.ActivityContext) (any, error) {
func NotifyActivity(ctx task.ActivityContext) (any, error) {
var input Notification
if err := ctx.GetInput(&input); err != nil {
return "", err
@ -1977,7 +1975,7 @@ func NotifyActivity(ctx workflow.ActivityContext) (any, error) {
}
// ProcessPaymentActivity is used to process a payment
func ProcessPaymentActivity(ctx workflow.ActivityContext) (any, error) {
func ProcessPaymentActivity(ctx task.ActivityContext) (any, error) {
var input PaymentRequest
if err := ctx.GetInput(&input); err != nil {
return "", err
@ -1987,7 +1985,7 @@ func ProcessPaymentActivity(ctx workflow.ActivityContext) (any, error) {
}
// VerifyInventoryActivity is used to verify if an item is available in the inventory
func VerifyInventoryActivity(ctx workflow.ActivityContext) (any, error) {
func VerifyInventoryActivity(ctx task.ActivityContext) (any, error) {
var input InventoryRequest
if err := ctx.GetInput(&input); err != nil {
return nil, err
@ -2019,7 +2017,7 @@ func VerifyInventoryActivity(ctx workflow.ActivityContext) (any, error) {
}
// UpdateInventoryActivity modifies the inventory.
func UpdateInventoryActivity(ctx workflow.ActivityContext) (any, error) {
func UpdateInventoryActivity(ctx task.ActivityContext) (any, error) {
var input PaymentRequest
if err := ctx.GetInput(&input); err != nil {
return nil, err
@ -2053,7 +2051,7 @@ func UpdateInventoryActivity(ctx workflow.ActivityContext) (any, error) {
}
// RequestApprovalActivity requests approval for the order
func RequestApprovalActivity(ctx workflow.ActivityContext) (any, error) {
func RequestApprovalActivity(ctx task.ActivityContext) (any, error) {
var input OrderPayload
if err := ctx.GetInput(&input); err != nil {
return nil, err
@ -2061,7 +2059,6 @@ func RequestApprovalActivity(ctx workflow.ActivityContext) (any, error) {
fmt.Printf("RequestApprovalActivity: Requesting approval for payment of %dUSD for %d %s\n", input.TotalCost, input.Quantity, input.ItemName)
return ApprovalRequired{Approval: true}, nil
}
```
{{% /tab %}}

View File

@ -7,7 +7,6 @@ description: "Define secret scopes by augmenting the existing configuration reso
description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions."
---
In addition to [scoping which applications can access a given component]({{% ref "component-scopes.md"%}}), you can also scope a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets.
In addition to [scoping which applications can access a given component]({{% ref "component-scopes.md"%}}), you can also scope a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets.
For more information about configuring a Configuration resource:

View File

@ -113,6 +113,29 @@ You should see the following response:
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k` in your terminal. To get started, go here: https://docs.dapr.io/getting-started
```
## IAM Roles for Service Accounts (IRSA)
You can attach custom annotations to the ServiceAccounts created by the `dapr_rbac` Helm subchart—useful for enabling IAM Roles for Service Accounts (IRSA) on AWS EKS.
This enables fine-grained, secure access control for Dapr components using EKS's IRSA mechanism.
Update your Dapr Helm values files to include the following necessary annotations for the ServiceAccounts.
See [here]({{% ref authenticating-aws.md %}}) for more information on AWS authentication.
```yaml
serviceAccount:
operator:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/operator-role
injector:
annotations: {}
placement:
annotations: {}
scheduler:
annotations: {}
sentry:
annotations: {}
```
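After updating your values file, apply it with a Helm upgrade so the new ServiceAccount annotations take effect. This is a sketch only; the release name `dapr`, the `dapr/dapr` chart, and the `values.yaml` file name are assumptions:

```bash
# Apply the updated values to an existing Dapr installation (release and values file names assumed)
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --values values.yaml \
  --wait
```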
## Troubleshooting
### Access permissions

View File

@ -6,8 +6,8 @@ weight: 50000
description: "Configure Scheduler to persist its database to make it resilient to restarts"
---
The [Scheduler]({{% ref scheduler.md %}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution.
By default, the Scheduler service database writes data to a Persistent Volume Claim volume of size `1Gb`, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
The [Scheduler]({{% ref scheduler.md %}}) service is responsible for writing jobs to its Etcd database and scheduling them for execution.
By default, the Scheduler service embeds Etcd and writes its data to a Persistent Volume Claim of size `1Gb`, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
This means that no additional parameter is required to run the Scheduler service reliably on most Kubernetes deployments, although you will need [additional configuration](#storage-class) if a default StorageClass is not available or when running in a production environment (see the sketch below).
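For example, if your cluster has no default StorageClass, you can point the Scheduler at a specific one when installing or upgrading the Dapr Helm chart. This is a sketch only; verify the exact chart value names for your Dapr version before using it:

```bash
# Sketch: set an explicit StorageClass for the Scheduler's data volume (chart value name assumed)
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --set dapr_scheduler.cluster.storageClassName=my-storage-class \
  --wait
```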
{{% alert title="Warning" color="warning" %}}
@ -85,6 +85,14 @@ kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-
Persistent Volume Claims are not deleted automatically with an [uninstall]({{< ref dapr-uninstall.md >}}). This is a deliberate safety measure to prevent accidental data loss.
{{% /alert %}}
{{% alert title="Note" color="primary" %}}
For storage providers that do NOT support dynamic volume expansion: if Dapr has ever been installed on the cluster before, the Scheduler's Persistent Volume Claims must be manually deleted in order for new ones with an increased storage size to be created.
```bash
kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2
```
Persistent Volume Claims are not deleted automatically with an [uninstall]({{< ref dapr-uninstall.md >}}). This is a deliberate safety measure to prevent accidental data loss.
{{% /alert %}}
#### Increase existing Scheduler Storage Size
{{% alert title="Warning" color="warning" %}}

View File

@ -343,10 +343,20 @@ By default, the Dapr sidecar injector injects a sidecar without any `seccompProf
Refer to [the Arguments and Annotations overview]({{% ref "arguments-annotations-overview.md" %}}) to set the appropriate `seccompProfile` on the sidecar container.
## Best Practices
## Run as non-root
When running in Kubernetes, Dapr services ensure each process is running as non-root.
This is done by checking that the UID and GID of the process are `65532`, and exiting with a fatal error if they are not.
If you must run with a non-default UID and GID in Kubernetes, set the following environment variable to skip this check.
```bash
DAPR_UNSAFE_SKIP_CONTAINER_UID_GID_CHECK="true"
```
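As a hedged sketch, one way to set this variable is with `kubectl set env` on a Dapr Deployment; the target deployment `dapr-operator` and the `dapr-system` namespace are assumptions and should match whichever Dapr service runs with the non-default UID and GID:

```bash
# Example only: target deployment and namespace are assumptions
kubectl set env deployment/dapr-operator -n dapr-system \
  DAPR_UNSAFE_SKIP_CONTAINER_UID_GID_CHECK="true"
```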
## Best Practices
Watch this video for a deep dive into the best practices for running Dapr in production with Kubernetes.
{{< youtube id=_U9wJqq-H1g >}}
## Related links

View File

@ -6,7 +6,7 @@ weight: 50000
description: "Configure Scheduler to persist its database to make it resilient to restarts"
---
The [Scheduler]({{% ref scheduler.md %}}) service is responsible for writing jobs to its embedded database and scheduling them for execution.
The [Scheduler]({{% ref scheduler.md %}}) service is responsible for writing jobs to its Etcd database and scheduling them for execution.
By default, the Scheduler service database writes this data to the local volume `dapr_scheduler`, meaning that **this data is persisted across restarts**.
The host file location for this local volume is typically located at either `/var/lib/docker/volumes/dapr_scheduler/_data` or `~/.local/share/containers/storage/volumes/dapr_scheduler/_data`, depending on your container runtime.

View File

@ -16,3 +16,9 @@ description: See and measure the message calls to components and between network
- Review the [Observability API reference documentation]({{% ref health_api.md %}}).
- Read the [general overview of the observability concept]({{% ref observability-concept %}}) in Dapr.
{{% /alert %}}
{{% alert title="Dapr Observability in Action!" color="primary" %}}
Dapr has a public Grafana dashboard demonstrating observability in action on the longhaul testing environment.
- [Dapr Public Grafana Dashboard](https://dapr.grafana.net/public-dashboards/86d748b233804e74a16d8243b4b64e18)
- Read more about: [Longhaul performance and stability]({{% ref perf-longhaul.md %}})
{{% /alert %}}

View File

@ -0,0 +1,205 @@
---
type: docs
title: "How-To: Set up Dash0 for distributed tracing"
linkTitle: "Dash0"
weight: 5000
description: "Set up Dash0 for distributed tracing"
---
Dapr captures metrics, traces, and logs that can be sent directly to Dash0 through the OpenTelemetry Collector. Dash0 is an OpenTelemetry-native observability platform that provides comprehensive monitoring capabilities for distributed applications.
## Configure Dapr tracing with the OpenTelemetry Collector and Dash0
By using the OpenTelemetry Collector with the OTLP exporter to send data to Dash0, you can configure Dapr to create traces for each application in your Kubernetes cluster and collect them in Dash0 for analysis and monitoring.
## Prerequisites
* A running Kubernetes cluster with `kubectl` installed
* Helm v3+
* [Dapr installed in the cluster](https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-deploy/)
* A Dash0 account ([Get started with a 14-day free trial](https://www.dash0.com/pricing))
* Your Dash0 **Auth Token** and **OTLP/gRPC endpoint** (find both under **Settings → Auth Tokens** and **Settings → Endpoints**)
## Configure the OpenTelemetry Collector
1) Create a namespace for the Collector
```bash
kubectl create namespace opentelemetry
```
2) Create a Secret with your Dash0 **Auth Token** and **Endpoint**
```bash
kubectl create secret generic dash0-secrets \
--from-literal=dash0-authorization-token="<your_auth_token>" \
--from-literal=dash0-endpoint="<your_otlp_grpc_endpoint>" \
--namespace opentelemetry
```
3) Add the OpenTelemetry Helm repo (once)
```bash
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```
4) Create `values.yaml` for the Collector
This config:
* Reads token + endpoint from the Secret via env vars
* Enables OTLP receivers (gRPC + HTTP)
* Sends **traces, metrics, and logs** to Dash0 via OTLP/gRPC with Bearer auth
```yaml
mode: deployment
fullnameOverride: otel-collector
replicaCount: 1
image:
repository: otel/opentelemetry-collector-k8s
extraEnvs:
- name: DASH0_AUTHORIZATION_TOKEN
valueFrom:
secretKeyRef:
name: dash0-secrets
key: dash0-authorization-token
- name: DASH0_ENDPOINT
valueFrom:
secretKeyRef:
name: dash0-secrets
key: dash0-endpoint
config:
receivers:
otlp:
protocols:
grpc: {}
http: {}
processors:
batch: {}
exporters:
otlp/dash0:
auth:
authenticator: bearertokenauth/dash0
endpoint: ${env:DASH0_ENDPOINT}
extensions:
bearertokenauth/dash0:
scheme: Bearer
token: ${env:DASH0_AUTHORIZATION_TOKEN}
health_check: {}
service:
extensions:
- bearertokenauth/dash0
- health_check
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/dash0]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [otlp/dash0]
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlp/dash0]
```
5) Install/upgrade the Collector with Helm
```bash
helm upgrade --install otel-collector open-telemetry/opentelemetry-collector \
--namespace opentelemetry \
-f values.yaml
```
## Configure Dapr to send telemetry to the Collector
1) Create a configuration
Create `dapr-config.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
otel:
endpointAddress: "otel-collector.opentelemetry.svc.cluster.local:4317"
isSecure: false
protocol: grpc
```
Apply it:
```bash
kubectl apply -f dapr-config.yaml
```
2) Annotate your application(s)
In each Deployment/Pod you want traced by Dapr, add:
```yaml
metadata:
annotations:
dapr.io/config: "tracing"
```
## Verify the setup
1. Check that the OpenTelemetry Collector is running:
```bash
kubectl get pods -n opentelemetry
```
2. Check the collector logs to ensure it's receiving and forwarding telemetry:
```bash
kubectl logs -n opentelemetry deployment/otel-collector
```
3. Deploy a sample application with Dapr tracing enabled and generate some traffic to verify traces are being sent to Dash0. You can use the [Dapr Kubernetes quickstart tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) for testing.
## Viewing traces
Once your setup is complete and telemetry data is flowing, you can view traces in Dash0:
1. Navigate to your Dash0 account
2. Go to the **Traces** section
3. You should see distributed traces from your Dapr applications
4. Use filters to narrow down traces by service name, operation, or time range
<img src="/images/dash0-dapr-trace-overview.png" width=1200 alt="Dash0 Trace Overview">
<img src="/images/dash0-dapr-trace.png" width=1200 alt="Dash0 Trace Details">
## Cleanup
```bash
helm -n opentelemetry uninstall otel-collector
kubectl -n opentelemetry delete secret dash0-secrets
kubectl delete ns opentelemetry
```
## Related Links
* [Dapr Kubernetes quickstart tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
* [Dapr observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)
* [Dash0 documentation](https://www.dash0.com/docs)
* [OpenTelemetry Collector documentation](https://opentelemetry.io/docs/collector/)

View File

@ -0,0 +1,139 @@
---
type: docs
title: "Using Dynatrace OpenTelemetry Collector to collect traces to send to Dynatrace"
linkTitle: "Using the Dynatrace OpenTelemetry Collector"
weight: 1000
description: "How to push trace events to Dynatrace, using the Dynatrace OpenTelemetry Collector."
---
Dapr integrates with the [Dynatrace Collector](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/collector) using the OpenTelemetry protocol (OTLP). This guide walks through an example using Dapr to push traces to Dynatrace, using the Dynatrace version of the OpenTelemetry Collector.
{{% alert title="Note" color="primary" %}}
This guide refers to the Dynatrace OpenTelemetry Collector, which uses the same Helm chart as the open-source collector but overridden with the Dynatrace-maintained image for better support and Dynatrace-specific features.
{{% /alert %}}
## Prerequisites
- [Install Dapr on Kubernetes]({{< ref kubernetes >}})
- Access to a Dynatrace tenant and an API token with `openTelemetryTrace.ingest`, `metrics.ingest`, and `logs.ingest` scopes
- Helm
## Set up Dynatrace OpenTelemetry Collector to push to your Dynatrace instance
To push traces to your Dynatrace instance, install the Dynatrace OpenTelemetry Collector on your Kubernetes cluster.
1. Create a Kubernetes secret with your Dynatrace credentials:
```sh
kubectl create secret generic dynatrace-otelcol-dt-api-credentials \
--from-literal=DT_ENDPOINT=https://YOUR_TENANT.live.dynatrace.com/api/v2/otlp \
--from-literal=DT_API_TOKEN=dt0s01.YOUR_TOKEN_HERE
```
Replace `YOUR_TENANT` with your Dynatrace tenant ID and `YOUR_TOKEN_HERE` with your Dynatrace API token.
1. Use the Dynatrace OpenTelemetry Collector distribution for better defaults and support than the open source version. Download and inspect the [`collector-helm-values.yaml`](https://github.com/Dynatrace/dynatrace-otel-collector/blob/main/config_examples/collector-helm-values.yaml) file. This is based on the [k8s enrichment demo](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/collector/use-cases/kubernetes/k8s-enrich#demo-configuration) and includes Kubernetes metadata enrichment for proper pod/namespace/cluster context.
1. Deploy the Dynatrace Collector with Helm.
```sh
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm upgrade -i dynatrace-collector open-telemetry/opentelemetry-collector -f collector-helm-values.yaml
```
## Set up Dapr to send traces to the Dynatrace Collector
Create a Dapr configuration file to enable tracing and send traces to the OpenTelemetry Collector via [OTLP](https://opentelemetry.io/docs/specs/otel/protocol/).
1. Save the following configuration as `collector-config-otel.yaml`, ensuring the `endpointAddress` points to your Dynatrace OpenTelemetry Collector service in your Kubernetes cluster. If deployed in the `default` namespace, it's typically `dynatrace-collector.default.svc.cluster.local`.
**Important:** Ensure the `endpointAddress` does NOT include the `http://` prefix to avoid URL encoding issues:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
spec:
tracing:
samplingRate: "1"
otel:
endpointAddress: "dynatrace-collector.default.svc.cluster.local:4318" # Update with your collector's service address
```
1. Apply the configuration with:
```sh
kubectl apply -f collector-config-otel.yaml
```
## Deploy your app with tracing
Apply the `tracing` configuration by adding a `dapr.io/config` annotation to the Dapr applications that you want to include in distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
...
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "MyApp"
dapr.io/app-port: "8080"
dapr.io/config: "tracing"
```
{{% alert title="Note" color="primary" %}}
If you are using one of the Dapr tutorials, such as [distributed calculator](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator), you will need to update the `appconfig` configuration to `tracing`.
{{% /alert %}}
You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
## View traces
Deploy and run some applications. After a few minutes, you should see traces appearing in your Dynatrace tenant:
1. Navigate to **Search > Distributed tracing** in your Dynatrace UI.
2. Filter by service names to see your Dapr applications and their associated tracing spans.
<img src="/images/open-telemetry-collector-dynatrace-traces.png" width=1200 alt="Dynatrace showing tracing data.">
{{% alert title="Note" color="primary" %}}
Only operations going through Dapr API exposed by Dapr sidecar (for example, service invocation or event publishing) are displayed in Dynatrace distributed traces.
{{% /alert %}}
{{% alert title="Disable OneAgent daprd monitoring" color="warning" %}}
If you are running Dynatrace OneAgent in your cluster, you should exclude the `daprd` sidecar container from OneAgent monitoring to prevent interference with this configuration. Excluding it prevents any automatic injection attempts that could break functionality or result in confusing traces.
Add this annotation to your application deployments or globally in your dynakube configuration file:
```yaml
metadata:
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "MyApp"
dapr.io/app-port: "8080"
dapr.io/config: "tracing"
container.inject.dynatrace.com/daprd: "false" # Exclude dapr sidecar from being auto-monitored by OneAgent
```
{{% /alert %}}
## Related links
- Try out the [observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability/README.md)
- Learn how to set [tracing configuration options]({{< ref "configuration-overview.md#tracing" >}})
- [Dynatrace OpenTelemetry documentation](https://docs.dynatrace.com/docs/ingest-from/opentelemetry)
- Enrich OTLP telemetry data [with Kubernetes metadata](https://docs.dynatrace.com/docs/ingest-from/opentelemetry/collector/use-cases/kubernetes/k8s-enrich)

View File

@ -77,4 +77,5 @@ Learn how to set up tracing with one of the following tools:
- [New Relic]({{% ref newrelic.md %}})
- [Jaeger]({{% ref open-telemetry-collector-jaeger.md %}})
- [Zipkin]({{% ref zipkin.md %}})
- [Datadog]({{% ref datadog.md %}})
- [Datadog]({{% ref datadog.md %}})
- [Dash0]({{% ref dash0.md %}})

View File

@ -111,6 +111,138 @@ If you decide to generate trace headers yourself, there are three ways this can
Read [the trace context overview]({{% ref w3c-tracing-overview %}}) for more background and examples on W3C trace context and headers.
### Baggage Support
Dapr supports two distinct mechanisms for propagating W3C Baggage alongside trace context:
1. **Context Baggage (OpenTelemetry)**
- Follows OpenTelemetry conventions with decoded values
- Used when working with OpenTelemetry context propagation
- Values are stored and transmitted in their original, unencoded form
- Recommended for OpenTelemetry integrations and when working with application context
2. **Header/Metadata Baggage**
- You must URL encode special characters (for example, `%20` for spaces, `%2F` for slashes) when setting header/metadata baggage
- Values remain percent-encoded in transport as required by the W3C Baggage spec
- Values stay encoded when inspecting raw headers/metadata
- Only OpenTelemetry APIs will decode the values
- Example: Use `serverNode=DF%2028` (not `serverNode=DF 28`) when setting header baggage
For security purposes, context baggage and header baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties.
#### Using Baggage with Dapr
You can propagate baggage using either mechanism, depending on your use case.
1. **In your application code**: Set the baggage in the context before making a Dapr API call
2. **When calling Dapr**: Pass the context to any Dapr API call
3. **Inside Dapr**: The Dapr runtime automatically picks up the baggage
4. **Propagation**: Dapr automatically propagates the baggage to downstream services, maintaining the appropriate encoding for each mechanism
Here are examples of both mechanisms:
**1. Using Context Baggage (OpenTelemetry)**
When using OpenTelemetry SDK:
{{< tabpane text=true >}}
{{% tab header="Go" %}}
```go
import (
    "context"

    otelbaggage "go.opentelemetry.io/otel/baggage"
)

// Set baggage in context (values remain unencoded)
baggage, err := otelbaggage.Parse("userId=cassie,serverNode=DF%2028")
...
ctx := otelbaggage.ContextWithBaggage(context.Background(), baggage)
// Pass this context to any Dapr API call
client.InvokeMethodWithContent(ctx, "serviceB", ...)
```
**2. Using Header/Metadata Baggage**
When using gRPC metadata:
```go
import "google.golang.org/grpc/metadata"
// Set URL-encoded baggage in context
ctx = metadata.AppendToOutgoingContext(ctx,
"baggage", "userId=cassie,serverNode=DF%2028",
)
// Pass this context to any Dapr API call
client.InvokeMethodWithContent(ctx, "serviceB", ...)
```
**3. Receiving Baggage in Target Service**
In your target service, you can access the propagated baggage:
```go
// Using OpenTelemetry (values are automatically decoded)
import "go.opentelemetry.io/otel/baggage"
bag := baggage.FromContext(ctx)
userID := bag.Member("userId").Value() // "cassie"
```
```go
// Using raw gRPC metadata (values remain percent-encoded)
import "google.golang.org/grpc/metadata"
md, _ := metadata.FromIncomingContext(ctx)
if values := md.Get("baggage"); len(values) > 0 {
// values[0] contains the percent-encoded string you set: "userId=cassie,serverNode=DF%2028"
// Remember: You must URL encode special characters when setting baggage
// To decode the values, use OpenTelemetry APIs:
bag, err := baggage.Parse(values[0])
...
userID := bag.Member("userId").Value() // "cassie"
}
```
*HTTP Example (URL-encoded):*
```bash
curl -X POST http://localhost:3500/v1.0/invoke/serviceB/method/hello \
-H "Content-Type: application/json" \
-H "baggage: userID=cassie,serverNode=DF%2028" \
-d '{"message": "Hello service B"}'
```
*gRPC Example (URL-encoded):*
```go
ctx = grpcMetadata.AppendToOutgoingContext(ctx,
"baggage", "userID=cassie,serverNode=DF%2028",
)
```
{{% /tab %}}
{{< /tabpane >}}
#### Common Use Cases
Baggage is useful for:
- Propagating user IDs or correlation IDs across services
- Passing tenant or environment information
- Maintaining consistent context across service boundaries
- Debugging and troubleshooting distributed transactions
#### Best Practices
1. **Choose the Right Mechanism**
- Use Context Baggage when working with OpenTelemetry
- Use Header Baggage when working directly with HTTP/gRPC
2. **Security Considerations**
- Be mindful that baggage is propagated across service boundaries
- Don't include sensitive information in baggage
- Remember that context and header baggage remain separate
## Related Links
- [Observability concepts]({{% ref observability-concept.md %}})

View File

@ -48,7 +48,7 @@ When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it
These are the specific trace context headers that are generated and propagated by Dapr for HTTP and gRPC.
{{< tabpane text=true >}}
<!-- HTTP -->
{{% tab "HTTP" %}}
Copy these headers when propagating a trace context header from an HTTP response to an HTTP request:
@ -73,14 +73,67 @@ tracestate: congo=t61rcWkgMzE
[Learn more about the tracestate fields details](https://www.w3.org/TR/trace-context/#tracestate-header).
**Baggage Support**
Dapr supports [W3C Baggage](https://www.w3.org/TR/baggage/) for propagating key-value pairs alongside trace context through two distinct mechanisms:
1. **Context Baggage (OpenTelemetry)**
- Follows OpenTelemetry conventions with decoded values
- Used when propagating baggage through application context
- Values are stored in their original, unencoded form
- Example of how it would be printed with OpenTelemetry APIs:
```
baggage: userId=cassie,serverNode=DF 28,isVIP=true
```
2. **HTTP Header Baggage**
- You must URL encode special characters (for example, `%20` for spaces, `%2F` for slashes) when setting header baggage
- Values remain percent-encoded in HTTP headers as required by the W3C Baggage spec
- Values stay encoded when inspecting raw headers in Dapr
- Only OpenTelemetry APIs like `otelbaggage.Parse()` will decode the values
- Example (note the URL-encoded space `%20`):
```bash
curl -X POST http://localhost:3500/v1.0/invoke/serviceB/method/hello \
-H "Content-Type: application/json" \
-H "baggage: userId=cassie,serverNode=DF%2028,isVIP=true" \
-d '{"message": "Hello service B"}'
```
For security purposes, context baggage and header baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties in each domain.
Multiple baggage headers are supported and will be combined according to the W3C specification. Dapr automatically propagates baggage across service calls while maintaining the appropriate encoding for each domain.
{{% /tab %}}
<!-- gRPC -->
{{% tab "gRPC" %}}
In the gRPC API calls, trace context is passed through `grpc-trace-bin` header.
**Baggage Support**
Dapr supports [W3C Baggage](https://www.w3.org/TR/baggage/) for propagating key-value pairs alongside trace context through two distinct mechanisms:
1. **Context Baggage (OpenTelemetry)**
- Follows OpenTelemetry conventions with decoded values
- Used when propagating baggage through gRPC context
- Values are stored in their original, unencoded form
- Example of how it would be printed with OpenTelemetry APIs:
```
baggage: userId=cassie,serverNode=DF 28,isVIP=true
```
2. **gRPC Metadata Baggage**
- You must URL encode special characters (for example, `%20` for spaces, `%2F` for slashes) when setting metadata baggage
- Values remain percent-encoded in gRPC metadata
- Example (note the URL-encoded space `%20`):
```
baggage: userId=cassie,serverNode=DF%2028,isVIP=true
```
For security purposes, context baggage and metadata baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties in each domain.
Multiple baggage metadata entries are supported and will be combined according to the W3C specification. Dapr automatically propagates baggage across service calls while maintaining the appropriate encoding for each domain.
{{% /tab %}}
{{< /tabpane >}}

View File

@ -0,0 +1,49 @@
---
type: docs
title: "Longhaul performance and stability"
linkTitle: "Longhaul performance and stability"
weight: 10000
description: ""
---
This article provides longhaul performance and stability benchmarks for Dapr on Kubernetes.
The longhaul tests are designed to run for a period of a week, validating the stability of Dapr and its components, while measuring resource utilization and performance over time.
## Public Dashboard
You can access the live longhaul test results on the public Grafana dashboard. This dashboard is updated in near real-time, showing the latest results from the longhaul tests.
[Dapr Longhaul Dashboard](https://dapr.grafana.net/public-dashboards/86d748b233804e74a16d8243b4b64e18).
## System overview
The longhaul environment runs on a 3-node managed Azure Kubernetes Service (AKS) cluster, using Standard_D2s_v5 nodes with 2 cores and 8 GB of RAM and accelerated networking.
## Test Applications
- Feed generator
- Hashtag Actor
- Hashtag counter
- Message Analyzer
- Pubsub Workflow
- Streaming Pubsub Publisher / Producer
- Streaming Pubsub Subscriber / Consumer
- Snapshot App
- Validation Worker App
- Scheduler Jobs App
- Workflow Gen App
- Scheduler Actor Reminders - Client
- Scheduler Actor Reminders - Server
- Scheduler Workflow App
## Redeployments
The longhaul test environment is redeployed every 7 days (Fridays at 08:00 UTC).
## Test Infrastructure
The test infrastructure is sourced from this [GitHub repository](https://github.com/dapr/test-infra).
It is a mixture of Bicep IaC templates and Helm charts to deploy the test applications and Dapr.

View File

@ -232,7 +232,7 @@ kubectl rollout restart -n <DAPR_NAMESPACE> deployment/dapr-sentry
kubectl rollout restart deploy/dapr-operator -n <DAPR_NAMESPACE>
kubectl rollout restart statefulsets/dapr-placement-server -n <DAPR_NAMESPACE>
kubectl rollout restart deploy/dapr-sidecar-injector -n <DAPR_NAMESPACE>
kubectl rollout restart deploy/dapr-scheduler-server -n <DAPR_NAMESPACE>
kubectl rollout restart statefulsets/dapr-scheduler-server -n <DAPR_NAMESPACE>
```
4. Restart your Dapr applications to pick up the latest trust bundle.

View File

@ -21,4 +21,5 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{% ref actors_api.md %}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{% ref actors_api.md %}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{% ref components-concept.md %}}) | v1.13 |
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{% ref "subscription-methods.md#declarative-subscriptions" %}}) | v1.14 |
| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{% ref "scheduler.md#actor-reminders" %}}) | v1.14 |
| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{% ref "scheduler.md#actor-reminders" %}}) | v1.14 |
| **Workflows Clustered Deployment** | Enables workflows to function when workflow clients communicate with multiple daprd instances of the same app ID that are behind a load balancer. Only relevant when using [Dapr Shared]({{% ref "kubernetes-dapr-shared" %}}). See the example Configuration after this table. | `WorkflowsClusteredDeployment`| [Dapr Shared]({{% ref "kubernetes-dapr-shared" %}}) | v1.16 |
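Preview features are enabled by listing them under `features` in a Dapr Configuration resource. As a sketch, the Configuration name and the chosen flag below are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig   # illustrative name
spec:
  features:
    - name: WorkflowsClusteredDeployment   # any preview feature flag from the table above
      enabled: true
```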

View File

@ -45,12 +45,16 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
| May 5th 2025 | 1.15.5</br> | 1.15.0 | Java 1.14.1 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.5 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.5) |
| April 4th 2025 | 1.15.4</br> | 1.15.0 | Java 1.14.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.4) |
| March 5rd 2025 | 1.15.3</br> | 1.15.0 | Java 1.14.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.3) |
| March 3rd 2025 | 1.15.2</br> | 1.15.0 | Java 1.14.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.2) |
| February 28th 2025 | 1.15.1</br> | 1.15.0 | Java 1.14.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.1) |
| February 27th 2025 | 1.15.0</br> | 1.15.0 | Java 1.14.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported | [v1.15.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.0) |
| July 31st 2025 | 1.15.9</br> | 1.15.0 | Java 1.14.2, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.9 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.9) |
| July 18th 2025 | 1.15.8</br> | 1.15.0 | Java 1.14.2, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.8 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.8) |
| July 16th 2025 | 1.15.7</br> | 1.15.0 | Java 1.14.1, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.7 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.7) |
| June 20th 2025 | 1.15.6</br> | 1.15.0 | Java 1.14.1, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.6 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.6) |
| May 5th 2025 | 1.15.5</br> | 1.15.0 | Java 1.14.1, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.5 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.5) |
| April 4th 2025 | 1.15.4</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.4) |
| March 5th 2025 | 1.15.3</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.4 </br>JS 3.5.2 </br>Rust 0.16.1 | 0.15.0 | Supported (current) | [v1.15.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.3) |
| March 3rd 2025 | 1.15.2</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.2) |
| February 28th 2025 | 1.15.1</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.1) |
| February 27th 2025 | 1.15.0</br> | 1.15.0 | Java 1.14.0, 1.15.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported | [v1.15.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.0) |
| September 16th 2024 | 1.14.4</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
| September 13th 2024 | 1.14.3</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
| September 6th 2024 | 1.14.2</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |

View File

@ -305,7 +305,7 @@ docker: Error response from daemon: Ports are not available: exposing port TCP 0
To resolve this error, open a command prompt in an elevated terminal and run:
```bash
nat stop winnat
net stop winnat
dapr init
net start winnat
```
```

View File

@ -40,6 +40,9 @@ The metadata API returns information related to Dapr's connection to the app. Th
### Scheduler connection details
Information related to the connection to one or more scheduler hosts.
### Workflow API runtime details
Information related to the Workflow API runtime, such as the number of connected workflow workers.
### Attributes
The metadata API allows you to store additional attribute information in the format of key-value pairs. These are ephemeral in-memory and are not persisted if a sidecar is reloaded. This information should be added at the time of a sidecar creation (for example, after the application has started).
@ -86,6 +89,7 @@ httpEndpoints | [Metadata API Response HttpEndpoint](#metadataapirespon
subscriptions | [Metadata API Response Subscription](#metadataapiresponsesubscription)[] | A json encoded array of pub/sub subscriptions metadata.
appConnectionProperties| [Metadata API Response AppConnectionProperties](#metadataapiresponseappconnectionproperties) | A json encoded object of app connection properties.
scheduler | [Metadata API Response Scheduler](#metadataapiresponsescheduler) | A json encoded object of scheduler connection properties.
workflows | [Metadata API Response Workflows](#metadataapiresponseworkflows) | A json encoded object of workflow runtime properties.
<a id="metadataapiresponseactor"></a>**Metadata API Response Registered Actor**
@ -152,6 +156,11 @@ Name | Type | Description
---- | ---- | -----------
connected_addresses | string[] | List of strings representing the addresses of the connected scheduler hosts.
<a id="metadataapiresponseworkflows"></a>**Metadata API Response Workflows**
Name | Type | Description
---- | ---- | -----------
connectedWorkers | integer | Number of connected workflow workers.
### Examples
@ -232,6 +241,9 @@ curl http://localhost:3500/v1.0/metadata
"10.244.0.48:50006",
"10.244.0.49:50006"
]
},
"workflows": {
"connectedWorkers": 1
}
}
```
@ -362,6 +374,9 @@ Get the metadata information to confirm your custom attribute was added:
"10.244.0.48:50006",
"10.244.0.49:50006"
]
},
"workflows": {
"connectedWorkers": 1
}
}
```

View File

@ -134,6 +134,18 @@ In case you are invoking `mathService` on a different namespace, you can use the
In this URL, `testing` is the namespace that `mathService` is running in.
### Headers in Service Invocation Requests
When Dapr invokes a service, it automatically adds the following headers to the request:
| Header | Description | Example |
|--------|-------------|---------|
| `dapr-caller-app-id` | The ID of the calling application | `myapp` |
| `dapr-caller-namespace` | The namespace of the calling application | `production` |
| `dapr-callee-app-id` | The ID of the called application | `mathService` |
These headers are available in both HTTP and gRPC service invocation requests.
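For example, a receiving service can read these headers to identify its caller. The following is a minimal sketch of an HTTP handler; the route and port are assumptions:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/add", func(w http.ResponseWriter, r *http.Request) {
		// Headers automatically added by Dapr during service invocation
		callerAppID := r.Header.Get("dapr-caller-app-id")
		callerNamespace := r.Header.Get("dapr-caller-namespace")
		calleeAppID := r.Header.Get("dapr-callee-app-id")

		fmt.Fprintf(w, "called by %s in namespace %s (callee: %s)\n",
			callerAppID, callerNamespace, calleeAppID)
	})

	log.Fatal(http.ListenAndServe(":6000", nil))
}
```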
#### Non-Dapr Endpoint Example
If the `mathService` service was a non-Dapr application, then it could be invoked using service invocation via an `HTTPEndpoint`, as well as a Fully Qualified Domain Name (FQDN) URL.

View File

@ -544,7 +544,7 @@ curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
## Configuring state store for actors
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. [View which services currently implement the transactional state store interface]({{% ref "supported-state-stores.md" %}}).
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. [View which services currently implement the transactional state store interface]({{% ref "supported-state-stores.md" %}}). If your state store is backed by a distributed database, you must make sure that it provides strong consistency.
Specify which state store to be used for actors with a `true` value for the property `actorStateStore` in the metadata section of the `statestore.yaml` component file.
For example, the following components yaml will configure Redis to be used as the state store for Actors.

View File

@ -46,6 +46,7 @@ dapr init [flags]
| `--container-runtime` | | `docker` | Used to pass in a different container runtime other than Docker. Supported container runtimes are: `docker`, `podman` |
| `--dev` | | | Creates Redis and Zipkin deployments when run in Kubernetes. |
| `--scheduler-volume` | | | Self-hosted only. Optionally, you can specify a volume for the scheduler service data directory. By default, without this flag, scheduler data is not persisted and not resilient to restarts. |
| `--scheduler-override-broadcast-host-port` | | localhost:50006 (6060 for Windows) | Self-hosted only. Specify the scheduler broadcast host and port, for example: 192.168.42.42:50006. |
### Examples
@ -70,7 +71,7 @@ Dapr can also run [Slim self-hosted mode]({{% ref self-hosted-no-docker.md %}}),
dapr init -s
```
> To switch to Dapr Github container registry as the default registry, set the `DAPR_DEFAULT_IMAGE_REGISTRY` environment variable value to be `GHCR`. To switch back to Docker Hub as default registry, unset this environment variable.
> To switch to Dapr Github container registry as the default registry, set the `DAPR_DEFAULT_IMAGE_REGISTRY` environment variable value to be `GHCR`. To switch back to Docker Hub as default registry, unset this environment variable.
**Specify a runtime version**
@ -148,7 +149,7 @@ dapr init --network mynet
Verify all containers are running in the specified network.
```bash
docker ps
docker ps
```
Uninstall Dapr from that Docker network.
@ -157,6 +158,18 @@ Uninstall Dapr from that Docker network.
dapr uninstall --all --network mynet
```
**Specify scheduler broadcast host and port**
You can specify the scheduler broadcast host and port, for example: 192.168.42.42:50006.
This is necessary when clients must connect to the Scheduler using a different host and port, as the Scheduler only accepts connections that match its configured broadcast host and port.
By default, the Scheduler uses `localhost:50006` (`localhost:6060` on Windows).
```bash
dapr init --scheduler-override-broadcast-host-port 192.168.42.42:50006
```
{{% /tab %}}
{{% tab "Kubernetes" %}}
@ -192,11 +205,11 @@ dapr init -k --set global.tag=1.0.0 --set dapr_operator.logLevel=error
You can also specify a private registry to pull container images from. As of now `dapr init -k` does not use specific images for sentry, operator, placement, scheduler, and sidecar. It relies on only Dapr runtime container image `dapr` for all these images.
Scenario 1 : dapr image hosted directly under root folder in private registry -
Scenario 1 : dapr image hosted directly under root folder in private registry -
```bash
dapr init -k --image-registry docker.io/username
```
Scenario 2 : dapr image hosted under a new/different directory in private registry -
Scenario 2 : dapr image hosted under a new/different directory in private registry -
```bash
dapr init -k --image-registry docker.io/username/<directory-name>
```

View File

@ -33,6 +33,8 @@ spec:
# value: <integer>
# - name: publicAccessLevel
# value: <publicAccessLevel>
# - name: disableEntityManagement
# value: <bool>
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{% ref component-secrets.md %}}).
@ -45,10 +47,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `accountName` | Y | Input/Output | The name of the Azure Storage account | `"myexmapleaccount"` |
| `accountKey` | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Microsoft Entra ID authentication. | `"access-key"` |
| `containerName` | Y | Output | The name of the Blob Storage container to write to | `myexamplecontainer` |
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10000"`
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10000"` |
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). Defaults to `false` | `true`, `false` |
| `getBlobRetryCount` | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader Defaults to `10` | `1`, `2`
| `publicAccessLevel` | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to `none` | `blob`, `container`, `none`
| `getBlobRetryCount` | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader. Defaults to `10` | `1`, `2` |
| `publicAccessLevel` | N | Output | Specifies whether data in the container may be accessed publicly and the level of access (only used if the container is created by Dapr). Defaults to `none` | `blob`, `container`, `none` |
| `disableEntityManagement` | N | Output | Configuration to disable entity management. When set to `true`, the binding skips the attempt to create the specified storage container. This is useful when operating with minimal Azure AD permissions. Defaults to `false` | `true`, `false` |
### Microsoft Entra ID authentication
@ -86,14 +89,14 @@ To perform a create blob operation, invoke the Azure Blob Storage binding with a
##### Save text to a random generated UUID blob
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -106,14 +109,14 @@ To perform a create blob operation, invoke the Azure Blob Storage binding with a
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"blobName\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "blobName": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -150,13 +153,13 @@ Then you can upload it as you would normally:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"blobName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "blobName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -199,13 +202,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -247,13 +250,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -266,13 +269,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"only\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "only" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -285,13 +288,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "include" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

View File

@ -0,0 +1,95 @@
---
type: docs
title: "Apache Dubbo binding spec"
linkTitle: "Dubbo"
description: "Detailed documentation on the Apache Dubbo binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/dubbo/"
---
## Component format
To set up an Apache Dubbo binding, create a component of type `bindings.dubbo`.
See [this guide]({{% ref "howto-bindings.md#1-create-a-binding" %}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.dubbo
version: v1
metadata:
- name: interfaceName
value: "com.example.UserService"
- name: methodName
value: "getUser"
# Optional
- name: version
value: "1.0.0"
- name: group
value: "mygroup"
- name: providerHostname
value: "localhost"
- name: providerPort
value: "8080"
````
{{% alert title="Note" color="info" %}}
The Dubbo binding does not require authentication or secret configuration by default.
However, if your Dubbo deployment requires secure communication, you can integrate Dapr's [secret store]({{% ref component-secrets.md %}}) for sensitive values.
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
| ------------------ | :------: | --------------- | ----------------------------------------- | --------------------------- |
| `interfaceName` | Y | Output | The Dubbo interface name to invoke. | `"com.example.UserService"` |
| `methodName` | Y | Output | The method name to call on the interface. | `"getUser"` |
| `version` | N | Output | Version of the Dubbo service. | `"1.0.0"` |
| `group` | N | Output | Group name for the Dubbo service. | `"mygroup"` |
| `providerHostname` | N | Output | Hostname of the Dubbo provider. | `"localhost"` |
| `providerPort` | N | Output | Port of the Dubbo provider. | `"8080"` |
---
## Binding support
This component supports **output binding** with the following operation:
* `create`: invokes a Dubbo service method.
---
## Example: Invoke a Dubbo Service
To invoke a Dubbo service using the binding:
```json
{
"operation": "create",
"metadata": {
"interfaceName": "com.example.UserService",
"methodName": "getUser",
"version": "1.0.0",
"providerHostname": "localhost",
"providerPort": "8080"
},
"data": {
"userId": "12345"
}
}
```
The `data` field contains the request payload sent to the Dubbo service method.
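To send this request through the Dapr bindings API, POST it to your sidecar; the binding name `mydubbo` and the Dapr HTTP port `3500` below are placeholders:

```bash
# Invoke the Dubbo binding via the Dapr bindings API (binding name and port are placeholders)
curl -X POST http://localhost:3500/v1.0/bindings/mydubbo \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "interfaceName": "com.example.UserService",
          "methodName": "getUser"
        },
        "data": { "userId": "12345" }
      }'
```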
---
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- [Bindings building block]({{% ref bindings %}})
- [How-To: Trigger application with input binding]({{% ref howto-triggers.md %}})
- [How-To: Use bindings to interface with external resources]({{% ref howto-bindings.md %}})
- [Bindings API reference]({{% ref bindings_api.md %}})

View File

@ -47,6 +47,8 @@ spec:
value: "<bool>"
- name: encodeBase64
value: "<bool>"
- name: contentType
value: "<string>"
```
{{% alert title="Warning" color="warning" %}}
@ -70,6 +72,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `client_x509_cert_url` | N | Output | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com`|
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| `encodeBase64` | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| `contentType` | N | Output | The MIME type to set for objects created in the bucket. If not specified, GCP attempts to auto-detect the content type. | `"text/csv"`, `"application/json"`, `"image/png"` |
## GCP Credentials
Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
Also, see how to [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc).
## GCP Credentials
@ -105,19 +113,20 @@ To perform a create operation, invoke the GCP Storage Bucket binding with a `POS
The metadata parameters are:
- `key` - (optional) the name of the object
- `decodeBase64` - (optional) configuration to decode base64 file content before saving to storage
- `contentType` - (optional) the MIME type of the object being created
#### Examples
##### Save text to a random generated UUID file
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism)
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -130,14 +139,14 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -146,6 +155,25 @@ The metadata parameters are:
{{< /tabpane >}}
##### Save a CSV file with correct content type
{{< tabpane text=true >}}
{{% tab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"$(cat data.csv | base64)\", \"metadata\": { \"key\": \"data.csv\", \"contentType\": \"text/csv\", \"decodeBase64\": \"true\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
```bash
curl -d '{ "operation": "create", "data": "'"$(base64 < data.csv)"'", "metadata": { "key": "data.csv", "contentType": "text/csv", "decodeBase64": "true" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{< /tabpane >}}
##### Upload a file
@ -155,20 +183,21 @@ Then you can upload it as you would normally:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\", \"contentType\": \"image/jpeg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg)", "metadata": { "key": "my-test-file.jpg" } }' \
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg | base64)", "metadata": { "key": "my-test-file.jpg", "contentType": "image/jpeg", "decodeBase64": "true" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{< /tabpane >}}
#### Response
The response body will contain the following JSON:
@ -202,13 +231,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -312,13 +341,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -401,13 +430,15 @@ To perform a copy object operation, invoke the GCP bucket binding with a `POST`
{
"operation": "copy",
"metadata": {
"destinationBucket": "destination-bucket-name",
"key": "source-file.txt",
"destinationBucket": "destination-bucket-name"
}
}
```
The metadata parameters are:
- `key` - the name of the source object (required)
- `destinationBucket` - the name of the destination bucket (required)
### Move objects
@ -418,13 +449,15 @@ To perform a move object operation, invoke the GCP bucket binding with a `POST`
{
"operation": "move",
"metadata": {
"destinationBucket": "destination-bucket-name",
"key": "source-file.txt",
"destinationBucket": "destination-bucket-name"
}
}
```
The metadata parameters are:
- `key` - the name of the source object (required)
- `destinationBucket` - the name of the destination bucket (required)
### Rename objects
@ -435,13 +468,15 @@ To perform a rename object operation, invoke the GCP bucket binding with a `POST
{
"operation": "rename",
"metadata": {
"newName": "object-new-name",
"key": "old-name.txt",
"newName": "new-name.txt"
}
}
```
The metadata parameters are:
- `key` - the current name of the object (required)
- `newName` - the new name of the object (required)
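For illustration, a rename request can be sent to the binding endpoint in the same way as the other operations in this document; the copy and move operations are invoked identically with their respective request bodies:
```bash
curl -d '{ "operation": "rename", "metadata": { "key": "old-name.txt", "newName": "new-name.txt" } }' \
     http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```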
## Related links

View File

@ -73,14 +73,14 @@ To perform a create operation, invoke the Huawei OBS binding with a `POST` metho
##### Save text to a random generated UUID file
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
On Windows, use the command prompt (cmd); PowerShell has a different escaping mechanism
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -93,14 +93,14 @@ To perform a create operation, invoke the Huawei OBS binding with a `POST` metho
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -135,14 +135,14 @@ To upload a binary file (for example, _.jpg_, _.zip_), invoke the Huawei OBS bin
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"upload\", \"data\": { \"sourceFile\": \".\my-test-file.jpg\" }, \"metadata\": { \"key\": \"my-test-file.jpg\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "upload", "data": { "sourceFile": "./my-test-file.jpg" }, "metadata": { "key": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -176,13 +176,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -220,13 +220,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -267,13 +267,13 @@ The data parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"list\", \"data\": { \"maxResults\": 5, \"prefix\": \"dapr-\", \"marker\": \"obstest\", \"delimiter\": \"jpg\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "list", "data": { "maxResults": 5, "prefix": "dapr-", "marker": "obstest", "delimiter": "jpg" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

View File

@ -38,6 +38,8 @@ spec:
value: "*****************"
- name: direction
value: "input, output"
- name: endpoint
value: "http://localhost:4566" # Optional: Custom endpoint (e.g. for LocalStack)
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{% ref component-secrets.md %}}).
@ -55,6 +57,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `secretKey` | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| `sessionToken` | N | Output | The AWS session token to use | `"sessionToken"` |
| `direction` | N | Input/Output | The direction of the binding | `"input"`, `"output"`, `"input, output"` |
| `endpoint` | N | Input | Custom endpoint for Kinesis and DynamoDB (for example to enable AWS LocalStack support) | `"http://localhost:4566"` |
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.

View File

@ -59,14 +59,14 @@ To perform a create file operation, invoke the Local Storage binding with a `POS
##### Save text to a random generated UUID file
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
On Windows, use the command prompt (cmd); PowerShell has a different escaping mechanism
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -79,14 +79,14 @@ To perform a create file operation, invoke the Local Storage binding with a `POS
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"fileName\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "fileName": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -102,13 +102,13 @@ To upload a file, encode it as Base64. The binding should automatically detect t
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -145,13 +145,13 @@ To perform a get file operation, invoke the Local Storage binding with a `POST`
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -189,13 +189,13 @@ If you only want to list the files beneath a particular directory below the `roo
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -225,13 +225,13 @@ To perform a delete file operation, invoke the Local Storage binding with a `POS
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

View File

@ -103,7 +103,7 @@ Read more about the importance and usage of these parameters in the [Azure OpenA
#### Examples
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "data": {"deploymentId: "my-model" , "prompt": "A dog is ", "maxTokens":15}, "operation": "completion" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -176,7 +176,7 @@ Each message is of the form:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{
"data": {

View File

@ -0,0 +1,125 @@
---
type: docs
title: "Apache RocketMQ binding spec"
linkTitle: "RocketMQ"
description: "Detailed documentation on the Apache RocketMQ binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/rocketmq/"
---
## Component format
To set up an Apache RocketMQ binding, create a component of type `bindings.rocketmq`.
See [this guide]({{% ref "howto-bindings.md#1-create-a-binding" %}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.rocketmq
version: v1
metadata:
- name: accessProto
value: "tcp"
- name: nameServer
value: "localhost:9876"
- name: endpoint
value: "http://localhost:8080"
- name: topics
value: "topic1,topic2"
- name: consumerGroup
value: "my-consumer-group"
# Optional
- name: consumerBatchSize
value: "10"
- name: consumerThreadNums
value: "4"
- name: retries
value: "3"
- name: instanceId
value: "my-instance"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here]({{% ref component-secrets.md %}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
| -------------------- | :------: | --------------- | --------------------------------------------------------------------- | ------------------------------ |
| `topics` | Y | Input/Output | Comma-separated list of topics for publishing or subscribing. | `"topic1,topic2"` |
| `nameServer` | N | Input/Output | Address of the RocketMQ name server. | `"localhost:9876"` |
| `endpoint` | N | Input/Output | RocketMQ endpoint (for `http` protocol). | `"http://localhost:8080"` |
| `accessProto` | N | Input/Output | SDK protocol for connecting to RocketMQ. | `"tcp"`, `"tcp-cgo"`, `"http"` |
| `consumerGroup` | N | Input/Output | Consumer group name for RocketMQ subscribers. | `"my-consumer-group"` |
| `consumerBatchSize` | N | Input | Batch size for consuming messages. | `"10"` |
| `consumerThreadNums` | N | Input | Number of consumer threads (for `tcp-cgo` protocol). | `"4"` |
| `instanceId` | N | Input/Output | RocketMQ namespace instance ID. | `"my-instance"` |
| `nameServerDomain` | N | Input/Output | Domain name for the RocketMQ name server. | `"rocketmq.example.com"` |
| `retries` | N | Input/Output | Number of retry attempts to connect to the RocketMQ broker. | `"3"` |
| `accessKey` | N | Input/Output | Access key for authentication. Required if access control is enabled. | `"access-key"` |
| `secretKey` | N | Input/Output | Secret key for authentication. Required if access control is enabled. | `"secret-key"` |
> **Note**: `accessKey` and `secretKey` can be stored in a Dapr secret store instead of the YAML file for improved security.
### Authentication Using Access Keys
To use access key authentication, include the following metadata fields in the configuration:
```yaml
- name: accessKey
secretKeyRef:
name: rocketmq-secrets
key: accessKey
- name: secretKey
secretKeyRef:
name: rocketmq-secrets
key: secretKey
```
This allows secure retrieval of credentials from a secret store.
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`: publishes a new message
- `read`: consumes messages from RocketMQ topics
## Set topic per-request
You can override the topic in component metadata on a per-request basis:
```json
{
"operation": "create",
"metadata": {
"topics": "dynamicTopic"
},
"data": "This is a test message for RocketMQ!"
}
```
## Retry behavior
Use the `retries` metadata field to specify how many times Dapr should attempt to connect to RocketMQ before failing:
```yaml
- name: retries
value: "5"
```
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- [Bindings building block]({{% ref bindings %}})
- [How-To: Trigger application with input binding]({{% ref howto-triggers.md %}})
- [How-To: Use bindings to interface with external resources]({{% ref howto-bindings.md %}})
- [Bindings API reference]({{% ref bindings_api.md %}})

View File

@ -190,14 +190,14 @@ Valid values for `presignTTL` are [Go duration strings](https://pkg.go.dev/maze.
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"presignTTL\": \"15m\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "presignTTL": "15m" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -222,14 +222,14 @@ The response body contains the following example JSON:
##### Save text to a random generated UUID file
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
On Windows, use the command prompt (cmd); PowerShell has a different escaping mechanism
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -242,14 +242,14 @@ The response body contains the following example JSON:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -293,13 +293,13 @@ Then you can upload it as you would normally:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "key": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -313,13 +313,13 @@ To upload a file from a supplied path (relative or absolute), use the `filepath`
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"create\", \"metadata\": { \"filePath\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "metadata": { "filePath": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -346,14 +346,14 @@ Valid values for `presignTTL` are [Go duration strings](https://pkg.go.dev/maze.
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"presign\", \"metadata\": { \"presignTTL\": \"15m\", \"key\": \"my-test-file.txt\" } }" \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -393,13 +393,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -437,13 +437,13 @@ The metadata parameters are:
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

View File

@ -81,13 +81,13 @@ To perform a create file operation, invoke the SFTP binding with a `POST` method
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -124,13 +124,13 @@ To perform a get file operation, invoke the SFTP binding with a `POST` method an
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -168,13 +168,13 @@ If you only want to list the files beneath a particular directory below the `roo
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -204,13 +204,13 @@ To perform a delete file operation, invoke the SFTP binding with a `POST` method
{{< tabpane text=true >}}
{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}
{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

View File

@ -26,6 +26,8 @@ spec:
value: "items"
- name: region
value: "us-west-2"
- name: endpoint
value: "sqs.us-west-2.amazonaws.com"
- name: accessKey
value: "*****************"
- name: secretKey
@ -45,11 +47,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| `queueName` | Y | Input/Output | The SQS queue name | `"myqueue"` |
| `region` | Y | Input/Output | The specific AWS region | `"us-east-1"` |
| `accessKey` | Y | Input/Output | The AWS Access Key to access this resource | `"key"` |
| `secretKey` | Y | Input/Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| `sessionToken` | N | Input/Output | The AWS session token to use | `"sessionToken"` |
| `direction` | N | Input/Output | The direction of the binding | `"input"`, `"output"`, `"input, output"` |
| `region` | Y | Input/Output | The specific AWS region | `"us-east-1"` |
| `endpoint` | N | Output | The specific AWS endpoint | `"sqs.us-east-1.amazonaws.com"` |
| `accessKey` | Y | Input/Output | The AWS Access Key to access this resource | `"key"` |
| `secretKey` | Y | Input/Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| `sessionToken` | N | Input/Output | The AWS session token to use | `"sessionToken"` |
| `direction` | N | Input/Output | The direction of the binding | `"input"`, `"output"`, `"input, output"` |
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.

View File

@ -34,6 +34,26 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `model` | N | The Ollama LLM to use. Defaults to `llama3.2:latest`. | `phi4:latest` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
### OpenAI Compatibility
Ollama is compatible with [OpenAI's API](https://ollama.com/blog/openai-compatibility). You can use the OpenAI component with Ollama models with the following changes:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ollama-openai
spec:
type: conversation.openai # use the openai component type
metadata:
- name: key
value: 'ollama' # just any non-empty string
- name: model
value: gpt-oss:20b # an Ollama model (https://ollama.com/search); in this case OpenAI's open-source gpt-oss model, see https://ollama.com/library/gpt-oss
- name: endpoint
value: 'http://localhost:11434/v1' # ollama endpoint
```
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -25,6 +25,10 @@ spec:
value: 'https://api.openai.com/v1'
- name: cacheTTL
value: 10m
# - name: apiType # Optional
# value: 'azure'
# - name: apiVersion # Optional
# value: '2025-01-01-preview'
```
{{% alert title="Warning" color="warning" %}}
@ -37,9 +41,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for OpenAI. | `mykey` |
| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` |
| `endpoint` | N | Custom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. | `https://api.openai.com/v1` |
| `endpoint` | N | Custom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. Required when `apiType` is set to `azure`. | `https://api.openai.com/v1`, `https://example.openai.azure.com/` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
| `apiType` | N | Specifies the API provider type. Required when using a provider that does not follow the default OpenAI API endpoint conventions. | `azure` |
| `apiVersion`| N | The API version to use. Required when the `apiType` is set to `azure`. | `2025-04-01-preview` |
## Related links
- [Conversation API overview]({{% ref conversation-overview.md %}})
- [Conversation API overview]({{% ref conversation-overview.md %}})
- [Azure OpenAI in Azure AI Foundry Models API lifecycle](https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle)

View File

@ -75,33 +75,34 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| redisHost | Y | Connection-string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{% ref "#setup-redis" %}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `"mymaster"`
| redeliverInterval | N | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`
| redisDB | N | Database selected after connecting to redis. If `"redisType"` is `"cluster"` this option is ignored. Defaults to `"0"`. | `"0"`
| redisMaxRetries | N | Alias for `maxRetries`. If both values are set `maxRetries` is ignored. | `"5"`
| redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
| redisMaxRetryInterval | N | Alias for `maxRetryBackoff`. If both values are set `maxRetryBackoff` is ignored. | `"5s"`
| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
| readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"`
| writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout. | `"3s"`
| poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"`
| poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"`
| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| Field | Required | Details | Example |
|-----------------------|:--------:|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------|
| redisHost | Y | Connection string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379` |
| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"` |
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"` |
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{% ref "#setup-redis" %}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` |
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10` |
| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000` |
| failover | N | Enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"` |
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `"mymaster"` |
| sentinelPassword | N | Password for Redis Sentinel. No default. Applicable only when `failover` is true and Redis Sentinel has authentication enabled | `""`, `"KeFg23!"` |
| redeliverInterval | N | The interval between checking for pending messages for redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"` |
| processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"` |
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"` |
| redisDB | N | Database selected after connecting to redis. If `"redisType"` is `"cluster"` this option is ignored. Defaults to `"0"`. | `"0"` |
| redisMaxRetries | N | Alias for `maxRetries`. If both values are set `maxRetries` is ignored. | `"5"` |
| redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"` |
| redisMaxRetryInterval | N | Alias for `maxRetryBackoff`. If both values are set `maxRetryBackoff` is ignored. | `"5s"` |
| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"` |
| readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"` |
| writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout. | `"3s"` |
| poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"` |
| poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"` |
| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"` |
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"` |
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"` |
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"` |
## Setup Redis
@ -182,6 +183,19 @@ You can use [Helm](https://helm.sh/) to quickly create a Redis instance in our K
{{< /tabpane >}}
## Redis Sentinel behavior
Use `redisType: "node"` when connecting to Redis Sentinel. Additionally, set `failover` to `"true"` and `sentinelMasterName` to the name of the master node.
Failover characteristics:
- Lock loss during failover: Locks may be lost during master failover if they weren't replicated to the promoted replica before the original master failed
- Failover window: Brief server unavailability (typically seconds) during automatic master promotion
- Consistency: All operations route to the current master, maintaining lock consistency
{{% alert title="Warning" color="warning" %}}
Consider the trade-off of running Redis with high-availability and failover with the potential of lock loss during failover events. Your application should tolerate brief lock loss during failover scenarios.
{{% /alert %}}
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- [Distributed lock building block]({{% ref distributed-lock-api-overview %}})

View File

@ -36,6 +36,8 @@ spec:
value: "authorization"
- name: forceHTTPS
value: "false"
- name: pathFilter
value: ".*/users/.*"
```
{{% alert title="Warning" color="warning" %}}
@ -54,6 +56,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redirectURL | The URL of your web application that the authorization server should redirect to once the user has authenticated | `"https://myapp.com"`
| authHeaderName | The authorization header name to forward to your application | `"authorization"`
| forceHTTPS | If true, enforces the use of TLS/SSL | `"true"`,`"false"` |
| pathFilter | Applies the middleware only to requests matching the given path pattern | `".*/users/.*"`
## Dapr configuration
@ -71,6 +74,67 @@ spec:
type: middleware.http.oauth2
```
## Request path filtering
The `pathFilter` field allows you to selectively apply OAuth2 authentication based on the HTTP request path using a regex pattern. This enables scenarios such as configuring multiple OAuth2 middlewares with different scopes for different API endpoints and applying the principle of least privilege, ensuring callers receive only the minimum permissions necessary for their intended operation.
### Example: Separate read-only and admin user access
In the following configuration:
- Requests to `/api/users/*` endpoints receive tokens with read-only user scopes
- Requests to `/api/admin/*` endpoints receive tokens with full admin scopes
This reduces security risk by preventing unnecessarily broad access privileges and limiting the blast radius of compromised tokens.
```yaml
# User with read-only access scope
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2-users
spec:
type: middleware.http.oauth2
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "user:read profile:read"
- name: authURL
value: "https://accounts.google.com/o/oauth2/v2/auth"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: redirectURL
value: "http://myapp.com/callback"
- name: pathFilter
value: "^/api/users/.*"
---
# User with full admin access scope
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2-admin
spec:
type: middleware.http.oauth2
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "admin:read admin:write user:read user:write"
- name: authURL
value: "https://accounts.google.com/o/oauth2/v2/auth"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: redirectURL
value: "http://myapp.com/callback"
- name: pathFilter
value: "^/api/admin/.*"
```
## Related links
- [Configure API authorization with OAuth]({{% ref oauth %}})

View File

@ -30,6 +30,8 @@ spec:
value: "https://accounts.google.com/o/oauth2/token"
- name: headerName
value: "authorization"
- name: pathFilter
value: ".*/users/.*"
```
{{% alert title="Warning" color="warning" %}}
@ -47,6 +49,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| headerName | The authorization header name to forward to your application | `"authorization"`
| endpointParamsQuery | Specifies additional parameters for requests to the token endpoint | `true`
| authStyle | Optionally specifies how the endpoint wants the client ID & client secret sent. See the table of possible values below | `0`
| pathFilter | Applies the middleware only to requests matching the given path pattern | `".*/users/.*"`
### Possible values for `authStyle`
@ -72,6 +75,63 @@ spec:
type: middleware.http.oauth2clientcredentials
```
## Request path filtering
The `pathFilter` field allows you to selectively apply OAuth2 authentication based on the HTTP request path using a regex pattern. This enables scenarios such as configuring multiple OAuth2 middlewares with different scopes for different API endpoints and applying the principle of least privilege, ensuring callers receive only the minimum permissions necessary for their intended operation.
### Example: Separate read-only and admin user access
In the following configuration:
- Requests to `/api/users/*` endpoints receive tokens with read-only user scopes
- Requests to `/api/admin/*` endpoints receive tokens with full admin scopes
This reduces security risk by preventing unnecessarily broad access privileges and limiting the blast radius of compromised tokens.
```yaml
# User with read-only access scope
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2clientcredentials-users
spec:
type: middleware.http.oauth2clientcredentials
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "user:read profile:read"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: headerName
value: "authorization"
- name: pathFilter
value: "^/api/users/.*"
---
# User with full admin access scope
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: oauth2clientcredentials-admin
spec:
type: middleware.http.oauth2clientcredentials
version: v1
metadata:
- name: clientId
value: "<your client ID>"
- name: clientSecret
value: "<your client secret>"
- name: scopes
value: "admin:read admin:write user:read user:write"
- name: tokenURL
value: "https://accounts.google.com/o/oauth2/token"
- name: headerName
value: "authorization"
- name: pathFilter
value: "^/api/admin/.*"
```
## Related links
- [Middleware]({{% ref middleware.md %}})
- [Configuration concept]({{% ref configuration-concept.md %}})

View File

@ -39,7 +39,7 @@ spec:
metadata:
- name: url
value: "file://router.wasm"
- guestConfig
- name: guestConfig
value: {"environment":"production"}
```

View File

@ -0,0 +1,157 @@
---
type: docs
title: "AWS Cloudmap"
linkTitle: "AWS Cloudmap"
description: Detailed information on the AWS Cloudmap name resolution component
---
This component uses [AWS Cloud Map](https://aws.amazon.com/cloud-map/) for service discovery in Dapr. It supports both HTTP and DNS namespaces, allowing services to discover and connect to other services using AWS Cloud Map's service discovery capabilities.
## Configuration format
Name resolution is configured via the [Dapr Configuration]({{< ref configuration-overview.md >}}).
Within the configuration YAML, set the `spec.nameResolution.component` property to `"aws.cloudmap"`, then pass configuration options in the `spec.nameResolution.configuration` dictionary.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "aws.cloudmap"
version: "v1"
configuration:
# Required: AWS CloudMap namespace configuration (one of these is required)
namespaceName: "my-namespace" # The name of your CloudMap namespace
# namespaceId: "ns-xxxxxx" # Alternative: Use namespace ID instead of name
# Optional: AWS authentication (choose one authentication method)
# Option 1: Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# Option 2: IAM roles for Amazon EKS
# Option 3: Explicit credentials (not recommended for production)
accessKey: "****"
secretKey: "****"
sessionToken: "****" # Optional
# Optional: AWS region and endpoint configuration
region: "us-west-2"
endpoint: "http://localhost:4566" # Optional: Custom endpoint for testing
# Optional: Dapr configuration
defaultDaprPort: 50002 # Default port for Dapr sidecar if not specified in instance attributes
```
## Specification
### AWS authentication
The component supports multiple authentication methods:
1. Environment Variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_SESSION_TOKEN (optional)
2. IAM Roles:
- When running on AWS (EKS, EC2, etc.), the component can use IAM roles
3. Explicit Credentials:
- Provided in the component metadata (not recommended for production)
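For example, when relying on environment variables (option 1), the credentials might be exported before starting the Dapr sidecar; the values below are placeholders:
```bash
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_SESSION_TOKEN="<your-session-token>" # optional, only for temporary credentials
```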
### Required permissions
The AWS credentials must have the following permissions:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"servicediscovery:DiscoverInstances",
"servicediscovery:GetNamespace",
"servicediscovery:ListNamespaces"
],
"Resource": "*"
}
]
}
```
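As one possible way to grant these permissions (a sketch; the policy name, file name, role name, and account ID are placeholders), the policy document above can be created and attached to the IAM role used by your workload with the AWS CLI:
```bash
# Create the policy from the JSON document above
aws iam create-policy \
    --policy-name dapr-cloudmap-discovery \
    --policy-document file://cloudmap-policy.json

# Attach it to the role assumed by your application and Dapr sidecar
aws iam attach-role-policy \
    --role-name <your-workload-role> \
    --policy-arn arn:aws:iam::<account-id>:policy/dapr-cloudmap-discovery
```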
### Spec configuration fields
| Field | Required | Type | Default | Description |
|-----------------|-------------------------------------|--------|---------|-------------|
| namespaceName | One of namespaceName or namespaceId | string | "" | The name of your AWS CloudMap namespace |
| namespaceId | One of namespaceName or namespaceId | string | "" | The ID of your AWS CloudMap namespace |
| region | N | string | "" | AWS region. If not provided, will be determined from environment or instance metadata |
| endpoint | N | string | "" | Custom endpoint for AWS CloudMap API. Useful for testing with LocalStack |
| defaultDaprPort | N | number | 3500 | Default port for Dapr sidecar if not specified in instance attributes |
### Service registration
To use this name resolver, your services must be registered in AWS CloudMap. When registering instances, ensure they have the following attributes:
1. Required: One of these address attributes:
- `AWS_INSTANCE_IPV4`: IPv4 address of the instance
- `AWS_INSTANCE_IPV6`: IPv6 address of the instance
- `AWS_INSTANCE_CNAME`: Hostname of the instance
2. Optional: Dapr sidecar port attribute:
- `DAPR_PORT`: The port that the Dapr sidecar is listening on
- If not specified, the component will use the `defaultDaprPort` from configuration (defaults to 3500)
The resolver only returns healthy instances (those with `HEALTHY` status) to ensure reliable service communication.
Example instance attributes:
```json
{
"AWS_INSTANCE_IPV4": "10.0.0.1",
"DAPR_PORT": "50002"
}
```
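For illustration, an instance with these attributes could be registered using the AWS CLI (a sketch; the service ID and instance ID are placeholders):
```bash
aws servicediscovery register-instance \
    --service-id srv-xxxxxxxxxxxxxxxx \
    --instance-id my-app-instance-1 \
    --attributes AWS_INSTANCE_IPV4=10.0.0.1,DAPR_PORT=50002
```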
## Example Usage
Name resolution is configured via the [Dapr Configuration]({{< ref configuration-overview.md >}}). Here are some examples of its usage.
### Minimal Configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "aws.cloudmap"
configuration:
namespaceName: "mynamespace.dev"
defaultDaprPort: 50002
```
### Local Development with LocalStack
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "aws.cloudmap"
configuration:
namespaceName: "my-namespace"
region: "us-east-1"
endpoint: "http://localhost:4566"
accessKey: "test"
secretKey: "test"
```
## Related links
- [Service invocation building block]({{< ref service-invocation >}})
- [AWS Cloudmap documentation](https://aws.amazon.com/cloud-map/)

View File

@ -0,0 +1,49 @@
---
type: docs
title: "Nameformat"
linkTitle: "NameFormat"
description: Detailed information on the NameFormat name resolution component
---
The Name Format name resolver provides a flexible way to resolve service names using a configurable format string with placeholders. This is useful in scenarios where you want to map service names to predictable DNS names following a specific pattern.
Consider using this name resolver if there is no specific name resolver available for your service registry, but your service registry can expose services via internal DNS names using predictable naming conventions.
## Configuration Format
Name resolution is configured via the [Dapr Configuration]({{< ref configuration-overview.md >}}).
Within the configuration YAML, set the `spec.nameResolution.component` property to `"nameformat"`, then pass configuration options in the `spec.nameResolution.configuration` dictionary.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
nameResolution:
component: "nameformat"
configuration:
format: "service-{appid}.default.svc.cluster.local" # Replace with your desired format pattern
```
## Spec configuration fields
| Field | Required | Details | Example |
|---------|----------|---------|---------|
| format | Y | The format string to use for name resolution. Must contain the `{appid}` placeholder which is replaced with the actual service name. | `"service-{appid}.default.svc.cluster.local"` |
## Examples
When configured with `format: "service-{appid}.default.svc.cluster.local"`, the resolver transforms service names as follows:
- Service ID "myapp" → "service-myapp.default.svc.cluster.local"
- Service ID "frontend" → "service-frontend.default.svc.cluster.local"
## Notes
- Empty service IDs are not allowed and result in an error.
- The format string must be provided in the configuration.
- The format string must contain at least one `{appid}` placeholder.

View File

@ -129,6 +129,7 @@ spec:
| sessionTimeout | N | The timeout used to detect client failures when using Kafkas group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` |
| consumerGroupRebalanceStrategy | N | The strategy to use for consumer group rebalancing. Supported values: `range`, `sticky`, `roundrobin`. Default is `range` | `"sticky"` |
| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
| excludeHeaderMetaRegex | N | A regular expression to exclude keys from being converted from headers to metadata when consuming messages, and from metadata to headers when publishing messages. This capability avoids unwanted downstream side effects for topic consumers. | `"^valueSchemaType$"` |
The `secretKeyRef` above is referencing a [kubernetes secrets store]({{% ref kubernetes-secret-store.md %}}) to access the tls information. Visit [here]({{% ref setup-secret-store.md %}}) to learn more about how to configure a secret store component.
@ -677,11 +678,36 @@ def my_topic_subscriber(event_data=Body()):
app.include_router(router)
```
{{% /tab %}}
{{< /tabpane >}}
### Avoiding downstream side effects when publishing messages requiring custom metadata
Dapr allows customizing the publishing behavior by setting custom publish metadata.
For instance, publishing in Avro format requires setting the `valueSchemaType=Avro` metadata item.
By default, these metadata items are converted to Kafka headers and published along with the message. This default behavior is helpful, for example, to forward tracing headers across a chain of publishers and consumers.
In certain scenarios, however, it has unwanted side effects.
Assume you consume an Avro message with Dapr that carries the headers above. If this message cannot be consumed successfully and is configured to be sent to a dead letter topic, `valueSchemaType=Avro` is automatically carried forward when publishing to the dead letter topic, requiring a schema to be set up for that topic as well. In many scenarios it is preferable to publish dead letter messages as plain JSON, since complying with a predetermined schema is not possible.
To avoid this behavior, the Kafka pubsub component can be configured to exclude certain metadata keys from being converted to and from headers.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-exclude-metadata
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: authType # Required.
value: "none"
- name: excludeMetaHeaderRegex
value: "^valueSchemaType$" # Optional. Excludes `valueSchemaType` header from being published to headers and converted to metadata
```
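For illustration, per-request publish metadata such as `valueSchemaType` is passed as a query parameter; with the configuration above it is excluded from the Kafka headers of the published message (a sketch, assuming the default Dapr HTTP port of 3500; the topic and payload are placeholders):
```bash
# Publish with Avro schema metadata; excludeMetaHeaderRegex above prevents
# valueSchemaType from being propagated as a Kafka header
curl -X POST "http://localhost:3500/v1.0/publish/kafka-pubsub-exclude-metadata/mytopic?metadata.valueSchemaType=Avro" \
     -H "Content-Type: application/json" \
     -d '{"order_number": 1}'
```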
### Overriding default consumer group rebalancing
In Kafka, rebalancing strategies determine how partitions are assigned to consumers within a consumer group. The default strategy is "range", but "roundrobin" and "sticky" are also available.
- `Range`:

View File

@ -95,6 +95,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec
| subscribeMode | N | Subscription mode indicates the cursor persistence, durable subscription retains messages and persists the current position. Default: `"durable"` | `"durable"`, `"non_durable"` |
| partitionKey | N | Sets the key of the message for routing policy. Default: `""` | |
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `100` | `10`
| replicateSubscriptionState | N | Enable replication of subscription state across geo-replicated Pulsar clusters. Default: `"false"` | `"true"`, `"false"` |
### Authenticate using Token

View File

@ -66,6 +66,8 @@ spec:
value: {podName}
- name: heartBeat
value: 10s
- name: publishMessagePropertiesToMetadata
value: "true"
```
{{% alert title="Warning" color="warning" %}}
@ -102,7 +104,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| clientName | N | This RabbitMQ [client-provided connection name](https://www.rabbitmq.com/connections.html#client-provided-names) is a custom identifier. If set, the identifier is mentioned in RabbitMQ server log entries and management UI. Can be set to {uuid}, {podName}, or {appID}, which is replaced by Dapr runtime to the real value. | `"app1"`, `{uuid}`, `{podName}`, `{appID}`
| heartBeat | N | Defines the heartbeat interval with the server, detecting the aliveness of the peer TCP connection with the RabbitMQ server. Defaults to `10s` . | `"10s"`
| `publishMessagePropertiesToMetadata` | N | Whether to publish AMQP message properties (headers, message ID, etc.) to the metadata. | `"true"`, `"false"`
## Communication using TLS
@ -475,6 +477,11 @@ spec:
singleActiveConsumer: "true"
```
## Publishing message properties to metadata
To enable publishing of [message properties](https://www.rabbitmq.com/docs/publishers#message-properties) in the metadata, set the `publishMessagePropertiesToMetadata` field to `"true"` in the component spec.
This includes properties such as the message ID, timestamp, and headers in the metadata of the published message.
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})

View File

@ -297,12 +297,7 @@ In Kubernetes, you store the client secret or the certificate into the Kubernete
```bash
kubectl apply -f azurekeyvault.yaml
```
1. Create and assign a managed identity at the pod-level via either:
- [Microsoft Entra ID workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) (preferred method)
- [Microsoft Entra ID pod identity](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity)
**Important**: While both Microsoft Entra ID pod identity and workload identity are in preview, currently Microsoft Entra ID Workload Identity is planned for general availability (stable state).
1. Create and assign a managed identity at the pod-level via [Microsoft Entra ID workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview)
1. After creating a workload identity, give it `read` permissions:
- [On your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabpane=azure-cli#assign-the-access-policy)
@ -321,7 +316,7 @@ In Kubernetes, you store the client secret or the certificate into the Kubernete
#### Using Azure managed identity directly vs. via Microsoft Entra ID workload identity
When using **managed identity directly**, you can have multiple identities associated with an app, requiring `azureClientId` to specify which identity should be used.
When using **managed identity directly**, you can have multiple identities associated with an app, requiring `azureClientId` to specify which identity should be used.
However, when using **managed identity via Microsoft Entra ID workload identity**, `azureClientId` is not necessary and has no effect. The Azure identity to be used is inferred from the service account tied to an Azure identity via the Azure federated identity.

View File

@ -22,7 +22,7 @@ spec:
metadata:
- name: region
value: "[huaweicloud_region]"
- name: accessKey
- name: accessKey
value: "[huaweicloud_access_key]"
- name: secretAccessKey
value: "[huaweicloud_secret_access_key]"

View File

@ -0,0 +1,66 @@
---
type: docs
title: "Tencent Cloud Secrets Manager (SSM)"
linkTitle: "Tencent Cloud Secrets Manager (SSM)"
description: Detailed information on the Tencent Cloud Secrets Manager (SSM) - secret store component
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/tencentcloud-ssm/"
---
## Component format
To set up the Tencent Cloud Secrets Manager (SSM) secret store, create a component of type `secretstores.tencentcloud.ssm`.
See [this guide]({{% ref "setup-secret-store.md#apply-the-configuration" %}}) on how to create and apply a secretstore configuration.
See this guide on [referencing secrets]({{% ref component-secrets.md %}}) to retrieve and use the secret with Dapr components.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: tencentcloudssm
spec:
type: secretstores.tencentcloud.ssm
version: v1
metadata:
- name: region
value: "[tencentcloud_region]"
- name: secretId
value: "[tencentcloud_secret_id]"
- name: secretKey
value: "[tencentcloud_secret_key]"
- name: token
value: "[tencentcloud_secret_token]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings.
It is recommended to use a local secret store such as [Kubernetes secret store]({{% ref kubernetes-secret-store.md %}}) or a [local file]({{% ref file-secret-store.md %}}) to bootstrap secure key storage.
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
| --------------- | :------: | ---------------------------------------------------------------- | ------------------- |
| region | Y | The specific region the Tencent SSM instance is deployed in | `"ap-beijing-3"` |
| secretId | Y | The SecretId of the Tencent Cloud account | `"xyz"` |
| secretKey | Y | The SecretKey of the Tencent Cloud account | `"xyz"` |
| token | N | The Token of the Tencent Cloud account. This is required only if using temporary credentials | `""` |
## Optional per-request metadata properties
The following [optional query parameters]({{% ref "secrets_api#query-parameters" %}}) can be provided when retrieving secrets from this secret store:
Query Parameter | Description
--------- | -----------
`metadata.version_id` | Version for the given secret key.
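For example (a sketch assuming a Dapr sidecar on the default HTTP port 3500, the component name `tencentcloudssm` from above, and a hypothetical secret named `db-password`):

```bash
# Retrieve a specific version of a secret through the Dapr secrets API.
# The port, secret name, and version number are illustrative placeholders.
curl "http://localhost:3500/v1.0/secrets/tencentcloudssm/db-password?metadata.version_id=2"
```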
## Setup Tencent Cloud Secrets Manager (SSM)
Set up Tencent Cloud Secrets Manager (SSM) by following the Tencent Cloud documentation: https://www.tencentcloud.com/products/ssm
## Related links
- [Secrets building block]({{% ref secrets %}})
- [How-To: Retrieve a secret]({{% ref "howto-secrets.md" %}})
- [How-To: Reference secrets in Dapr components]({{% ref component-secrets.md %}})
- [Secrets API reference]({{% ref secrets_api.md %}})

View File

@ -0,0 +1,78 @@
---
type: docs
title: "Alibaba Cloud TableStore"
linkTitle: "Alibaba Cloud TableStore"
description: "Detailed information on the Alibaba Cloud TableStore state store component for use with Dapr"
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-alicloud-tablestore/"
---
## Component format
To set up an Alibaba Cloud TableStore state store, create a component of type `state.alicloud.tablestore`.
See [this guide]({{% ref "howto-get-save-state.md#step-1-setup-a-state-store" %}}) on how to create and apply a state store configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.alicloud.tablestore
version: v1
metadata:
- name: endpoint
value: <REPLACE-WITH-ENDPOINT>
- name: instanceName
value: <REPLACE-WITH-INSTANCE-NAME>
- name: tableName
value: <REPLACE-WITH-TABLE-NAME>
- name: accessKeyID
value: <REPLACE-WITH-ACCESS-KEY-ID>
- name: accessKey
value: <REPLACE-WITH-ACCESS-KEY>
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings.
It is recommended to use a secret store for the secrets as described [here]({{% ref component-secrets.md %}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
| -------------- | :------: | --------------------------------------------------------------------------------- | ----------------------------------- |
| `endpoint` | Y | The endpoint of the Alibaba Cloud TableStore instance | `"https://tablestore.aliyuncs.com"` |
| `instanceName` | Y | The name of the Alibaba Cloud TableStore instance | `"my_instance"` |
| `tableName` | Y | The name of the table to use for Dapr state. Will be created if it does not exist | `"my_table"` |
| `accessKeyID` | Y | The access key ID for authentication | `"my_access_key_id"` |
| `accessKey` | Y | The access key for authentication | `"my_access_key"` |
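Once the component is applied, a quick way to exercise it is through the Dapr state API. The sketch below assumes a sidecar listening on the default HTTP port 3500 and that the component above was named `tablestore` (a stand-in for `<NAME>`):

```bash
# Save a key/value pair through the Dapr state API, then read it back.
curl -X POST "http://localhost:3500/v1.0/state/tablestore" \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order_1", "value": { "status": "created" } }]'

curl "http://localhost:3500/v1.0/state/tablestore/order_1"
```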
---
## Authentication
Alibaba Cloud TableStore supports authentication using an **Access Key** and **Access Key ID**.
You can also use Dapr's [secret store]({{% ref component-secrets.md %}}) to securely store these values instead of including them directly in the YAML file.
Example using secret references:
```yaml
- name: accessKeyID
secretKeyRef:
name: alicloud-secrets
key: accessKeyID
- name: accessKey
secretKeyRef:
name: alicloud-secrets
key: accessKey
```
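Putting the pieces together, a full component using those references might look like the sketch below; the component name, endpoint values, the `alicloud-secrets` secret, and the `auth.secretStore` setting (assuming the secrets live in the Kubernetes secret store) are assumptions for illustration, not part of the example above.

```bash
# Hypothetical complete component definition resolving credentials from a
# secret store; adjust the names, endpoint, and referenced secret store.
kubectl apply -f - <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: alicloud-tablestore
spec:
  type: state.alicloud.tablestore
  version: v1
  metadata:
  - name: endpoint
    value: "https://tablestore.aliyuncs.com"
  - name: instanceName
    value: "my_instance"
  - name: tableName
    value: "my_table"
  - name: accessKeyID
    secretKeyRef:
      name: alicloud-secrets
      key: accessKeyID
  - name: accessKey
    secretKeyRef:
      name: alicloud-secrets
      key: accessKey
auth:
  secretStore: kubernetes
EOF
```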
---
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- Read [this guide]({{% ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" %}}) for instructions on configuring state store components
- [State management building block]({{% ref state-management %}})

View File

@ -96,6 +96,10 @@ For example, if installing using the example above, the Cassandra DNS would be:
## Apache Ignite
[Apache Ignite](https://ignite.apache.org/)'s integration with Cassandra as a caching layer is not supported by this component.
## Related links
- [Basic schema for a Dapr component]({{% ref component-schema %}})
- Read [this guide]({{% ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" %}}) for instructions on configuring state store components

View File

@ -29,4 +29,5 @@ The following table lists the environment variables used by the Dapr runtime, CL
| DAPR_COMPONENTS_SOCKETS_EXTENSION | .NET and Java pluggable component SDKs | A per-SDK configuration that indicates the default file extension applied to socket files created by the SDKs. Not a Dapr-enforced behavior. |
| DAPR_PLACEMENT_METADATA_ENABLED | Dapr placement | Enable an endpoint for the Placement service that exposes placement table information on actor usage. Set to `true` to enable in self-hosted mode. [Learn more about the Placement API]({{% ref placement_api.md %}}) |
| DAPR_HOST_IP | Dapr sidecar | The host's chosen IP address. If not specified, will loop over the network interfaces and select the first non-loopback address it finds.|
| DAPR_HEALTH_TIMEOUT | SDKs | Sets how long to wait for the sidecar to become available. Overrides the default timeout setting of 60 seconds. |
| DAPR_UNSAFE_SKIP_CONTAINER_UID_GID_CHECK | Dapr control plane & sidecar | Disables the check that ensures Dapr containers are not running as root on Kubernetes (Linux). Set to `true` to disable the check. This is not recommended for production environments. |
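For instance, an SDK's wait-for-sidecar timeout can be raised before launching an app; the app ID, port, and command below are hypothetical.

```bash
# Allow SDK clients to wait up to 120 seconds (instead of the 60-second
# default) for the sidecar to report as available.
export DAPR_HEALTH_TIMEOUT=120
# Hypothetical run command; substitute your own app ID, port, and entrypoint.
dapr run --app-id myapp --app-port 8080 -- python3 app.py
```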

View File

@ -150,7 +150,7 @@
features:
input: false
output: true
- component: Twilio
- component: Twilio SMS
link: twilio
state: Alpha
version: v1
@ -158,7 +158,7 @@
features:
input: false
output: true
- component: SendGrid
- component: Twilio SendGrid
link: sendgrid
state: Alpha
version: v1
@ -190,3 +190,19 @@
features:
input: false
output: true
- component: Apache Dubbo
link: dubbo
state: Alpha
version: v1
since: "1.7"
features:
input: false
output: true
- component: RocketMQ
link: rocketmq
state: Alpha
version: v1
since: "1.2"
features:
input: true
output: true

View File

@ -33,3 +33,8 @@
state: Alpha
version: v1
since: "1.16"
- component: Local echo
link: local-echo
state: Stable
version: v1
since: "1.15"

View File

@ -0,0 +1,5 @@
- component: CloudMap
link: nr-awscloudmap
state: Alpha
version: v1
since: "1.16"

View File

@ -8,3 +8,8 @@
state: Alpha
version: v1
since: "1.13"
- component: NameFormat
link: nr-nameformat
state: Alpha
version: v1
since: "1.16"

View File

@ -0,0 +1,5 @@
- component: HuaweiCloud Cloud Secret Management Service (CSMS)
link: huaweicloud-csms
state: Alpha
version: v1
since: "1.8"

View File

@ -0,0 +1,5 @@
- component: Tencent Cloud Secrets Manager (SSM)
link: tencentcloud-ssm
state: Alpha
version: v1
since: "1.9"

View File

@ -0,0 +1,8 @@
- component: AliCloud TableStore
link: setup-alicloud-tablestore
state: Alpha
version: v1
since: "1.3"
features:
crud: true
etag: true

View File

@ -8,4 +8,4 @@
transactions: true
etag: true
ttl: true
query: false
workflow: false

View File

@ -8,7 +8,7 @@
transactions: false
etag: true
ttl: false
query: false
workflow: false
- component: Azure Cosmos DB
link: setup-azure-cosmosdb
state: Stable
@ -19,7 +19,7 @@
transactions: true
etag: true
ttl: true
query: true
workflow: false
- component: Microsoft SQL Server
link: setup-sqlserver
state: Stable
@ -30,7 +30,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: false
- component: Azure Table Storage
link: setup-azure-tablestorage
state: Stable
@ -41,4 +41,4 @@
transactions: false
etag: true
ttl: false
query: false
workflow: false

View File

@ -8,7 +8,7 @@
transactions: false
etag: true
ttl: false
query: false
workflow: false
- component: Apache Cassandra
link: setup-cassandra
state: Stable
@ -19,7 +19,7 @@
transactions: false
etag: false
ttl: true
query: false
workflow: false
- component: CockroachDB
link: setup-cockroachdb
state: Stable
@ -30,7 +30,7 @@
transactions: true
etag: true
ttl: true
query: true
workflow: false
- component: Couchbase
link: setup-couchbase
state: Alpha
@ -41,7 +41,7 @@
transactions: false
etag: true
ttl: false
query: false
workflow: false
- component: etcd
link: setup-etcd
state: Beta
@ -52,7 +52,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: false
- component: Hashicorp Consul
link: setup-consul
state: Alpha
@ -63,7 +63,7 @@
transactions: false
etag: false
ttl: false
query: false
workflow: false
- component: Hazelcast
link: setup-hazelcast
state: Alpha
@ -74,7 +74,7 @@
transactions: false
etag: false
ttl: false
query: false
workflow: false
- component: In-memory
link: setup-inmemory
state: Stable
@ -85,7 +85,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: true
- component: JetStream KV
link: setup-jetstream-kv
state: Alpha
@ -96,7 +96,7 @@
transactions: false
etag: false
ttl: false
query: false
workflow: false
- component: Memcached
link: setup-memcached
state: Stable
@ -107,7 +107,7 @@
transactions: false
etag: false
ttl: true
query: false
workflow: false
- component: MongoDB
link: setup-mongodb
state: Stable
@ -118,7 +118,7 @@
transactions: true
etag: true
ttl: true
query: true
workflow: true
- component: MySQL & MariaDB
link: setup-mysql
state: Stable
@ -129,7 +129,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: true
- component: Oracle Database
link: setup-oracledatabase
state: Beta
@ -140,7 +140,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: false
- component: PostgreSQL v1
link: setup-postgresql-v1
state: Stable
@ -151,7 +151,7 @@
transactions: true
etag: true
ttl: true
query: true
workflow: true
- component: PostgreSQL v2
link: setup-postgresql-v2
state: Stable
@ -162,7 +162,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: true
- component: Redis
link: setup-redis
state: Stable
@ -173,7 +173,7 @@
transactions: true
etag: true
ttl: true
query: true
workflow: true
- component: RethinkDB
link: setup-rethinkdb
state: Beta
@ -184,7 +184,7 @@
transactions: false
etag: false
ttl: false
query: false
workflow: false
- component: SQLite
link: setup-sqlite
state: Stable
@ -195,7 +195,7 @@
transactions: true
etag: true
ttl: true
query: false
workflow: false
- component: Zookeeper
link: setup-zookeeper
state: Alpha
@ -206,4 +206,4 @@
transactions: false
etag: true
ttl: false
query: false
workflow: false

View File

@ -9,6 +9,7 @@
etag: true
ttl: true
query: false
workflow: false
- component: Coherence
link: setup-coherence
state: Alpha
@ -30,4 +31,4 @@
transactions: false
etag: true
ttl: true
query: false
workflow: false

View File

@ -1,9 +1,11 @@
{{- $groups := dict
" Generic" $.Site.Data.components.secret_stores.generic
"Generic" $.Site.Data.components.secret_stores.generic
"Microsoft Azure" $.Site.Data.components.secret_stores.azure
"Alibaba Cloud" $.Site.Data.components.secret_stores.alibaba
"Google Cloud Platform (GCP)" $.Site.Data.components.secret_stores.gcp
"Amazon Web Services (AWS)" $.Site.Data.components.secret_stores.aws
"Tencent Cloud" $.Site.Data.components.secret_stores.tencentcloud
"HuaweiCloud Cloud" $.Site.Data.components.secret_stores.huaweicloud
}}
<style>

View File

@ -5,6 +5,7 @@
"Amazon Web Services (AWS)" $.Site.Data.components.state_stores.aws
"Cloudflare" $.Site.Data.components.state_stores.cloudflare
"Oracle Cloud" $.Site.Data.components.state_stores.oracle
"Alibaba Cloud" $.Site.Data.components.state_stores.alicloud
}}
{{ range $group, $components := $groups }}
@ -17,7 +18,7 @@
<th>ETag</th>
<th>TTL</th>
<th>Actors</th>
<th>Query</th>
<th>Workflow</th>
<th>Status</th>
<th>Component version</th>
<th>Since runtime version</th>
@ -29,44 +30,45 @@
</td>
<td align="center">
{{ if .features.crud }}
<span role="img" aria-label="CRUD: Supported"></span>
<span role="img" aria-label="CRUD: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="CRUD: Not supported" aria-label="CRUD: Not supported" />
<img src="/images/emptybox.png" alt="CRUD: Not supported" aria-label="CRUD: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.transactions }}
<span role="img" aria-label="Transactions: Supported"></span>
<span role="img" aria-label="Transactions: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="Transactions: Not supported" aria-label="Transactions: Not supported" />
<img src="/images/emptybox.png" alt="Transactions: Not supported"
aria-label="Transactions: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.etag }}
<span role="img" aria-label="ETag: Supported"></span>
<span role="img" aria-label="ETag: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="ETag: Not supported" aria-label="ETag: Not supported" />
<img src="/images/emptybox.png" alt="ETag: Not supported" aria-label="ETag: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.ttl }}
<span role="img" aria-label="TTL: Supported"></span>
<span role="img" aria-label="TTL: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="TTL: Not supported" aria-label="TTL: Not supported" />
<img src="/images/emptybox.png" alt="TTL: Not supported" aria-label="TTL: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if (and .features.transactions .features.etag) }}
<span role="img" aria-label="Actors: Supported"></span>
<span role="img" aria-label="Actors: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="Actors: Not supported" aria-label="Actors: Not supported" />
<img src="/images/emptybox.png" alt="Actors: Not supported" aria-label="Actors: Not supported" />
{{ end }}
</td>
<td align="center">
{{ if .features.query }}
<span role="img" aria-label="Query: Supported"></span>
{{ if .features.workflow }}
<span role="img" aria-label="Workflow: Supported"></span>
{{else}}
<img src="/images/emptybox.png" alt="Query: Not supported" aria-label="Query: Not supported" />
<img src="/images/emptybox.png" alt="Workflow: Not supported" aria-label="Workflow: Not supported" />
{{ end }}
</td>
<td>{{ .state }}</td>
@ -77,4 +79,4 @@
</table>
{{ end }}
{{ partial "components/componenttoc.html" . }}
{{ partial "components/componenttoc.html" . }}

Some files were not shown because too many files have changed in this diff.