unstage some commits and review
Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>
@ -16,8 +16,8 @@ The following branches are currently maintained:

| Branch | Website | Description |
| ------------------------------------------------------------ | -------------------------- | ------------------------------------------------------------------------------------------------ |
| [v1.15](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here. |
| [v1.16](https://github.com/dapr/docs/tree/v1.16) (pre-release) | https://v1-16.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.16+ go here. |

For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/docs-contrib/contributing-docs/#branch-guidance) document.
@ -22,7 +22,7 @@ Dapr provides the following building blocks:

|----------------|----------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
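For a concrete sense of the first row, invoking a method on another app through the sidecar is a plain HTTP call; the app ID `checkout` and method path below are placeholders for your own services:

```bash
curl http://localhost:3500/v1.0/invoke/checkout/method/checkout/100
```

The sidecar resolves the `checkout` app ID via service discovery and forwards the request, applying tracing and error handling along the way.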
@ -78,13 +78,6 @@ Pub/sub broker components are message brokers that can pass messages to/from ser

- [List of pub/sub brokers]({{< ref supported-pubsub >}})
- [Pub/sub broker implementations](https://github.com/dapr/components-contrib/tree/master/pubsub)

### Workflows

A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.

<!--- [List of supported workflows]()
- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->

### State stores

State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.
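As an illustration, a minimal Redis-backed state store component definition looks like the following; the component name and host are placeholders for your environment:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
```

Swapping the backing store is a matter of changing `spec.type` and its metadata; the state API calls in your application stay the same.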
@ -5,28 +5,136 @@ linkTitle: "Scheduler"

description: "Overview of the Dapr scheduler service"
---

The Dapr Scheduler service is used to schedule different types of jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}):

- Jobs created through the Jobs API
- Actor reminder jobs (used by the actor reminders)
- Actor reminder jobs created by the Workflow API (which uses actor reminders)

From Dapr v1.15, the Scheduler service is used by default to schedule actor reminders as well as actor reminders for the Workflow API.

There is no concept of a leader Scheduler instance. All Scheduler service replicas are considered peers. All replicas receive jobs to be scheduled for execution, and the jobs are allocated between the available Scheduler service replicas to load balance the trigger events.

The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded etcd database.

<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
## Actor Reminders

Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.

When you deploy Dapr v1.15, any _existing_ actor reminders are automatically migrated from the actor state store to the Scheduler service as a one-time operation for each actor type. Each replica only migrates the reminders whose actor type and ID are associated with that host. This means that all the reminders associated with a given actor type are migrated only once every replica implementing that type has been upgraded to v1.15. There is _no_ loss of reminder triggers during the migration. However, you can prevent this migration and keep the existing actor reminders running using the actor state store by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
To confirm that the migration was successful, check the Dapr sidecar logs for the following:

```sh
Running actor reminder migration from state store to scheduler
```

coupled with

```sh
Migrated X reminders from state store to scheduler successfully
```

or

```sh
Skipping migration, no missing scheduler reminders found
```
## Job Locality

### Default Job Behavior

By default, when the Scheduler service triggers jobs, they are sent back to a single replica for the same app ID that scheduled the job in a randomly load balanced manner. This provides basic load balancing across your application's replicas, which is suitable for most use cases where strict locality isn't required.

### Using Actor Reminders for Perfect Locality

For users who require perfect job locality (having jobs triggered on the exact same host that created them), actor reminders provide a solution. To enforce perfect locality for a job:

1. Create an actor type with a random UUID that is unique to the specific replica
2. Use this actor type to create an actor reminder

This approach ensures that the job will always be triggered on the same host which created it, rather than being randomly distributed among replicas.
## Job Triggering

### Job Failure Policy and Staging Queue

When the Scheduler service triggers a job and the job fails with a client-side error, it is retried by default at a 1 second interval, with a maximum of 3 retries.

For errors that are not client-side, for example, when a job cannot be sent to an available Dapr sidecar at trigger time, the job is placed in a staging queue within the Scheduler service. Jobs remain in this queue until a suitable sidecar instance becomes available, at which point they are automatically sent to the appropriate Dapr sidecar instance.
## Self-hosted mode

The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).

The Scheduler can be run in both high availability (HA) and non-HA modes in self-hosted deployments. However, non-HA mode is not recommended for production use. If switching between non-HA and HA modes, the existing data directory must be removed, which results in loss of jobs and actor reminders. [Run a back-up]({{< ref "#back-up-and-restore-scheduler-data" >}}) before making this change to avoid losing data.
## Kubernetes mode

The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. Scheduler always runs in high availability (HA) mode in Kubernetes deployments. Scaling the Scheduler service replicas up or down is not possible without incurring data loss due to the nature of the embedded data store. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})

When a Kubernetes namespace is deleted, all the jobs and actor reminders corresponding to that namespace are deleted.

## Back Up and Restore Scheduler Data

In production environments, it's recommended to perform periodic backups of the Scheduler data at an interval that aligns with your recovery point objectives.
### Port Forward for Backup Operations

To perform backup and restore operations, you'll need to access the embedded etcd instance. This requires port forwarding to expose the etcd ports (port 2379).

#### Kubernetes Example

Here's how to port forward and connect to the etcd instance:

```shell
kubectl port-forward svc/dapr-scheduler-server 2379:2379 -n dapr-system
```
#### Docker Compose Example

Here's how to expose the etcd ports in a Docker Compose configuration for standalone mode:

```yaml
scheduler-1:
  image: "diagrid/dapr/scheduler:dev110-linux-arm64"
  command: ["./scheduler",
    "--etcd-data-dir", "/var/run/dapr/scheduler",
    "--replica-count", "3",
    "--id", "scheduler-1",
    "--initial-cluster", "scheduler-1=http://scheduler-1:2380,scheduler-0=http://scheduler-0:2380,scheduler-2=http://scheduler-2:2380",
    "--etcd-client-ports", "scheduler-0=2379,scheduler-1=2379,scheduler-2=2379",
    "--etcd-client-http-ports", "scheduler-0=2330,scheduler-1=2330,scheduler-2=2330",
    "--log-level=debug"
  ]
  ports:
    - 2379:2379
  volumes:
    - ./dapr_scheduler/1:/var/run/dapr/scheduler
  networks:
    - network
```

When running in HA mode, you only need to expose the ports for one scheduler instance to perform backup operations.
### Performing Backup and Restore

Once you have access to the etcd ports, you can follow the [official etcd backup and restore documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) to perform backup and restore operations. The process involves using standard etcd commands to create snapshots and restore from them.
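For example, with the etcd client port forwarded as shown earlier, a snapshot can be taken and restored with standard `etcdctl` commands; the file name and restore data directory below are placeholders:

```shell
# Take a snapshot of the Scheduler's embedded etcd
ETCDCTL_API=3 etcdctl --endpoints=localhost:2379 snapshot save scheduler-backup.db

# Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status scheduler-backup.db

# Restore the snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore scheduler-backup.db --data-dir /var/run/dapr/scheduler-restored
```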
## Monitoring Scheduler's etcd Metrics

Port forward the Scheduler instance and view etcd's metrics with the following:

```shell
curl -s http://localhost:2379/metrics
```
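Since the output is in Prometheus exposition format, it can be filtered down to specific metric families, for example (these are standard etcd server metrics):

```shell
curl -s http://localhost:2379/metrics | grep -E '^etcd_server_(has_leader|leader_changes_seen_total)'
```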
Fine tune the embedded etcd to your needs by [reviewing and configuring the Scheduler's etcd flags as needed](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md#dapr-scheduler-options).

## Disabling the Scheduler service

If you are not using any features that require the Scheduler service (Jobs API, Actor Reminders, or Workflows), you can disable it by setting `global.scheduler.enabled=false`.

For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).

## Related links

[Learn more about the Jobs API.]({{< ref jobs_api.md >}})
@ -27,11 +27,11 @@ Creating a new actor follows a local call like `http://localhost:3500/v1.0/actor

The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently, the .NET, Java, Go, and Python SDKs have actor frameworks.

## Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?

To make using Dapr more natural for different languages, it includes [language specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the HTTP/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.

## What frameworks does Dapr integrate with?

Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.

Dapr is integrated with the following frameworks:
@ -46,7 +46,7 @@ Each of these building block APIs is independent, meaning that you can use any n

|----------------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at-least-once message delivery guarantee, message TTL, consumer groups and other advanced features.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows.
| [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure Cosmos DB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
| [**Resource bindings**]({{< ref "bindings-overview.md" >}}) | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| [**Actors**]({{< ref "actors-overview.md" >}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
@ -76,7 +76,7 @@ Dapr exposes its HTTP and gRPC APIs as a sidecar architecture, either as a conta

## Hosting environments

Dapr can be hosted in multiple environments, including:
- Self-hosted on a Windows/Linux/macOS machine for local development and in production
- On Kubernetes or clusters of physical or virtual machines in production

### Self-hosted local development
@ -57,7 +57,7 @@ This simplifies some choices, but also carries some consideration:

## Actor communication

You can interact with Dapr to invoke the actor method by calling the HTTP endpoint.

```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
```
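For instance, invoking a method on an actor instance looks like the following; the actor type `stormtrooper`, ID `50`, and method name are placeholders:

```bash
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/method/shoot
```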
@ -108,7 +108,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.

## Actor reminders

{{% alert title="Note" color="primary" %}}
In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}). When upgrading to Dapr v1.15, all existing reminders are automatically migrated to the Scheduler service as a one-time operation for each actor type, with no loss of reminders.
{{% /alert %}}

Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
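For example, a reminder can be registered through the actors HTTP API as below; the actor type, ID, reminder name, and timing values are illustrative:

```bash
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
  -H "Content-Type: application/json" \
  -d '{"dueTime": "0h0m9s0ms", "period": "0h0m3s0ms"}'
```

Here `dueTime` is the delay before the first trigger and `period` is the interval between subsequent triggers.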
@ -137,6 +137,10 @@ You can remove the actor reminder by calling

DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

If an actor reminder is triggered and the app does not return a 2xx status code to the runtime (for example, because of a connection issue), actor reminders are retried up to three times with a backoff interval of one second between each attempt. There may be additional retries attempted in accordance with any optionally applied [actor resiliency policy]({{< ref "override-default-retries.md" >}}).

Refer to the [API spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.

## Error handling
@ -10,18 +10,46 @@ description: "Overview of the conversation API building block"

The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
{{% /alert %}}

Using the Dapr conversation API, you can reduce the complexity of interacting with Large Language Models (LLMs) and enable critical performance and security functionality with features like prompt caching and personally identifiable information (PII) data obfuscation.

<img src="/images/conversation-overview.png" width=800 alt="Diagram showing the flow of a user's app communicating with Dapr's LLM components.">

In addition to enabling critical performance and security functionality (like [prompt caching]({{< ref "#prompt-caching" >}}) and [PII scrubbing]({{< ref "#personally-identifiable-information-pii-obfuscation" >}})), you can also pair the conversation API with Dapr functionalities, like:
- Resiliency circuit breakers and retries to circumvent limit and token errors, or
- Middleware to authenticate requests coming to and from the LLM

Dapr provides observability by issuing metrics for your LLM interactions.

## Features

The following features are available out-of-the-box for [all the supported conversation components]({{< ref supported-conversation >}}).
### Prompt caching

Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls. To significantly reduce latency and cost, Dapr stores frequent prompts in a local cache to be reused by your cluster, pod, or other instances, instead of reprocessing the information for every new request.

### Personally identifiable information (PII) obfuscation

The PII obfuscation feature identifies and removes any form of sensitive user information from a conversation response. Simply enable PII obfuscation on input and output data to protect your privacy and scrub sensitive details that could be used to identify an individual.
The PII scrubber obfuscates the following user information:
- Phone number
- Email address
- IP address
- Street address
- Credit cards
- Social Security number
- ISBN
- Media Access Control (MAC) address
- Secure Hash Algorithm 1 (SHA-1) hex
- SHA-256 hex
- MD5 hex
## Demo

Watch the demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how the conversation API works using the .NET SDK.

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/NTnwoDhHIcQ?si=37SDcOHtEpgCIwkG&start=5444" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

## Try out conversation
@ -31,7 +59,7 @@ Want to put the Dapr conversation API to the test? Walk through the following qu

| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Conversation quickstart](todo) | TODO |

### Start using the conversation API directly in your app
@ -40,4 +68,4 @@ Want to skip the quickstarts? Not a problem. You can try out the conversation bu

## Next steps

- [How-To: Converse with an LLM using the conversation API]({{< ref howto-conversation-layer.md >}})
- [Conversation API components]({{< ref supported-conversation >}})
@ -14,6 +14,7 @@ Let's get started using the [conversation API]({{< ref conversation-overview.md

- Set up one of the available Dapr components (echo) that work with the conversation API.
- Add the conversation client to your application.
- Run the connection using `dapr run`.

## Set up the conversation component
@ -33,8 +34,29 @@ spec:

  version: v1
```

### Use the OpenAI component

To interface with a real LLM, use one of the other [supported conversation components]({{< ref "supported-conversation" >}}), including OpenAI, Hugging Face, Anthropic, DeepSeek, and more.

For example, to swap out the `echo` mock component with an `OpenAI` component, replace the `conversation.yaml` file with the following. You'll need to copy your API key into the component file.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: openai
spec:
  type: conversation.openai
  metadata:
  - name: key
    value: <REPLACE_WITH_YOUR_KEY>
  - name: model
    value: gpt-4-turbo
```

## Connect the conversation client

The following examples use an HTTP client to send a POST request to Dapr's sidecar HTTP endpoint. You can also use [the Dapr SDK client instead]({{< ref "#related-links" >}}).

{{< tabs ".NET" "Go" "Rust" >}}
@ -42,8 +64,30 @@ spec:

<!-- .NET -->
{{% codetab %}}

```csharp
using Dapr.AI.Conversation;
using Dapr.AI.Conversation.Extensions;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDaprConversationClient();

var app = builder.Build();

var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();
var response = await conversationClient.ConverseAsync("conversation",
    new List<DaprConversationInput>
    {
        new DaprConversationInput(
            "Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
            DaprConversationRole.Generic)
    });

Console.WriteLine("Received the following from the LLM:");
foreach (var resp in response.Outputs)
{
    Console.WriteLine($"\t{resp.Result}");
}
```

{{% /codetab %}}
@ -68,12 +112,12 @@ func main() {

	}

	input := dapr.ConversationInput{
		Content: "Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
		// Role: "", // Optional
		// ScrubPII: false, // Optional
	}

	fmt.Printf("conversation input: %s\n", input.Content)

	var conversationComponent = "echo"
@ -110,14 +154,14 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {

    let mut client = DaprClient::connect(address).await?;

    let input = ConversationInputBuilder::new("Please write a witty haiku about the Dapr distributed programming framework at dapr.io").build();

    let conversation_component = "echo";

    let request =
        ConversationRequestBuilder::new(conversation_component, vec![input.clone()]).build();

    println!("conversation input: {:?}", input.content);

    let response = client.converse_alpha1(request).await?;
@ -130,6 +174,94 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Run the conversation connection
|
||||
|
||||
Start the connection using the `dapr run` command. For example, for this scenario, we're running `dapr run` on an application with the app ID `conversation` and pointing to our conversation YAML file in the `./config` directory.
|
||||
|
||||
{{< tabs ".NET" "Go" "Rust" >}}
|
||||
|
||||
<!-- .NET -->
{{% codetab %}}

```bash
dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- dotnet run
```

{{% /codetab %}}

<!-- Go -->
{{% codetab %}}

```bash
dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- go run ./main.go
```

**Expected output**

```
- '== APP == conversation output: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
```

{{% /codetab %}}
<!-- Rust -->
{{% codetab %}}

```bash
dapr run --app-id=conversation --resources-path ./config --dapr-grpc-port 3500 -- cargo run --example conversation
```

**Expected output**

```
- 'conversation input: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
- 'conversation output: Please write a witty haiku about the Dapr distributed programming framework at dapr.io'
```

{{% /codetab %}}

{{< /tabs >}}
## Advanced features

The conversation API supports the following features:

1. **Prompt caching:** Allows developers to cache prompts in Dapr, leading to much faster response times and reducing costs on egress and on inserting the prompt into the LLM provider's cache.

1. **PII scrubbing:** Allows for the obfuscation of data going in and out of the LLM.

To learn how to enable these features, see the [conversation API reference guide]({{< ref conversation_api.md >}}).
## Related links
|
||||
|
||||
Try out the conversation API using the full examples provided in the supported SDK repos.
|
||||
|
||||
|
||||
{{< tabs ".NET" "Go" "Rust" >}}
|
||||
|
||||
<!-- .NET -->
|
||||
{{% codetab %}}
|
||||
|
||||
[Dapr conversation example with the .NET SDK](https://github.com/dapr/dotnet-sdk/tree/master/examples/AI/ConversationalAI)
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Go -->
|
||||
{{% codetab %}}
|
||||
|
||||
[Dapr conversation example with the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/conversation)
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Rust -->
|
||||
{{% codetab %}}
|
||||
|
||||
[Dapr conversation example with the Rust SDK](https://github.com/dapr/rust-sdk/tree/main/examples/src/conversation)
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Next steps
|
||||
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "How-To: Schedule and handle triggered jobs"
|
||||
linkTitle: "How-To: Schedule and handle triggered jobs"
|
||||
weight: 2000
|
||||
weight: 5000
|
||||
description: "Learn how to use the jobs API to schedule and handle triggered jobs"
|
||||
---
|
||||
|
||||
|
@ -20,7 +20,103 @@ When you [run `dapr init` in either self-hosted mode or on Kubernetes]({{< ref i
|
|||
|
||||
In your code, set up and schedule jobs within your application.
|
||||
|
||||
{{< tabs "Go" >}}
|
||||
{{< tabs ".NET" "Go" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
<!-- .NET -->
|
||||
|
||||
The following .NET SDK code sample schedules the job named `prod-db-backup`. The job data contains information
|
||||
about the database that you'll be seeking to back up regularly. Over the course of this example, you'll:
|
||||
- Define types used in the rest of the example
|
||||
- Register an endpoint during application startup that handles all job trigger invocations on the service
|
||||
- Register the job with Dapr
|
||||
|
||||
In the following example, you'll create records that you'll serialize and register alongside the job so the information
|
||||
is available when the job is triggered in the future:
|
||||
- The name of the backup task (`db-backup`)
|
||||
- The backup task's `Metadata`, including:
|
||||
- The database name (`DBName`)
|
||||
- The database location (`BackupLocation`)
|
||||
|
||||
Create an ASP.NET Core project and add the latest version of `Dapr.Jobs` from NuGet.
|
||||
|
||||
> **Note:** While it's not strictly necessary
|
||||
for your project to use the `Microsoft.NET.Sdk.Web` SDK to create jobs, as of the time this documentation is authored,
|
||||
only the service that schedules a job receives trigger invocations for it. As those invocations expect an endpoint
|
||||
that can handle the job trigger, which requires the `Microsoft.NET.Sdk.Web` SDK, it's recommended that you
|
||||
use an ASP.NET Core project for this purpose.
|
||||
|
||||
Start by defining types to persist our backup job data and apply our own JSON property name attributes to the properties
|
||||
so they're consistent with other language examples.
|
||||
|
||||
```cs
|
||||
//Define the types that we'll represent the job data with
|
||||
internal sealed record BackupJobData([property: JsonPropertyName("task")] string Task, [property: JsonPropertyName("metadata")] BackupMetadata Metadata);
|
||||
internal sealed record BackupMetadata([property: JsonPropertyName("DBName")]string DatabaseName, [property: JsonPropertyName("BackupLocation")] string BackupLocation);
|
||||
```
|
||||
|
||||
Next, set up a handler as part of your application setup that will be called anytime a job is triggered on your
|
||||
application. It's the responsibility of this handler to identify how jobs should be processed based on the job name provided.
|
||||
|
||||
This works by registering a handler with ASP.NET Core at `/job/<job-name>`, where `<job-name>` is parameterized and
|
||||
passed into this handler delegate, meeting Dapr's expectation that an endpoint is available to handle triggered named jobs.
|
||||
|
||||
Populate your `Program.cs` file with the following:
|
||||
|
||||
```cs
|
||||
using System.Text;
|
||||
using System.Text.Json;
|
||||
using Dapr.Jobs;
|
||||
using Dapr.Jobs.Extensions;
|
||||
using Dapr.Jobs.Models;
|
||||
using Dapr.Jobs.Models.Responses;
|
||||
|
||||
var builder = WebApplication.CreateBuilder(args);
|
||||
builder.Services.AddDaprJobsClient();
|
||||
var app = builder.Build();
|
||||
|
||||
//Registers an endpoint to receive and process triggered jobs
|
||||
var cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(5));
|
||||
app.MapDaprScheduledJobHandler(async (string jobName, ReadOnlyMemory<byte> jobPayload, ILogger logger, CancellationToken cancellationToken) => {
|
||||
logger?.LogInformation("Received trigger invocation for job '{jobName}'", jobName);
|
||||
switch (jobName)
|
||||
{
|
||||
case "prod-db-backup":
|
||||
// Deserialize the job payload metadata
|
||||
var jobData = JsonSerializer.Deserialize<BackupJobData>(jobPayload);
|
||||
|
||||
// Process the backup operation - we assume this is implemented elsewhere in your code
|
||||
await BackupDatabaseAsync(jobData, cancellationToken);
|
||||
break;
|
||||
}
|
||||
}, cancellationTokenSource.Token);
|
||||
|
||||
await app.RunAsync();
|
||||
```
|
||||
|
||||
Finally, the job itself needs to be registered with Dapr so it can be triggered at a later point in time. You can do this
|
||||
by injecting a `DaprJobsClient` into a class and executing as part of an inbound operation to your application, but for
|
||||
this example's purposes, it'll go at the bottom of the `Program.cs` file you started above. Because you'll be using the
|
||||
`DaprJobsClient` you registered with dependency injection, start by creating a scope so you can access it.
|
||||
|
||||
```cs
|
||||
//Create a scope so we can access the registered DaprJobsClient
|
||||
await using var scope = app.Services.CreateAsyncScope();
|
||||
var daprJobsClient = scope.ServiceProvider.GetRequiredService<DaprJobsClient>();
|
||||
|
||||
//Create the payload we wish to present alongside our future job triggers
|
||||
var jobData = new BackupJobData("db-backup", new BackupMetadata("my-prod-db", "/backup-dir"));
|
||||
|
||||
//Serialize our payload to UTF-8 bytes
|
||||
var serializedJobData = JsonSerializer.SerializeToUtf8Bytes(jobData);
|
||||
|
||||
//Schedule our backup job to run every minute, but only repeat 10 times
|
||||
await daprJobsClient.ScheduleJobAsync("prod-db-backup", DaprJobSchedule.FromDuration(TimeSpan.FromMinutes(1)),
|
||||
serializedJobData, repeats: 10);
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
|
@ -92,66 +188,8 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul
|
|||
}
|
||||
```
|
||||
|
||||
At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example:
|
||||
|
||||
#### HTTP
|
||||
|
||||
When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at
|
||||
`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
|
||||
events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
|
||||
triggered. For example:
|
||||
|
||||
*Note: The following example is in Go but applies to any programming language.*
|
||||
|
||||
```go
|
||||
|
||||
func main() {
|
||||
...
|
||||
http.HandleFunc("/job/", handleJob)
|
||||
http.HandleFunc("/job/<job-name>", specificJob)
|
||||
...
|
||||
}
|
||||
|
||||
func specificJob(w http.ResponseWriter, r *http.Request) {
|
||||
// Handle specific triggered job
|
||||
}
|
||||
|
||||
func handleJob(w http.ResponseWriter, r *http.Request) {
|
||||
// Handle the triggered jobs
|
||||
}
|
||||
```
|
||||
|
||||
#### gRPC
|
||||
|
||||
When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
|
||||
callback function:
|
||||
|
||||
*Note: The following example is in Go but applies to any programming language with gRPC support.*
|
||||
|
||||
```go
|
||||
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
|
||||
...
|
||||
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
|
||||
// Handle the triggered job
|
||||
}
|
||||
```
|
||||
|
||||
This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
|
||||
you register the callback server, which will invoke this function when a job is triggered:
|
||||
|
||||
```go
|
||||
...
|
||||
js := &JobService{}
|
||||
rtv1.RegisterAppCallbackAlphaServer(server, js)
|
||||
```
|
||||
|
||||
In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
|
||||
through this gRPC method.
|
||||
|
||||
#### SDKs
|
||||
|
||||
For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the
|
||||
event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this:
|
||||
When a job is triggered, Dapr will automatically route the job to the event handler you set up during the server
|
||||
initialization. For example, in Go, you'd register the event handler like this:
|
||||
|
||||
```go
|
||||
...
|
||||
|
|
|
@ -0,0 +1,121 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Features and concepts"
|
||||
linkTitle: "Features and concepts"
|
||||
weight: 2000
|
||||
description: "Learn more about the Dapr Jobs features and concepts"
|
||||
---
|
||||
|
||||
Now that you've learned about the [jobs building block]({{< ref jobs-overview.md >}}) at a high level, let's deep dive
|
||||
into the features and concepts included with Dapr Jobs and the various SDKs. Dapr Jobs:
|
||||
- Provides a robust and scalable API for scheduling operations to be triggered in the future.
|
||||
- Exposes several capabilities which are common across all supported languages.
|
||||
|
||||
|
||||
|
||||
## Job identity
|
||||
|
||||
All jobs are registered with a case-sensitive job name. These names are intended to be unique across all services
|
||||
interfacing with the Dapr runtime. The name is used as an identifier when creating and modifying the job as well as
|
||||
to indicate which job a triggered invocation is associated with.
|
||||
|
||||
Only one job can be associated with a name at any given time. Any attempt to create a new job using the same name
|
||||
as an existing job overwrites the existing job.
|
||||
|
||||
## Scheduling Jobs
|
||||
A job can be scheduled using any of the following mechanisms:
|
||||
- Intervals using Cron expressions, duration values, or period expressions
|
||||
- Specific dates and times
|
||||
|
||||
For all time-based schedules, if a timestamp is provided with a time zone via the RFC3339 specification, that
|
||||
time zone is used. When not provided, the time zone used by the server running Dapr is used.
|
||||
In other words, do **not** assume that times run in UTC time zone, unless otherwise specified when scheduling
|
||||
the job.
|
||||
|
||||
### Schedule using a Cron expression
|
||||
When scheduling a job to execute on a specific interval using a Cron expression, the expression is written using 6
|
||||
fields spanning the values specified in the table below:
|
||||
|
||||
| seconds | minutes | hours | day of month | month | day of week |
|
||||
| -- | -- | -- | -- | -- | -- |
|
||||
| 0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat |
|
||||
|
||||
#### Example 1
|
||||
`"0 30 * * * *"` triggers every hour on the half-hour mark.
|
||||
|
||||
#### Example 2
|
||||
`"0 15 3 * * *"` triggers every day at 03:15.
|
||||
|
||||
### Schedule using a duration value
|
||||
You can schedule jobs using [a Go duration string](https://pkg.go.dev/time#ParseDuration), in which
|
||||
a string consists of a (possibly) signed sequence of decimal numbers, each with an optional fraction and a unit suffix.
|
||||
Valid time units are `"ns"`, `"us"`, `"ms"`, `"s"`, `"m"`, or `"h"`.
|
||||
|
||||
#### Example 1
|
||||
`"2h45m"` triggers every 2 hours and 45 minutes.
|
||||
|
||||
#### Example 2
|
||||
`"37m25s"` triggers every 37 minutes and 25 seconds.
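
Since these interval schedules are plain Go duration strings, you can sanity-check one with `time.ParseDuration` before handing it to the jobs API. This is a minimal sketch; the `intervalSeconds` helper is just for illustration and is not part of any Dapr SDK.

```go
package main

import (
	"fmt"
	"time"
)

// intervalSeconds parses a Go duration string (the same format the jobs API
// accepts for interval schedules) and returns its length in whole seconds.
func intervalSeconds(s string) (int, error) {
	d, err := time.ParseDuration(s)
	if err != nil {
		return 0, err
	}
	return int(d.Seconds()), nil
}

func main() {
	// The two examples above: 2h45m and 37m25s
	for _, s := range []string{"2h45m", "37m25s"} {
		secs, err := intervalSeconds(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d seconds\n", s, secs)
	}
}
```

The same check applies to the `@every` period expressions described below, since `@every` also accepts a Go duration string.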
|
||||
|
||||
### Schedule using a period expression
|
||||
The following period expressions are supported. The "@every" expression also accepts a [Go duration string](https://pkg.go.dev/time#ParseDuration).
|
||||
|
||||
| Entry | Description | Equivalent Cron expression |
|
||||
| -- | -- | -- |
|
||||
| @every | Run every (e.g. "@every 1h30m") | N/A |
|
||||
| @yearly (or @annually) | Run once a year, midnight, January 1st | 0 0 0 1 1 * |
|
||||
| @monthly | Run once a month, midnight, first of month | 0 0 0 1 * * |
|
||||
| @weekly | Run once a week, midnight on Sunday | 0 0 0 * * 0 |
|
||||
| @daily or @midnight | Run once a day at midnight | 0 0 0 * * * |
|
||||
| @hourly | Run once an hour at the beginning of the hour | 0 0 * * * * |
|
||||
|
||||
### Schedule using a specific date/time
|
||||
A job can also be scheduled to run at a particular point in time by providing a date using the
|
||||
[RFC3339 specification](https://www.rfc-editor.org/rfc/rfc3339).
|
||||
|
||||
#### Example 1
|
||||
`"2025-12-09T16:09:53+00:00"` Indicates that the job should be run on December 9, 2025 at 4:09:53 PM UTC.
|
||||
|
||||
## Scheduled triggers
|
||||
When a scheduled Dapr job is triggered, the runtime sends a message back to the service that scheduled the job using
|
||||
either the HTTP or gRPC approach, depending on which is registered with Dapr when the service starts.
|
||||
|
||||
### gRPC
|
||||
When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
|
||||
callback function:
|
||||
|
||||
> **Note:** The following example is in Go, but applies to any programming language with gRPC support.
|
||||
|
||||
```go
|
||||
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
|
||||
...
|
||||
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
|
||||
// Handle the triggered job
|
||||
}
|
||||
```
|
||||
|
||||
This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
|
||||
you register the callback server, which invokes this function when a job is triggered:
|
||||
|
||||
```go
|
||||
...
|
||||
js := &JobService{}
|
||||
rtv1.RegisterAppCallbackAlphaServer(server, js)
|
||||
```
|
||||
|
||||
In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
|
||||
through this gRPC method.
|
||||
|
||||
### HTTP
|
||||
If a gRPC server isn't registered with Dapr when the application starts up, Dapr instead triggers jobs by making a
|
||||
POST request to the endpoint `/job/<job-name>`. The body includes the following information about the job:
|
||||
- `Schedule`: When the job triggers occur
|
||||
- `RepeatCount`: An optional value indicating how often the job should repeat
|
||||
- `DueTime`: An optional point in time representing either the one time when the job should execute (if not recurring)
|
||||
or the not-before time from which the schedule should take effect
|
||||
- `Ttl`: An optional value indicating when the job should expire
|
||||
- `Payload`: A collection of bytes containing data originally stored when the job was scheduled
|
||||
|
||||
The `DueTime` and `Ttl` fields contain an RFC3339 timestamp reflecting the time zone provided when the job was
|
||||
originally scheduled. If no time zone was provided, these values indicate the time zone used by the server running
|
||||
Dapr.
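
A handler for the HTTP trigger described above can deserialize these fields from the POST body. The sketch below assumes the JSON key casing shown in the struct tags; verify the exact payload shape against the sidecar version you run before relying on it.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jobEvent sketches the POST body fields listed above. The JSON key names
// are assumptions for illustration, not a guaranteed wire format.
type jobEvent struct {
	Schedule    string `json:"schedule,omitempty"`
	RepeatCount int    `json:"repeatCount,omitempty"`
	DueTime     string `json:"dueTime,omitempty"`
	TTL         string `json:"ttl,omitempty"`
	// encoding/json decodes a base64 JSON string into []byte automatically
	Payload []byte `json:"payload,omitempty"`
}

func decodeJobEvent(body []byte) (jobEvent, error) {
	var ev jobEvent
	err := json.Unmarshal(body, &ev)
	return ev, err
}

func main() {
	body := []byte(`{"schedule":"@every 1m","repeatCount":10,"payload":"aGVsbG8="}`)
	ev, err := decodeJobEvent(body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("schedule=%s repeats=%d payload=%s\n", ev.Schedule, ev.RepeatCount, ev.Payload)
}
```

Inside an actual `/job/<job-name>` handler, you would read `r.Body` and pass it to a decoder like this before running your business logic.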
|
|
@ -8,7 +8,7 @@ description: "Overview of the jobs API building block"
|
|||
|
||||
Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or for a specific interval.
|
||||
|
||||
Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the scheduler service to schedule actor reminders.
|
||||
Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the Scheduler service to schedule actor reminders.
|
||||
|
||||
Jobs in Dapr consist of:
|
||||
- [The jobs API building block]({{< ref jobs_api.md >}})
|
||||
|
@ -57,7 +57,9 @@ The jobs API provides several features to make it easy for you to schedule jobs.
|
|||
|
||||
### Schedule jobs across multiple replicas
|
||||
|
||||
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 scheduler service instance.
|
||||
When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
|
||||
|
||||
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
|
||||
|
||||
## Try out the jobs API
|
||||
|
||||
|
|
|
@ -108,6 +108,26 @@ with DaprClient() as client:
|
|||
topic_name='orders',
|
||||
publish_metadata={'cloudevent.id': 'd99b228f-6c73-4e78-8c4d-3f80a043d317', 'cloudevent.source': 'payment'}
|
||||
)
|
||||
|
||||
# or
|
||||
|
||||
cloud_event = {
|
||||
'specversion': '1.0',
|
||||
'type': 'com.example.event',
|
||||
'source': 'payment',
|
||||
'id': 'd99b228f-6c73-4e78-8c4d-3f80a043d317',
|
||||
'data': {'orderId': i},
|
||||
'datacontenttype': 'application/json',
|
||||
...
|
||||
}
|
||||
|
||||
# Set the data content type to 'application/cloudevents+json'
|
||||
result = client.publish_event(
|
||||
pubsub_name='order_pub_sub',
|
||||
topic_name='orders',
|
||||
data=json.dumps(cloud_event),
|
||||
data_content_type='application/cloudevents+json',
|
||||
)
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
|
|
@ -70,7 +70,7 @@ app.get('/dapr/subscribe', (_req, res) => {
|
|||
## Retries and dead letter topics
|
||||
|
||||
By default, when a dead letter topic is set, any failing message immediately goes to the dead letter topic. As a result, it is recommended to always have a retry policy set when using dead letter topics in a subscription.
|
||||
To enable the retry of a message before sending it to the dead letter topic, apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the pub/sub component.
|
||||
To enable the retry of a message before sending it to the dead letter topic, apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the pub/sub component.
|
||||
|
||||
This example shows how to set a constant retry policy named `pubsubRetry`, with 10 maximum delivery attempts applied every 5 seconds for the `pubsub` pub/sub component.
|
||||
|
||||
|
|
|
@ -20,7 +20,7 @@ Not using CloudEvents disables support for tracing, event deduplication per mess
|
|||
|
||||
To disable CloudEvent wrapping, set the `rawPayload` metadata to `true` as part of the publishing request. This allows subscribers to receive these messages without having to parse the CloudEvent schema.
|
||||
|
||||
{{< tabs curl "Python SDK" "PHP SDK">}}
|
||||
{{< tabs curl ".NET" "Python" "PHP">}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
|
@ -28,6 +28,43 @@ curl -X "POST" http://localhost:3500/v1.0/publish/pubsub/TOPIC_A?metadata.rawPay
|
|||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
using Dapr.Client;
|
||||
|
||||
var builder = WebApplication.CreateBuilder(args);
|
||||
builder.Services.AddControllers().AddDapr();
|
||||
|
||||
var app = builder.Build();
|
||||
|
||||
app.MapPost("/publish", async (DaprClient daprClient) =>
|
||||
{
|
||||
var message = new Message(
|
||||
Guid.NewGuid().ToString(),
|
||||
$"Hello at {DateTime.UtcNow}",
|
||||
DateTime.UtcNow
|
||||
);
|
||||
|
||||
await daprClient.PublishEventAsync(
|
||||
"pubsub", // pubsub name
|
||||
"messages", // topic name
|
||||
message, // message data
|
||||
new Dictionary<string, string>
|
||||
{
|
||||
{ "rawPayload", "true" },
|
||||
{ "content-type", "application/json" }
|
||||
}
|
||||
);
|
||||
|
||||
return Results.Ok(message);
|
||||
});
|
||||
|
||||
app.Run();
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
|
@ -74,9 +111,52 @@ Dapr apps are also able to subscribe to raw events coming from existing pub/sub
|
|||
|
||||
### Programmatically subscribe to raw events
|
||||
|
||||
When subscribing programmatically, add the additional metadata entry for `rawPayload` so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.
|
||||
When subscribing programmatically, add the additional metadata entry for `rawPayload` to allow the subscriber to receive a message that is not wrapped by a CloudEvent. For .NET, this metadata entry is called `isRawPayload`.
|
||||
|
||||
{{< tabs "Python" "PHP SDK" >}}
|
||||
{{< tabs ".NET" "Python" "PHP" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
using System.Text.Json;
|
||||
using System.Text.Json.Serialization;
|
||||
|
||||
var builder = WebApplication.CreateBuilder(args);
|
||||
var app = builder.Build();
|
||||
|
||||
app.MapGet("/dapr/subscribe", () =>
|
||||
{
|
||||
var subscriptions = new[]
|
||||
{
|
||||
new
|
||||
{
|
||||
pubsubname = "pubsub",
|
||||
topic = "messages",
|
||||
route = "/messages",
|
||||
metadata = new Dictionary<string, string>
|
||||
{
|
||||
{ "isRawPayload", "true" },
|
||||
{ "content-type", "application/json" }
|
||||
}
|
||||
}
|
||||
};
|
||||
return Results.Ok(subscriptions);
|
||||
});
|
||||
|
||||
app.MapPost("/messages", async (HttpContext context) =>
|
||||
{
|
||||
using var reader = new StreamReader(context.Request.Body);
|
||||
var json = await reader.ReadToEndAsync();
|
||||
|
||||
Console.WriteLine($"Raw message received: {json}");
|
||||
|
||||
return Results.Ok();
|
||||
});
|
||||
|
||||
app.Run();
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
|
@ -151,7 +231,7 @@ spec:
|
|||
default: /dsstatus
|
||||
pubsubname: pubsub
|
||||
metadata:
|
||||
rawPayload: "true"
|
||||
isRawPayload: "true"
|
||||
scopes:
|
||||
- app1
|
||||
- app2
|
||||
|
@ -161,4 +241,5 @@ scopes:
|
|||
|
||||
- Learn more about [publishing and subscribing messages]({{< ref pubsub-overview.md >}})
|
||||
- List of [pub/sub components]({{< ref supported-pubsub >}})
|
||||
- Read the [API reference]({{< ref pubsub_api.md >}})
|
||||
- Read the [API reference]({{< ref pubsub_api.md >}})
|
||||
- Read the .NET sample on how to [consume Kafka messages without CloudEvents](https://github.com/dapr/samples/pubsub-raw-payload)
|
|
@ -203,7 +203,112 @@ As messages are sent to the given message handler code, there is no concept of r
|
|||
|
||||
The example below shows the different ways to stream subscribe to a topic.
|
||||
|
||||
{{< tabs Go>}}
|
||||
{{< tabs Python Go >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
You can use the `subscribe` method, which returns a `Subscription` object and allows you to pull messages from the stream by calling the `next_message` method. This runs in the main thread and may block it while waiting for messages.
|
||||
|
||||
```python
|
||||
import time
|
||||
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc.subscription import StreamInactiveError
|
||||
|
||||
counter = 0
|
||||
|
||||
|
||||
def process_message(message):
|
||||
global counter
|
||||
counter += 1
|
||||
# Process the message here
|
||||
print(f'Processing message: {message.data()} from {message.topic()}...')
|
||||
return 'success'
|
||||
|
||||
|
||||
def main():
|
||||
with DaprClient() as client:
|
||||
global counter
|
||||
|
||||
subscription = client.subscribe(
|
||||
pubsub_name='pubsub', topic='orders', dead_letter_topic='orders_dead'
|
||||
)
|
||||
|
||||
try:
|
||||
while counter < 5:
|
||||
try:
|
||||
message = subscription.next_message()
|
||||
|
||||
except StreamInactiveError as e:
|
||||
print('Stream is inactive. Retrying...')
|
||||
time.sleep(1)
|
||||
continue
|
||||
if message is None:
|
||||
print('No message received within timeout period.')
|
||||
continue
|
||||
|
||||
# Process the message
|
||||
response_status = process_message(message)
|
||||
|
||||
if response_status == 'success':
|
||||
subscription.respond_success(message)
|
||||
elif response_status == 'retry':
|
||||
subscription.respond_retry(message)
|
||||
elif response_status == 'drop':
|
||||
subscription.respond_drop(message)
|
||||
|
||||
finally:
|
||||
print("Closing subscription...")
|
||||
subscription.close()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
|
||||
```
|
||||
|
||||
You can also use the `subscribe_with_handler` method, which accepts a callback function executed for each message received from the stream. This runs in a separate thread, so it doesn't block the main thread.
|
||||
|
||||
```python
|
||||
import time
|
||||
|
||||
from dapr.clients import DaprClient
|
||||
from dapr.clients.grpc._response import TopicEventResponse
|
||||
|
||||
counter = 0
|
||||
|
||||
|
||||
def process_message(message):
|
||||
# Process the message here
|
||||
global counter
|
||||
counter += 1
|
||||
print(f'Processing message: {message.data()} from {message.topic()}...')
|
||||
return TopicEventResponse('success')
|
||||
|
||||
|
||||
def main():
|
||||
with DaprClient() as client:
|
||||
# This will start a new thread that will listen for messages
|
||||
# and process them in the `process_message` function
|
||||
close_fn = client.subscribe_with_handler(
|
||||
pubsub_name='pubsub', topic='orders', handler_fn=process_message,
|
||||
dead_letter_topic='orders_dead'
|
||||
)
|
||||
|
||||
while counter < 5:
|
||||
time.sleep(1)
|
||||
|
||||
print("Closing subscription...")
|
||||
close_fn()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
```
|
||||
|
||||
[Learn more about streaming subscriptions using the Python SDK client.]({{< ref "python-client.md#streaming-message-subscription" >}})
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
|
|
|
@ -309,6 +309,8 @@ context.AddMetadata("dapr-stream", "true");
|
|||
|
||||
### Streaming gRPCs and Resiliency
|
||||
|
||||
> Currently, resiliency policies are not supported for service invocation via gRPC.
|
||||
|
||||
When proxying streaming gRPCs, due to their long-lived nature, [resiliency]({{< ref "resiliency-overview.md" >}}) policies are applied on the "initial handshake" only. As a consequence:
|
||||
|
||||
- If the stream is interrupted after the initial handshake, it will not be automatically re-established by Dapr. Your application will be notified that the stream has ended, and will need to recreate it.
|
||||
|
|
|
@ -6,7 +6,9 @@ weight: 4000
|
|||
description: "The Dapr Workflow engine architecture"
|
||||
---
|
||||
|
||||
[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. This article describes:
|
||||
[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. Dapr Workflows are built on top of Dapr Actors providing durability and scalability for workflow execution.
|
||||
|
||||
This article describes:
|
||||
|
||||
- The architecture of the Dapr Workflow engine
|
||||
- How the workflow engine interacts with application code
|
||||
|
@ -72,7 +74,7 @@ The internal workflow actor types are only registered after an app has registere
|
|||
|
||||
### Workflow actors
|
||||
|
||||
Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
|
||||
There are 2 different types of actors used with workflows: workflow actors and activity actors. Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
|
||||
|
||||
Each workflow actor saves its state using the following keys in the configured state store:
|
||||
|
||||
|
@ -84,7 +86,7 @@ Each workflow actor saves its state using the following keys in the configured s
|
|||
| `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), workflow actor state will remain in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of old workflow state.
|
||||
Workflow actor state remains in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. To address this either purge workflows using their ID or directly delete entries in the workflow DB store.
|
||||
{{% /alert %}}
|
||||
|
||||
The following diagram illustrates the typical lifecycle of a workflow actor.
|
||||
|
@ -122,7 +124,7 @@ Activity actors are short-lived:

### Reminder usage and execution guarantees

The Dapr Workflow ensures workflow fault tolerance by using [actor reminders]({{< ref "../actors/actors-timers-reminders.md#actor-reminders" >}}) to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor creates a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder reactivates the corresponding actor and the execution is retried.

<img src="/images/workflow-overview/workflow-actor-reminder-flow.png" width=600 alt="Diagram showing the process of invoking workflow actors"/>
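To make the create-reminder/run/delete-reminder sequence concrete, here is a small JavaScript sketch. Every name in it (`runWithReminder`, the `reminders` set) is hypothetical; this is only an illustration of the retry pattern, not Dapr's internal API.

```javascript
// Hypothetical sketch of the reminder-based retry pattern.
// None of these names are Dapr APIs; this only illustrates the sequence.
function runWithReminder(reminders, actorId, work) {
  reminders.add(actorId); // 1. create a reminder before invoking the code
  try {
    const result = work(); // 2. run the workflow or activity code
    reminders.delete(actorId); // 3. on success, delete the reminder
    return { ok: true, result };
  } catch (err) {
    // 4. on a crash, the reminder survives; it later reactivates the
    //    actor and the work is retried
    return { ok: false, pendingRetry: reminders.has(actorId) };
  }
}

const reminders = new Set();
const crash = runWithReminder(reminders, 'wf-1', () => { throw new Error('node crashed'); });
const success = runWithReminder(reminders, 'wf-1', () => 42);
console.log(crash.pendingRetry); // true: reminder still present, so a retry is due
console.log(success.ok);         // true: reminder deleted after completion
```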
@ -195,7 +195,7 @@ string randomString = GetRandomString();

// DON'T DO THIS!
Instant currentTime = Instant.now();
UUID newIdentifier = UUID.randomUUID();
String randomString = getRandomString();
```

{{% /codetab %}}
@ -242,7 +242,7 @@ string randomString = await context.CallActivityAsync<string>("GetRandomString")

```java
// Do this!!
Instant currentTime = context.getCurrentInstant();
Guid newIdentifier = context.newGuid();
String randomString = context.callActivity(GetRandomString.class.getName(), String.class).await();
```
@ -338,7 +338,7 @@ Do this:

```csharp
// Do this!!
string configuration = workflowInput.Configuration; // imaginary workflow input argument
string data = await context.CallActivityAsync<string>("MakeHttpCall", "https://example.com/api/data");
```
@ -348,7 +348,7 @@ string data = await context.CallActivityAsync<string>("MakeHttpCall", "https://e

```java
// Do this!!
String configuration = ctx.getInput(InputType.class).getConfiguration(); // imaginary workflow input argument
String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data", String.class).await();
```
@ -358,7 +358,7 @@ String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data

```javascript
// Do this!!
const configuration = workflowInput.getConfiguration(); // imaginary workflow input argument
const data = yield ctx.callActivity(makeHttpCall, "https://example.com/api/data");
```
@ -1,28 +0,0 @@

---
type: docs
title: "Bridge to Kubernetes support for Dapr services"
linkTitle: "Bridge to Kubernetes"
weight: 300
description: "Debug Dapr apps locally while still connected to your Kubernetes cluster"
---

Bridge to Kubernetes allows you to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. This type of debugging is often called *local tunnel debugging*.

{{< button text="Learn more about Bridge to Kubernetes" link="https://aka.ms/bridge-vscode-dapr" >}}

## Debug Dapr apps

Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/distributed-calculator):

<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/rxwg-__otso" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

{{% alert title="Isolation mode" color="warning" %}}
[Isolation mode](https://aka.ms/bridge-isolation-vscode-dapr) is currently not supported with Dapr apps. Make sure to launch Bridge to Kubernetes mode without isolation.
{{% /alert %}}

## Further reading

- [Bridge to Kubernetes documentation](https://code.visualstudio.com/docs/containers/bridge-to-kubernetes)
- [VSCode integration]({{< ref vscode >}})
@ -2,6 +2,6 @@

type: docs
title: "Components"
linkTitle: "Components"
weight: 70
description: "Learn more about developing Dapr's pluggable and middleware components"
---
@ -0,0 +1,8 @@

---
type: docs
title: "Error codes"
linkTitle: "Error codes"
weight: 30
description: "Error codes and messages you may encounter while using Dapr"
---
@ -0,0 +1,206 @@

---
type: docs
title: "Error codes reference guide"
linkTitle: "Reference"
description: "List of gRPC and HTTP error codes in Dapr and their descriptions"
weight: 20
---

The following tables list the error codes returned by the Dapr runtime.
The error codes are returned in the response body of an HTTP request or in the `ErrorInfo` section of a gRPC status response, if one is present.
An effort is underway to enrich all gRPC error responses according to the [Richer Error Model]({{< ref "grpc-error-codes.md#richer-grpc-error-model" >}}). Error codes without a corresponding gRPC code indicate those errors have not yet been updated to this model.

### Actors API

| HTTP Code | gRPC Code | Description |
| ---------------------------------- | --------- | ----------------------------------------------------------------------- |
| `ERR_ACTOR_INSTANCE_MISSING` | | Missing actor instance |
| `ERR_ACTOR_INVOKE_METHOD` | | Error invoking actor method |
| `ERR_ACTOR_RUNTIME_NOT_FOUND` | | Actor runtime not found |
| `ERR_ACTOR_STATE_GET` | | Error getting actor state |
| `ERR_ACTOR_STATE_TRANSACTION_SAVE` | | Error saving actor transaction |
| `ERR_ACTOR_REMINDER_CREATE` | | Error creating actor reminder |
| `ERR_ACTOR_REMINDER_DELETE` | | Error deleting actor reminder |
| `ERR_ACTOR_REMINDER_GET` | | Error getting actor reminder |
| `ERR_ACTOR_REMINDER_NON_HOSTED` | | Reminder operation on non-hosted actor type |
| `ERR_ACTOR_TIMER_CREATE` | | Error creating actor timer |
| `ERR_ACTOR_NO_APP_CHANNEL` | | App channel not initialized |
| `ERR_ACTOR_STACK_DEPTH` | | Maximum actor call stack depth exceeded |
| `ERR_ACTOR_NO_PLACEMENT` | | Placement service not configured |
| `ERR_ACTOR_RUNTIME_CLOSED` | | Actor runtime is closed |
| `ERR_ACTOR_NAMESPACE_REQUIRED` | | Actors must have a namespace configured when running in Kubernetes mode |
| `ERR_ACTOR_NO_ADDRESS` | | No address found for actor |

### Workflows API

| HTTP Code | gRPC Code | Description |
| ---------------------------------- | --------- | --------------------------------------------------------------------------------------- |
| `ERR_GET_WORKFLOW` | | Error getting workflow |
| `ERR_START_WORKFLOW` | | Error starting workflow |
| `ERR_PAUSE_WORKFLOW` | | Error pausing workflow |
| `ERR_RESUME_WORKFLOW` | | Error resuming workflow |
| `ERR_TERMINATE_WORKFLOW` | | Error terminating workflow |
| `ERR_PURGE_WORKFLOW` | | Error purging workflow |
| `ERR_RAISE_EVENT_WORKFLOW` | | Error raising event in workflow |
| `ERR_WORKFLOW_COMPONENT_MISSING` | | Missing workflow component |
| `ERR_WORKFLOW_COMPONENT_NOT_FOUND` | | Workflow component not found |
| `ERR_WORKFLOW_EVENT_NAME_MISSING` | | Missing workflow event name |
| `ERR_WORKFLOW_NAME_MISSING` | | Workflow name not configured |
| `ERR_INSTANCE_ID_INVALID` | | Invalid workflow instance ID. (Only alphanumeric and underscore characters are allowed) |
| `ERR_INSTANCE_ID_NOT_FOUND` | | Workflow instance ID not found |
| `ERR_INSTANCE_ID_PROVIDED_MISSING` | | Missing workflow instance ID |
| `ERR_INSTANCE_ID_TOO_LONG` | | Workflow instance ID too long |

### State management API

| HTTP Code | gRPC Code | Description |
| --------------------------------------- | --------------------------------------- | ----------------------------------------- |
| `ERR_STATE_TRANSACTION` | | Error in state transaction |
| `ERR_STATE_SAVE` | | Error saving state |
| `ERR_STATE_GET` | | Error getting state |
| `ERR_STATE_DELETE` | | Error deleting state |
| `ERR_STATE_BULK_DELETE` | | Error deleting state in bulk |
| `ERR_STATE_BULK_GET` | | Error getting state in bulk |
| `ERR_NOT_SUPPORTED_STATE_OPERATION` | | Operation not supported in transaction |
| `ERR_STATE_QUERY` | `DAPR_STATE_QUERY_FAILED` | Error querying state |
| `ERR_STATE_STORE_NOT_FOUND` | `DAPR_STATE_NOT_FOUND` | State store not found |
| `ERR_STATE_STORE_NOT_CONFIGURED` | `DAPR_STATE_NOT_CONFIGURED` | State store not configured |
| `ERR_STATE_STORE_NOT_SUPPORTED` | `DAPR_STATE_TRANSACTIONS_NOT_SUPPORTED` | State store does not support transactions |
| `ERR_STATE_STORE_NOT_SUPPORTED` | `DAPR_STATE_QUERYING_NOT_SUPPORTED` | State store does not support querying |
| `ERR_STATE_STORE_TOO_MANY_TRANSACTIONS` | `DAPR_STATE_TOO_MANY_TRANSACTIONS` | Too many operations per transaction |
| `ERR_MALFORMED_REQUEST` | `DAPR_STATE_ILLEGAL_KEY` | Invalid key |

### Configuration API

| HTTP Code | gRPC Code | Description |
| ---------------------------------------- | --------- | -------------------------------------- |
| `ERR_CONFIGURATION_GET` | | Error getting configuration |
| `ERR_CONFIGURATION_STORE_NOT_CONFIGURED` | | Configuration store not configured |
| `ERR_CONFIGURATION_STORE_NOT_FOUND` | | Configuration store not found |
| `ERR_CONFIGURATION_SUBSCRIBE` | | Error subscribing to configuration |
| `ERR_CONFIGURATION_UNSUBSCRIBE` | | Error unsubscribing from configuration |

### Crypto API

| HTTP Code | gRPC Code | Description |
| ------------------------------------- | --------- | ------------------------------- |
| `ERR_CRYPTO` | | Error in crypto operation |
| `ERR_CRYPTO_KEY` | | Error retrieving crypto key |
| `ERR_CRYPTO_PROVIDER_NOT_FOUND` | | Crypto provider not found |
| `ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED` | | Crypto providers not configured |

### Secrets API

| HTTP Code | gRPC Code | Description |
| ---------------------------------- | --------- | --------------------------- |
| `ERR_SECRET_GET` | | Error getting secret |
| `ERR_SECRET_STORE_NOT_FOUND` | | Secret store not found |
| `ERR_SECRET_STORES_NOT_CONFIGURED` | | Secret store not configured |
| `ERR_PERMISSION_DENIED` | | Permission denied by policy |

### Pub/Sub and messaging errors

| HTTP Code | gRPC Code | Description |
| ----------------------------- | -------------------------------------- | -------------------------------------- |
| `ERR_PUBSUB_EMPTY` | `DAPR_PUBSUB_NAME_EMPTY` | Pubsub name is empty |
| `ERR_PUBSUB_NOT_FOUND` | `DAPR_PUBSUB_NOT_FOUND` | Pubsub not found |
| `ERR_PUBSUB_NOT_FOUND` | `DAPR_PUBSUB_TEST_NOT_FOUND` | Pubsub not found |
| `ERR_PUBSUB_NOT_CONFIGURED` | `DAPR_PUBSUB_NOT_CONFIGURED` | Pubsub not configured |
| `ERR_TOPIC_NAME_EMPTY` | `DAPR_PUBSUB_TOPIC_NAME_EMPTY` | Topic name is empty |
| `ERR_PUBSUB_FORBIDDEN` | `DAPR_PUBSUB_FORBIDDEN` | Access to topic forbidden for APP ID |
| `ERR_PUBSUB_PUBLISH_MESSAGE` | `DAPR_PUBSUB_PUBLISH_MESSAGE` | Error publishing message |
| `ERR_PUBSUB_REQUEST_METADATA` | `DAPR_PUBSUB_METADATA_DESERIALIZATION` | Error deserializing metadata |
| `ERR_PUBSUB_CLOUD_EVENTS_SER` | `DAPR_PUBSUB_CLOUD_EVENT_CREATION` | Error creating CloudEvent |
| `ERR_PUBSUB_EVENTS_SER` | `DAPR_PUBSUB_MARSHAL_ENVELOPE` | Error marshalling CloudEvent envelope |
| `ERR_PUBSUB_EVENTS_SER` | `DAPR_PUBSUB_MARSHAL_EVENTS` | Error marshalling events to bytes |
| `ERR_PUBSUB_EVENTS_SER` | `DAPR_PUBSUB_UNMARSHAL_EVENTS` | Error unmarshalling events |
| `ERR_PUBLISH_OUTBOX` | | Error publishing message to outbox |

### Conversation API

| HTTP Code | gRPC Code | Description |
| --------------------------------- | --------- | --------------------------------------------- |
| `ERR_CONVERSATION_INVALID_PARMS` | | Invalid parameters for conversation component |
| `ERR_CONVERSATION_INVOKE` | | Error invoking conversation |
| `ERR_CONVERSATION_MISSING_INPUTS` | | Missing inputs for conversation |
| `ERR_CONVERSATION_NOT_FOUND` | | Conversation not found |

### Service Invocation / Direct Messaging API

| HTTP Code | gRPC Code | Description |
| ------------------- | --------- | ---------------------- |
| `ERR_DIRECT_INVOKE` | | Error invoking service |

### Bindings API

| HTTP Code | gRPC Code | Description |
| --------------------------- | --------- | ----------------------------- |
| `ERR_INVOKE_OUTPUT_BINDING` | | Error invoking output binding |

### Distributed Lock API

| HTTP Code | gRPC Code | Description |
| ------------------------------- | --------- | ------------------------- |
| `ERR_LOCK_STORE_NOT_CONFIGURED` | | Lock store not configured |
| `ERR_LOCK_STORE_NOT_FOUND` | | Lock store not found |
| `ERR_TRY_LOCK` | | Error acquiring lock |
| `ERR_UNLOCK` | | Error releasing lock |

### Healthz

| HTTP Code | gRPC Code | Description |
| ------------------------------- | --------- | --------------------------- |
| `ERR_HEALTH_NOT_READY` | | Dapr not ready |
| `ERR_HEALTH_APPID_NOT_MATCH` | | Dapr App ID does not match |
| `ERR_OUTBOUND_HEALTH_NOT_READY` | | Dapr outbound not ready |

### Common

| HTTP Code | gRPC Code | Description |
| ---------------------------- | --------- | -------------------------- |
| `ERR_API_UNIMPLEMENTED` | | API not implemented |
| `ERR_APP_CHANNEL_NIL` | | App channel is nil |
| `ERR_BAD_REQUEST` | | Bad request |
| `ERR_BODY_READ` | | Error reading request body |
| `ERR_INTERNAL` | | Internal error |
| `ERR_MALFORMED_REQUEST` | | Malformed request |
| `ERR_MALFORMED_REQUEST_DATA` | | Malformed request data |
| `ERR_MALFORMED_RESPONSE` | | Malformed response |

### Scheduler/Jobs API

| HTTP Code | gRPC Code | Description |
| ------------------------------- | ------------------------------- | -------------------------------------- |
| `DAPR_SCHEDULER_SCHEDULE_JOB` | `DAPR_SCHEDULER_SCHEDULE_JOB` | Error scheduling job |
| `DAPR_SCHEDULER_JOB_NAME` | `DAPR_SCHEDULER_JOB_NAME` | Job name should only be set in the URL |
| `DAPR_SCHEDULER_JOB_NAME_EMPTY` | `DAPR_SCHEDULER_JOB_NAME_EMPTY` | Job name is empty |
| `DAPR_SCHEDULER_GET_JOB` | `DAPR_SCHEDULER_GET_JOB` | Error getting job |
| `DAPR_SCHEDULER_LIST_JOBS` | `DAPR_SCHEDULER_LIST_JOBS` | Error listing jobs |
| `DAPR_SCHEDULER_DELETE_JOB` | `DAPR_SCHEDULER_DELETE_JOB` | Error deleting job |
| `DAPR_SCHEDULER_EMPTY` | `DAPR_SCHEDULER_EMPTY` | Required argument is empty |
| `DAPR_SCHEDULER_SCHEDULE_EMPTY` | `DAPR_SCHEDULER_SCHEDULE_EMPTY` | No schedule provided for job |

### Generic

| HTTP Code | gRPC Code | Description |
| --------- | --------- | ------------- |
| `ERROR` | `ERROR` | Generic error |

## Next steps

- [Handling HTTP error codes]({{< ref http-error-codes.md >}})
- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}})
@ -0,0 +1,62 @@

---
type: docs
title: "Errors overview"
linkTitle: "Overview"
weight: 10
description: "Overview of Dapr errors"
---

An error code is a numeric or alphanumeric code that indicates the nature of an error and, when possible, why it occurred.

Dapr error codes are standardized strings for more than 80 common errors across HTTP and gRPC requests when using the Dapr APIs. These codes are:

- Returned in the JSON response body of the request.
- When enabled, logged in debug-level logs in the runtime.
  - If you're running in Kubernetes, error codes are logged in the sidecar.
  - If you're running in self-hosted mode, you can enable and run debug logs.

## Error format

Dapr error codes consist of a prefix, a category, and a shorthand description of the error itself. For example:

| Prefix | Category | Error shorthand |
| ------ | -------- | --------------- |
| ERR_ | PUBSUB_ | NOT_FOUND |
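To make the format concrete, here is a small JavaScript sketch that splits an error code into the three parts above. It assumes the common `PREFIX_CATEGORY_SHORTHAND` shape; not every code follows it exactly (for example, `ERR_PURGE_WORKFLOW`), and the helper is illustrative rather than an official Dapr utility.

```javascript
// Illustrative helper: split a Dapr error code into prefix, category,
// and shorthand. Assumes the PREFIX_CATEGORY_SHORTHAND shape.
function splitErrorCode(code) {
  const [prefix, category, ...rest] = code.split('_');
  return {
    prefix: `${prefix}_`,
    category: `${category}_`,
    shorthand: rest.join('_'),
  };
}

console.log(splitErrorCode('ERR_PUBSUB_NOT_FOUND'));
// { prefix: 'ERR_', category: 'PUBSUB_', shorthand: 'NOT_FOUND' }
```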
Some of the most common errors returned include:

- ERR_ACTOR_TIMER_CREATE
- ERR_PURGE_WORKFLOW
- ERR_STATE_STORE_NOT_FOUND
- ERR_HEALTH_NOT_READY

> **Note:** [See a full list of error codes in Dapr.]({{< ref error-codes-reference.md >}})

An error returned for a state store not found might look like the following:

```json
{
  "error": "Bad Request",
  "error_msg": "{\"errorCode\":\"ERR_STATE_STORE_NOT_FOUND\",\"message\":\"state store <name> is not found\",\"details\":[{\"@type\":\"type.googleapis.com/google.rpc.ErrorInfo\",\"domain\":\"dapr.io\",\"metadata\":{\"appID\":\"nodeapp\"},\"reason\":\"DAPR_STATE_NOT_FOUND\"}]}",
  "status": 400
}
```

The returned error includes:
- The error code: `ERR_STATE_STORE_NOT_FOUND`
- The error message describing the issue: `state store <name> is not found`
- The app ID in which the error occurred: `nodeapp`
- The reason for the error: `DAPR_STATE_NOT_FOUND`
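Because the `error_msg` field in such a response is itself a JSON string, client code has to parse it a second time to reach the error code and reason. The following JavaScript sketch shows this; the response object is reconstructed from the example rather than fetched from a live sidecar.

```javascript
// Sketch: unpacking the nested Dapr error payload. The outer "error_msg"
// field is a JSON string, so it needs a second JSON.parse.
const response = {
  error: "Bad Request",
  error_msg: JSON.stringify({
    errorCode: "ERR_STATE_STORE_NOT_FOUND",
    message: "state store mystore is not found",
    details: [{
      "@type": "type.googleapis.com/google.rpc.ErrorInfo",
      domain: "dapr.io",
      metadata: { appID: "nodeapp" },
      reason: "DAPR_STATE_NOT_FOUND",
    }],
  }),
  status: 400,
};

const inner = JSON.parse(response.error_msg);
const errorInfo = inner.details.find(
  (d) => d["@type"].endsWith("google.rpc.ErrorInfo")
);
console.log(inner.errorCode);  // ERR_STATE_STORE_NOT_FOUND
console.log(errorInfo.reason); // DAPR_STATE_NOT_FOUND
```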
## Dapr error code metrics

Metrics help you see exactly when errors are occurring from within the runtime. Error code metrics are collected using the `error_code_total` endpoint. This endpoint is disabled by default. You can [enable it using the `recordErrorCodes` field in your configuration file]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}).
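For reference, enabling these metrics typically means setting `recordErrorCodes` in the metrics section of a Dapr Configuration resource. This fragment is a sketch based on the field named above; check the linked metrics documentation for the exact schema.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  metrics:
    enabled: true
    recordErrorCodes: true
```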
## Demo

Watch a demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how to enable error code metrics and handle error codes returned in the runtime.

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/NTnwoDhHIcQ?si=I2uCB_TINGxlu-9v&start=2812" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

## Next step

{{< button text="See a list of all Dapr error codes" page="error-codes-reference" >}}
@ -1,20 +1,18 @@

---
type: docs
title: Handling gRPC error codes
linkTitle: "gRPC"
weight: 40
description: "Information on Dapr gRPC errors and how to handle them"
---

## Error handling: Understanding errors model and reporting

Initially, errors followed the [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model). However, to provide more detailed and informative error messages, an enhanced error model has been defined which aligns with the gRPC [Richer error model](https://grpc.io/docs/guides/error/#richer-error-model).

{{% alert title="Note" color="primary" %}}
Not all Dapr errors have been converted to the richer gRPC error model.
{{% /alert %}}

## Standard gRPC Error Model

The [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model) is an approach to error reporting in gRPC. Each error response includes an error code and an error message. The error codes are standardized and reflect common error conditions.

@ -25,7 +23,7 @@ ERROR:

Message: input key/keyPrefix 'bad||keyname' can't contain '||'
```

## Richer gRPC Error Model

The [Richer gRPC error model](https://grpc.io/docs/guides/error/#richer-error-model) extends the standard error model by providing additional context and details about the error. This model includes the standard error `code` and `message`, along with a `details` section that can contain various types of information, such as `ErrorInfo`, `ResourceInfo`, and `BadRequest` details.
@ -0,0 +1,21 @@

---
type: docs
title: "Handling HTTP error codes"
linkTitle: "HTTP"
description: "Detailed reference of the Dapr HTTP error codes and how to handle them"
weight: 30
---

For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the response body. The JSON contains an error code and a descriptive error message.

```json
{
  "errorCode": "ERR_STATE_GET",
  "message": "Requested state key does not exist in state store."
}
```
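A minimal JavaScript sketch of handling this shape on the client side; the status and body are hand-built for illustration, whereas in a real app they would come from an HTTP call to the sidecar.

```javascript
// Sketch: surface Dapr's HTTP error JSON as a JavaScript Error.
// The status and body are hand-built; normally they'd come from fetch().
function toError(status, body) {
  if (status >= 400 && body.errorCode) {
    return new Error(`${body.errorCode}: ${body.message}`);
  }
  return null;
}

const err = toError(500, {
  errorCode: 'ERR_STATE_GET',
  message: 'Requested state key does not exist in state store.',
});
console.log(err.message);
// ERR_STATE_GET: Requested state key does not exist in state store.
```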
## Related

- [Error code reference list]({{< ref error-codes-reference.md >}})
- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}})
@ -80,10 +80,16 @@ In production scenarios, it is recommended to use a solution such as:

If running on AWS EKS, you can [link an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html), which your pod can use.

All of these solutions solve the same problem: They allow the Dapr runtime process (or sidecar) to retrieve credentials dynamically, so that explicit credentials aren't needed. This provides several benefits, such as automated key rotation, and avoiding having to manage secrets.

Both Kiam and Kube2IAM work by intercepting calls to the [instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).

### Setting Up Dapr with AWS EKS Pod Identity

EKS Pod Identities provide the ability to manage credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance's role, you associate an IAM role with a Kubernetes service account and configure your pods to use the service account.

To see a comprehensive example of how to authorize pod access to AWS Secrets Manager from EKS using AWS EKS Pod Identity, [follow the sample in this repository](https://github.com/dapr/samples/tree/master/dapr-eks-podidentity).

### Use an instance profile when running in stand-alone mode on AWS EC2

If running Dapr directly on an AWS EC2 instance in stand-alone mode, you can use instance profiles.

@ -130,7 +136,6 @@ On Windows, the environment variable needs to be set before starting the `dapr`

{{< /tabs >}}

### Authenticate to AWS if using AWS SSO based profiles

If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials and "legacy" credentials, such as:

@ -157,7 +162,7 @@ AWS_PROFILE=myprofile awshelper daprd...

<!-- windows -->
{{% codetab %}}

On Windows, the environment variable needs to be set before starting the `awshelper` command; doing it inline (like in Linux/MacOS) is not supported.

{{% /codetab %}}

@ -169,4 +174,7 @@ On Windows, the environment variable needs to be set before starting the `awshel

## Related links

- For more information, see [how the AWS SDK (which Dapr uses) handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
- [EKS Pod Identity Documentation](https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html)
- [AWS SDK Credentials Configuration](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials)
- [Set up an Elastic Kubernetes Service (EKS) cluster](https://docs.dapr.io/operations/hosting/kubernetes/cluster/setup-eks/)
@ -1,21 +0,0 @@

---
type: docs
title: "How to: Integrate using the Testcontainers Dapr Module"
linkTitle: "Dapr Testcontainers"
weight: 3000
description: "Use the Testcontainers Dapr Module from your Java application"
---

You can use the Testcontainers Dapr Module provided by Diagrid to set up Dapr locally for your Java applications. Simply add the following dependency to your Maven project:

```xml
<dependency>
  <groupId>io.diagrid.dapr</groupId>
  <artifactId>testcontainers-dapr</artifactId>
  <version>0.10.x</version>
</dependency>
```

[If you're using Spring Boot, you can also use the Spring Boot Starter.](https://github.com/diagridio/spring-boot-starter-dapr)

{{< button text="Use the Testcontainers Dapr Module" link="https://github.com/diagridio/testcontainers-dapr" >}}
@ -2,7 +2,7 @@

type: docs
title: "How to: Use the gRPC interface in your Dapr application"
linkTitle: "gRPC interface"
weight: 400
description: "Use the Dapr gRPC API in your application"
---
@ -124,6 +124,7 @@ apps:

  appDirPath: ./nodeapp/
  appPort: 3000
  containerImage: ghcr.io/dapr/samples/hello-k8s-node:latest
  containerImagePullPolicy: Always
  createService: true
  env:
    APP_PORT: 3000

@ -134,6 +135,7 @@ apps:

> **Note:**
> - If the `containerImage` field is not specified, `dapr run -k -f` produces an error.
> - The `containerImagePullPolicy` field indicates that a new container image is always downloaded for this app.
> - The `createService` field defines a basic service in Kubernetes (ClusterIP or LoadBalancer) that targets the `--app-port` specified in the template. If `createService` isn't specified, the application is not accessible from outside the cluster.

For a more in-depth example and explanation of the template properties, see [Multi-app template]({{< ref multi-app-template.md >}}).

@ -169,4 +171,4 @@ Watch [this video for an overview on Multi-App Run in Kubernetes](https://youtu.

- [Learn the Multi-App Run template file structure and its properties]({{< ref multi-app-template.md >}})
- [Try out the self-hosted Multi-App Run template with the Service Invocation quickstart]({{< ref serviceinvocation-quickstart.md >}})
- [Try out the Kubernetes Multi-App Run template with the `hello-kubernetes` tutorial](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
@ -203,6 +203,7 @@ apps:
|
|||
appLogDestination: file # (optional), can be file, console or fileAndConsole. default is fileAndConsole.
|
||||
daprdLogDestination: file # (optional), can be file, console or fileAndConsole. default is file.
|
||||
containerImage: ghcr.io/dapr/samples/hello-k8s-node:latest # (optional) URI of the container image to be used when deploying to Kubernetes dev/test environment.
|
||||
containerImagePullPolicy: IfNotPresent # (optional), the container image is downloaded if one is not present locally, otherwise the local one is used.
|
||||
createService: true # (optional) Create a Kubernetes service for the application when deploying to dev/test environment.
|
||||
- appID: backend # optional
|
||||
appDirPath: .dapr/backend/ # REQUIRED
|
||||
|
@ -285,39 +286,39 @@ The properties for the Multi-App Run template align with the `dapr run -k` CLI f
|
|||
|
||||
{{< table "table table-white table-striped table-bordered" >}}
|
||||
|
||||
| Properties | Required | Details | Example |
|
||||
|--------------------------|:--------:|--------|---------|
|
||||
| Properties                 | Required | Details                                                                                                                                                                                                                  | Example                                        |
|----------------------------|:--------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|
| `appDirPath`               | Y        | Path to your application code                                                                                                                                                                                            | `./webapp/`, `./backend/`                      |
| `appID`                    | N        | Application's app ID. If not provided, it is derived from `appDirPath`                                                                                                                                                   | `webapp`, `backend`                            |
| `appChannelAddress`        | N        | The network address the application listens on. Can be left to the default value by convention.                                                                                                                          | `127.0.0.1`, `localhost`                       |
| `appProtocol`              | N        | The protocol Dapr uses to talk to the application.                                                                                                                                                                       | `http`, `grpc`                                 |
| `appPort`                  | N        | The port your application is listening on                                                                                                                                                                                | `8080`, `3000`                                 |
| `daprHTTPPort`             | N        | Dapr HTTP port                                                                                                                                                                                                           |                                                |
| `daprGRPCPort`             | N        | Dapr gRPC port                                                                                                                                                                                                           |                                                |
| `daprInternalGRPCPort`     | N        | gRPC port for the Dapr internal API to listen on; used when parsing the value from a local DNS component                                                                                                                 |                                                |
| `metricsPort`              | N        | The port that Dapr sends its metrics information to                                                                                                                                                                      |                                                |
| `unixDomainSocket`         | N        | Path to a Unix domain socket dir mount. If specified, communication with the Dapr sidecar uses Unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows.  | `/tmp/test-socket`                             |
| `profilePort`              | N        | The port for the profile server to listen on                                                                                                                                                                             |                                                |
| `enableProfiling`          | N        | Enable profiling via an HTTP endpoint                                                                                                                                                                                    |                                                |
| `apiListenAddresses`       | N        | Dapr API listen addresses                                                                                                                                                                                                |                                                |
| `logLevel`                 | N        | The log verbosity.                                                                                                                                                                                                       |                                                |
| `appMaxConcurrency`        | N        | The concurrency level of the application; default is unlimited                                                                                                                                                           |                                                |
| `placementHostAddress`     | N        |                                                                                                                                                                                                                          |                                                |
| `appSSL`                   | N        | Enable HTTPS when Dapr invokes the application                                                                                                                                                                           |                                                |
| `daprHTTPMaxRequestSize`   | N        | Max size of the request body in MB.                                                                                                                                                                                      |                                                |
| `daprHTTPReadBufferSize`   | N        | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. Default is 4 KB.                                                                                                              |                                                |
| `enableAppHealthCheck`     | N        | Enable the app health check on the application                                                                                                                                                                           | `true`, `false`                                |
| `appHealthCheckPath`       | N        | Path to the health check file                                                                                                                                                                                            | `/healthz`                                     |
| `appHealthProbeInterval`   | N        | Interval to probe for the health of the app in seconds                                                                                                                                                                   |                                                |
| `appHealthProbeTimeout`    | N        | Timeout for app health probes in milliseconds                                                                                                                                                                            |                                                |
| `appHealthThreshold`       | N        | Number of consecutive failures for the app to be considered unhealthy                                                                                                                                                    |                                                |
| `enableApiLogging`         | N        | Enable the logging of all API calls from application to Dapr                                                                                                                                                             |                                                |
| `env`                      | N        | Map of environment variables; environment variables applied per application overwrite environment variables shared across applications                                                                                   | `DEBUG`, `DAPR_HOST_ADD`                       |
| `appLogDestination`        | N        | Log destination for outputting app logs; its value can be `file`, `console`, or `fileAndConsole`. Default is `fileAndConsole`.                                                                                           | `file`, `console`, `fileAndConsole`            |
| `daprdLogDestination`      | N        | Log destination for outputting daprd logs; its value can be `file`, `console`, or `fileAndConsole`. Default is `file`.                                                                                                   | `file`, `console`, `fileAndConsole`            |
| `containerImage`           | N        | URI of the container image to be used when deploying to a Kubernetes dev/test environment.                                                                                                                               | `ghcr.io/dapr/samples/hello-k8s-python:latest` |
| `containerImagePullPolicy` | N        | The container image pull policy (defaults to `Always`).                                                                                                                                                                  | `Always`, `IfNotPresent`, `Never`              |
| `createService`            | N        | Create a Kubernetes service for the application when deploying to a dev/test environment.                                                                                                                                | `true`, `false`                                |
{{< /table >}}
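As an illustration, a hypothetical `dapr.yaml` combining several of these properties might look like the following. The app IDs, paths, and port values here are placeholders, and the optional properties shown are just a sample of what the table above allows:

```yml
version: 1
common:
  resourcesPath: ./components/
apps:
  - appID: webapp                 # optional; derived from appDirPath if omitted
    appDirPath: ./webapp/
    appPort: 8080
    appProtocol: http
    daprHTTPPort: 3500
    appLogDestination: console
    command: ["python3", "app.py"]
  - appDirPath: ./backend/
    appPort: 3000
    command: ["npm", "run", "start"]
```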
@@ -1,9 +1,9 @@
---
type: docs
title: "Serialization in Dapr's SDKs"
-linkTitle: "Serialization"
+linkTitle: "SDK Serialization"
description: "How Dapr serializes data within the SDKs"
-weight: 2000
+weight: 400
aliases:
- '/developing-applications/sdks/serialization/'
---
@@ -10,7 +10,7 @@ no_list: true
Hit the ground running with our Dapr quickstarts, complete with code samples aimed to get you started quickly with Dapr.

{{% alert title="Note" color="primary" %}}
-We are actively working on adding to our quickstart library. In the meantime, you can explore Dapr through our [tutorials]({{< ref "getting-started/tutorials/_index.md" >}}).
+Each release, the quickstart library has new examples added for the APIs and SDKs. You can also explore Dapr through the [tutorials]({{< ref "getting-started/tutorials/_index.md" >}}).
{{% /alert %}}
@@ -33,4 +33,4 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |
| [Cryptography]({{< ref cryptography-quickstart.md >}}) | Encrypt and decrypt data using Dapr's cryptographic APIs. |
| [Jobs]({{< ref jobs-quickstart.md >}}) | Schedule, retrieve, and delete jobs using Dapr's jobs APIs. |
+| [Conversation]({{< ref conversation-quickstart.md >}}) | Securely and reliably interact with Large Language Models (LLMs). |
@@ -20,8 +20,8 @@ As a quick overview of the .NET actors quickstart:
1. Using a `SmartDevice.Service` microservice, you host:
   - Two `SmokeDetectorActor` smoke alarm objects
   - A `ControllerActor` object that commands and controls the smart devices
-1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
-1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
+2. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
+3. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.

<img src="/images/actors-quickstart/actors-quickstart.png" width=800 style="padding-bottom:15px;">
@@ -30,10 +30,13 @@ As a quick overview of the .NET actors quickstart:
For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9 will be supported in Dapr v1.16 and later releases.

### Step 1: Set up the environment
@@ -443,10 +443,13 @@ In the YAML file:
For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9 will be supported in Dapr v1.16 and later releases.

### Step 1: Set up the environment
@@ -272,10 +272,13 @@ setTimeout(() => {
For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
+
+**NOTE:** .NET 6 is the minimum supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9 will be supported in Dapr v1.16 and later releases.

### Step 1: Set up the environment
@@ -0,0 +1,782 @@
---
type: docs
title: "Quickstart: Conversation"
linkTitle: Conversation
weight: 90
description: Get started with the Dapr conversation building block
---

{{% alert title="Alpha" color="warning" %}}
The conversation building block is currently in **alpha**.
{{% /alert %}}

Let's take a look at how the [Dapr conversation building block]({{< ref conversation-overview.md >}}) makes interacting with Large Language Models (LLMs) easier. In this quickstart, you use the echo component to communicate with a mock LLM, which echoes back the prompt you send it.

You can try out this conversation quickstart by either:

- [Running the application in this sample with the Multi-App Run template file]({{< ref "#run-the-app-with-the-template-file" >}}), or
- [Running the application without the template]({{< ref "#run-the-app-without-the-template" >}})

{{% alert title="Note" color="primary" %}}
Currently, only the HTTP quickstart sample is available in Python and JavaScript.
{{% /alert %}}

## Run the app with the template file

{{< tabs Python JavaScript ".NET" Go >}}
<!-- Python -->
{{% codetab %}}

### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).

```bash
git clone https://github.com/dapr/quickstarts.git
```

From the root of the Quickstarts directory, navigate into the conversation directory:

```bash
cd conversation/python/http/conversation
```

Install the dependencies:

```bash
pip3 install -r requirements.txt
```

### Step 3: Launch the conversation service

Navigate back to the `http` directory and start the conversation service with the following command:

```bash
dapr run -f .
```

**Expected output**

```
== APP - conversation == Input sent: What is dapr?
== APP - conversation == Output response: What is dapr?
```

### What happened?

Running `dapr run -f .` in this Quickstart used the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) to start the [`app.py` conversation app]({{< ref "#apppy-conversation-app" >}}).

#### `dapr.yaml` Multi-App Run template file

Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:

```yml
version: 1
common:
  resourcesPath: ../../components/
apps:
  - appID: conversation
    appDirPath: ./conversation/
    command: ["python3", "app.py"]
```

#### Echo mock LLM component

In the [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components) directory of the quickstart, the [`conversation.yaml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo mock LLM component.

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: echo
spec:
  type: conversation.echo
  version: v1
```

To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}}).
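As a rough sketch, a real-LLM component definition follows the same shape as the echo component, with provider-specific metadata added. The `key` and `model` values below are placeholders, not working settings; refer to the how-to guide above for the authoritative field list:

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: openai
spec:
  type: conversation.openai
  version: v1
  metadata:
    - name: key
      value: <YOUR_OPENAI_API_KEY>   # placeholder
    - name: model
      value: gpt-4-turbo             # placeholder model name
```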
#### `app.py` conversation app

In the application code:

- The app sends an input "What is dapr?" to the echo mock LLM component.
- The mock LLM echoes "What is dapr?".

```python
import logging
import requests
import os

logging.basicConfig(level=logging.INFO)

base_url = os.getenv('BASE_URL', 'http://localhost') + ':' + os.getenv(
    'DAPR_HTTP_PORT', '3500')

CONVERSATION_COMPONENT_NAME = 'echo'

input = {
    'name': 'echo',
    'inputs': [{'message': 'What is dapr?'}],
    'parameters': {},
    'metadata': {}
}

# Send input to conversation endpoint
result = requests.post(
    url='%s/v1.0-alpha1/conversation/%s/converse' % (base_url, CONVERSATION_COMPONENT_NAME),
    json=input
)

logging.info('Input sent: What is dapr?')

# Parse conversation output
data = result.json()
output = data["outputs"][0]["result"]

logging.info('Output response: ' + output)
```
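Note that `inputs` in the request body is a list, so a single request can carry several messages. As a minimal sketch of building such a payload (the helper function and message texts below are illustrative, not part of the quickstart):

```python
# Sketch: build a multi-message body for the same converse endpoint used above.
def build_converse_body(messages, component='echo'):
    """Return (path, body) for POSTing to the Dapr conversation alpha API."""
    body = {
        'name': component,
        'inputs': [{'message': m} for m in messages],
        'parameters': {},
        'metadata': {},
    }
    return '/v1.0-alpha1/conversation/%s/converse' % component, body

path, body = build_converse_body(['What is dapr?', 'What is a sidecar?'])
print(path)                 # /v1.0-alpha1/conversation/echo/converse
print(len(body['inputs']))  # 2
```

Append the returned path to your `base_url` and POST the body with `requests.post(..., json=body)` exactly as in the app above.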

{{% /codetab %}}

<!-- JavaScript -->
{{% codetab %}}
### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest Node.js installed](https://nodejs.org/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).

```bash
git clone https://github.com/dapr/quickstarts.git
```

From the root of the Quickstarts directory, navigate into the conversation directory:

```bash
cd conversation/javascript/http/conversation
```

Install the dependencies:

```bash
npm install
```

### Step 3: Launch the conversation service

Navigate back to the `http` directory and start the conversation service with the following command:

```bash
dapr run -f .
```

**Expected output**

```
== APP - conversation == Input sent: What is dapr?
== APP - conversation == Output response: What is dapr?
```

### What happened?

Running `dapr run -f .` in this Quickstart used the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) to start the [`index.js` conversation app]({{< ref "#indexjs-conversation-app" >}}).

#### `dapr.yaml` Multi-App Run template file

Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:

```yml
version: 1
common:
  resourcesPath: ../../components/
apps:
  - appID: conversation
    appDirPath: ./conversation/
    daprHTTPPort: 3502
    command: ["npm", "run", "start"]
```

#### Echo mock LLM component

In the [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components) directory of the quickstart, the [`conversation.yaml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo mock LLM component.

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: echo
spec:
  type: conversation.echo
  version: v1
```

To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}}).

#### `index.js` conversation app

In the application code:

- The app sends an input "What is dapr?" to the echo mock LLM component.
- The mock LLM echoes "What is dapr?".

```javascript
const conversationComponentName = "echo";

async function main() {
  const daprHost = process.env.DAPR_HOST || "http://localhost";
  const daprHttpPort = process.env.DAPR_HTTP_PORT || "3500";

  const inputBody = {
    name: "echo",
    inputs: [{ message: "What is dapr?" }],
    parameters: {},
    metadata: {},
  };

  const reqURL = `${daprHost}:${daprHttpPort}/v1.0-alpha1/conversation/${conversationComponentName}/converse`;

  try {
    const response = await fetch(reqURL, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify(inputBody),
    });

    console.log("Input sent: What is dapr?");

    const data = await response.json();
    const result = data.outputs[0].result;
    console.log("Output response:", result);
  } catch (error) {
    console.error("Error:", error.message);
    process.exit(1);
  }
}

main().catch((error) => {
  console.error("Unhandled error:", error);
  process.exit(1);
});
```

{{% /codetab %}}

<!-- .NET -->
{{% codetab %}}
### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [.NET 8 SDK or later installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).

```bash
git clone https://github.com/dapr/quickstarts.git
```

From the root of the Quickstarts directory, navigate into the conversation directory:

```bash
cd conversation/csharp/sdk
```

### Step 3: Launch the conversation service

Start the conversation service with the following command:

```bash
dapr run -f .
```

**Expected output**

```
== APP - conversation == Input sent: What is dapr?
== APP - conversation == Output response: What is dapr?
```

### What happened?

Running `dapr run -f .` in this Quickstart used the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) to start the [`Program.cs` conversation app]({{< ref "#programcs-conversation-app" >}}).

#### `dapr.yaml` Multi-App Run template file

Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:

```yml
version: 1
common:
  resourcesPath: ../../components/
apps:
  - appDirPath: ./conversation/
    appID: conversation
    daprHTTPPort: 3500
    command: ["dotnet", "run"]
```

#### Echo mock LLM component

In [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components), the [`conversation.yaml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo mock LLM component.

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: echo
spec:
  type: conversation.echo
  version: v1
```

To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}}).

#### `Program.cs` conversation app

In the application code:

- The app sends an input "What is dapr?" to the echo mock LLM component.
- The mock LLM echoes "What is dapr?".

```csharp
using Dapr.AI.Conversation;
using Dapr.AI.Conversation.Extensions;

class Program
{
  private const string ConversationComponentName = "echo";

  static async Task Main(string[] args)
  {
    const string prompt = "What is dapr?";

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddDaprConversationClient();
    var app = builder.Build();

    // Instantiate the Dapr conversation client
    var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();

    try
    {
      // Send a request to the echo mock LLM component
      var response = await conversationClient.ConverseAsync(ConversationComponentName, [new(prompt, DaprConversationRole.Generic)]);
      Console.WriteLine("Input sent: " + prompt);

      if (response != null)
      {
        Console.Write("Output response:");
        foreach (var resp in response.Outputs)
        {
          Console.WriteLine($" {resp.Result}");
        }
      }
    }
    catch (Exception ex)
    {
      Console.WriteLine("Error: " + ex.Message);
    }
  }
}
```

{{% /codetab %}}

<!-- Go -->
{{% codetab %}}
### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).

```bash
git clone https://github.com/dapr/quickstarts.git
```

From the root of the Quickstarts directory, navigate into the conversation directory:

```bash
cd conversation/go/sdk
```

### Step 3: Launch the conversation service

Start the conversation service with the following command:

```bash
dapr run -f .
```

**Expected output**

```
== APP - conversation == Input sent: What is dapr?
== APP - conversation == Output response: What is dapr?
```

### What happened?

Running `dapr run -f .` in this Quickstart used the [`dapr.yaml` Multi-App Run template file]({{< ref "#dapryaml-multi-app-run-template-file" >}}) to start the [`conversation.go` conversation app]({{< ref "#conversationgo-conversation-app" >}}).

#### `dapr.yaml` Multi-App Run template file

Running the [Multi-App Run template file]({{< ref multi-app-dapr-run >}}) with `dapr run -f .` starts all applications in your project. This Quickstart has only one application, so the `dapr.yaml` file contains the following:

```yml
version: 1
common:
  resourcesPath: ../../components/
apps:
  - appDirPath: ./conversation/
    appID: conversation
    daprHTTPPort: 3501
    command: ["go", "run", "."]
```

#### Echo mock LLM component

In the [`conversation/components`](https://github.com/dapr/quickstarts/tree/master/conversation/components) directory of the quickstart, the [`conversation.yaml` file](https://github.com/dapr/quickstarts/tree/master/conversation/components/conversation.yml) configures the echo mock LLM component.

```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: echo
spec:
  type: conversation.echo
  version: v1
```

To interface with a real LLM, swap out the mock component with one of [the supported conversation components]({{< ref "supported-conversation" >}}). For example, to use an OpenAI component, see the [example in the conversation how-to guide]({{< ref "howto-conversation-layer.md#use-the-openai-component" >}}).

#### `conversation.go` conversation app

In the application code:

- The app sends an input "What is dapr?" to the echo mock LLM component.
- The mock LLM echoes "What is dapr?".

```go
package main

import (
	"context"
	"fmt"
	"log"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}

	input := dapr.ConversationInput{
		Message: "What is dapr?",
		// Role:     nil, // Optional
		// ScrubPII: nil, // Optional
	}

	fmt.Println("Input sent:", input.Message)

	var conversationComponent = "echo"

	request := dapr.NewConversationRequest(conversationComponent, []dapr.ConversationInput{input})

	resp, err := client.ConverseAlpha1(context.Background(), request)
	if err != nil {
		log.Fatalf("err: %v", err)
	}

	fmt.Println("Output response:", resp.Outputs[0].Result)
}
```

{{% /codetab %}}

{{< /tabs >}}

## Run the app without the template

{{< tabs Python JavaScript ".NET" Go >}}

<!-- Python -->
{{% codetab %}}
### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).

```bash
git clone https://github.com/dapr/quickstarts.git
```

From the root of the Quickstarts directory, navigate into the conversation directory:

```bash
cd conversation/python/http/conversation
```

Install the dependencies:

```bash
pip3 install -r requirements.txt
```

### Step 3: Launch the conversation service

Navigate back to the `http` directory and start the conversation service with the following command:

```bash
dapr run --app-id conversation --resources-path ../../../components -- python3 app.py
```

> **Note**: Since `python3` is not defined on Windows, you may need to use `python app.py` instead of `python3 app.py`.

**Expected output**

```
== APP - conversation == Input sent: What is dapr?
== APP - conversation == Output response: What is dapr?
```

{{% /codetab %}}

<!-- JavaScript -->
{{% codetab %}}
### Step 1: Pre-requisites

For this example, you will need:

- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest Node.js installed](https://nodejs.org/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->

### Step 2: Set up the environment

Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).

```bash
git clone https://github.com/dapr/quickstarts.git
```

From the root of the Quickstarts directory, navigate into the conversation directory:

```bash
cd conversation/javascript/http/conversation
```

Install the dependencies:

```bash
npm install
```

### Step 3: Launch the conversation service

Navigate back to the `http` directory and start the conversation service with the following command:

```bash
dapr run --app-id conversation --resources-path ../../../components/ -- npm run start
```

**Expected output**

```
== APP - conversation == Input sent: What is dapr?
== APP - conversation == Output response: What is dapr?
```

{{% /codetab %}}
|
||||
|
||||
<!-- .NET -->
|
||||
{{% codetab %}}
|
||||
|
||||
|
||||
### Step 1: Prerequisites
|
||||
|
||||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- [.NET 8+ SDK installed](https://dotnet.microsoft.com/download).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
|
||||
### Step 2: Set up the environment
|
||||
|
||||
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
|
||||
|
||||
```bash
|
||||
git clone https://github.com/dapr/quickstarts.git
|
||||
```
|
||||
|
||||
From the root of the Quickstarts directory, navigate into the conversation directory:
|
||||
|
||||
```bash
|
||||
cd conversation/csharp/sdk/conversation
|
||||
```
|
||||
|
||||
Install the dependencies:
|
||||
|
||||
```bash
|
||||
dotnet build
|
||||
```
|
||||
|
||||
### Step 3: Launch the conversation service
|
||||
|
||||
Start the conversation service with the following command:
|
||||
|
||||
```bash
|
||||
dapr run --app-id conversation --resources-path ../../../components/ -- dotnet run
|
||||
```
|
||||
|
||||
**Expected output**
|
||||
|
||||
```
|
||||
== APP - conversation == Input sent: What is dapr?
|
||||
== APP - conversation == Output response: What is dapr?
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Go -->
|
||||
{{% codetab %}}
|
||||
|
||||
|
||||
### Step 1: Prerequisites
|
||||
|
||||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- [Latest version of Go](https://go.dev/dl/).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
|
||||
### Step 2: Set up the environment
|
||||
|
||||
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/conversation).
|
||||
|
||||
```bash
|
||||
git clone https://github.com/dapr/quickstarts.git
|
||||
```
|
||||
|
||||
From the root of the Quickstarts directory, navigate into the conversation directory:
|
||||
|
||||
```bash
|
||||
cd conversation/go/sdk/conversation
|
||||
```
|
||||
|
||||
Install the dependencies:
|
||||
|
||||
```bash
|
||||
go build .
|
||||
```
|
||||
|
||||
### Step 3: Launch the conversation service
|
||||
|
||||
Start the conversation service with the following command:
|
||||
|
||||
```bash
|
||||
dapr run --app-id conversation --resources-path ../../../components/ -- go run .
|
||||
```
|
||||
|
||||
**Expected output**
|
||||
|
||||
```
|
||||
== APP - conversation == Input sent: What is dapr?
|
||||
== APP - conversation == Output response: What is dapr?
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Demo
|
||||
|
||||
Watch the demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how the conversation API works using the .NET SDK.
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/NTnwoDhHIcQ?si=37SDcOHtEpgCIwkG&start=5444" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
|
||||
|
||||
## Tell us what you think!
|
||||
|
||||
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
|
||||
|
||||
Join the discussion in our [Discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
|
||||
|
||||
## Next steps
|
||||
|
||||
- HTTP samples of this quickstart:
|
||||
- [Python](https://github.com/dapr/quickstarts/tree/master/conversation/python/http)
|
||||
- [JavaScript](https://github.com/dapr/quickstarts/tree/master/conversation/javascript/http)
|
||||
- [.NET](https://github.com/dapr/quickstarts/tree/master/conversation/csharp/http)
|
||||
- [Go](https://github.com/dapr/quickstarts/tree/master/conversation/go/http)
|
||||
- Learn more about [the conversation building block]({{< ref conversation-overview.md >}})
|
||||
|
||||
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
|
|
@ -358,10 +358,13 @@ console.log("Published data: " + JSON.stringify(order));
|
|||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
|
||||
|
||||
**NOTE:** .NET 6 is the minimally supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
|
||||
will be supported in Dapr v1.16 and later releases.
|
||||
|
||||
### Step 2: Set up the environment
|
||||
|
||||
|
|
|
@ -247,10 +247,13 @@ Order-processor output:
|
|||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
|
||||
|
||||
**NOTE:** .NET 6 is the minimally supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
|
||||
will be supported in Dapr v1.16 and later releases.
|
||||
|
||||
### Step 1: Set up the environment
|
||||
|
||||
|
|
|
@ -315,10 +315,13 @@ console.log("Order passed: " + res.config.data);
|
|||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
|
||||
|
||||
**NOTE:** .NET 6 is the minimally supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
|
||||
will be supported in Dapr v1.16 and later releases.
|
||||
|
||||
### Step 2: Set up the environment
|
||||
|
||||
|
@ -439,13 +442,11 @@ app.MapPost("/orders", (Order order) =>
|
|||
In the Program.cs file for the `checkout` service, you'll notice there's little to change in your app code to use Dapr's service invocation. You can enable it by creating the `HttpClient` with the Dapr SDK's `DaprClient.CreateInvokeHttpClient`, passing the app ID of the target service.
|
||||
|
||||
```csharp
|
||||
var client = DaprClient.CreateInvokeHttpClient(appId: "order-processor");
|
||||
var cts = new CancellationTokenSource();
|
||||
|
||||
var response = await client.PostAsJsonAsync("/orders", order, cts.Token);
|
||||
Console.WriteLine("Order passed: " + order);
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
@ -1089,13 +1090,11 @@ dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet r
|
|||
In the Program.cs file for the `checkout` service, you'll notice there's little to change in your app code to use Dapr's service invocation. You can enable it by creating the `HttpClient` with the Dapr SDK's `DaprClient.CreateInvokeHttpClient`, passing the app ID of the target service.
|
||||
|
||||
```csharp
|
||||
var client = DaprClient.CreateInvokeHttpClient(appId: "order-processor");
|
||||
var cts = new CancellationTokenSource();
|
||||
|
||||
var response = await client.PostAsJsonAsync("/orders", order, cts.Token);
|
||||
Console.WriteLine("Order passed: " + order);
|
||||
```
|
||||
|
||||
### Step 5: Use with Multi-App Run
|
||||
|
|
|
@ -288,10 +288,13 @@ In the YAML file:
|
|||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0), [.NET 8](https://dotnet.microsoft.com/download/dotnet/8.0) or [.NET 9](https://dotnet.microsoft.com/download/dotnet/9.0) installed
|
||||
|
||||
**NOTE:** .NET 6 is the minimally supported version of .NET for the Dapr .NET SDK packages in this release. Only .NET 8 and .NET 9
|
||||
will be supported in Dapr v1.16 and later releases.
|
||||
|
||||
### Step 1: Set up the environment
|
||||
|
||||
|
|
|
@ -145,9 +145,12 @@ metrics:
|
|||
- /payments/{paymentID}/refund
|
||||
- /payments/{paymentID}/details
|
||||
excludeVerbs: false
|
||||
recordErrorCodes: true
|
||||
```
|
||||
|
||||
In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return _a single metric count_ matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}).
|
||||
|
||||
The above example also enables [recording error code metrics]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}), which is disabled by default.
|
||||
|
||||
The following table lists the properties for metrics:
|
||||
|
||||
|
|
|
@ -66,7 +66,7 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
|
|||
1. Create the cluster by running the following command:
|
||||
|
||||
```bash
|
||||
eksctl create cluster -f cluster-config.yaml
|
||||
```
|
||||
|
||||
1. Verify the kubectl context:
|
||||
|
|
|
@ -8,12 +8,13 @@ description: "Overview of how to get Dapr running on your Kubernetes cluster"
|
|||
|
||||
Dapr can be configured to run on any supported version of Kubernetes. To achieve this, Dapr begins by deploying the following Kubernetes services, which provide first-class integration to make running applications with Dapr easy.
|
||||
|
||||
| Kubernetes services | Description |
|
||||
|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| `dapr-operator` | Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) |
|
||||
| `dapr-sidecar-injector` | Injects Dapr into [annotated](#adding-dapr-to-a-kubernetes-deployment) deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. |
|
||||
| `dapr-placement` | Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods |
|
||||
| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}) |
|
||||
| `dapr-scheduler` | Provides distributed job scheduling capabilities used by the Jobs API, Workflow API, and Actor Reminders |
|
||||
|
||||
<img src="/images/overview-kubernetes.png" width=1000>
|
||||
|
||||
|
@ -61,4 +62,3 @@ For information about:
|
|||
- [Upgrade Dapr on a Kubernetes cluster]({{< ref kubernetes-upgrade >}})
|
||||
- [Production guidelines for Dapr on Kubernetes]({{< ref kubernetes-production.md >}})
|
||||
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes)
|
||||
- [Use Bridge to Kubernetes to debug Dapr apps locally, while connected to your Kubernetes cluster]({{< ref bridge-to-kubernetes >}})
|
||||
|
|
|
@ -120,13 +120,15 @@ In some scenarios, nodes may have memory and/or cpu pressure and the Dapr contro
|
|||
for eviction. To prevent this, you can set a critical priority class name for the Dapr control plane pods. This ensures that
|
||||
the Dapr control plane pods are not evicted unless all other pods with lower priority are evicted.
|
||||
|
||||
It's particularly important to protect the Dapr control plane components from eviction, especially the Scheduler service. When Schedulers are rescheduled or restarted, it can be highly disruptive to inflight jobs, potentially causing them to fire duplicate times. To prevent such disruptions, you should ensure the Dapr control plane components have a higher priority class than your application workloads.
|
||||
|
||||
Learn more about [Protecting Mission-Critical Pods](https://kubernetes.io/blog/2023/01/12/protect-mission-critical-pods-priorityclass/).
|
||||
|
||||
There are two built-in critical priority classes in Kubernetes:
|
||||
- `system-cluster-critical`
|
||||
- `system-node-critical` (highest priority)
|
||||
|
||||
It's recommended to set the `priorityClassName` to `system-cluster-critical` for the Dapr control plane pods. If you have your own custom priority classes for your applications, ensure they have a lower priority value than the one assigned to the Dapr control plane to maintain system stability and prevent disruption of core Dapr services.
|
||||
|
||||
For a new Dapr control plane deployment, the `system-cluster-critical` priority class mode can be set via the helm value `global.priorityClassName`.
|
||||
|
||||
|
@ -155,7 +157,6 @@ spec:
|
|||
values: [system-cluster-critical]
|
||||
```
|
||||
|
||||
|
||||
## Deploy Dapr with Helm
|
||||
|
||||
[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
|
||||
|
|
|
@ -149,7 +149,7 @@ services:
|
|||
- type: tmpfs
|
||||
target: /data
|
||||
tmpfs:
|
||||
size: "64m"
|
||||
|
||||
networks:
|
||||
hello-dapr: null
|
||||
|
|
|
@ -72,7 +72,7 @@ spec:
|
|||
|
||||
## Configuring metrics for error codes
|
||||
|
||||
You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. [A new metric called `error_code_total` is recorded]({{< ref errors-overview.md >}}), which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories.
|
||||
|
||||
Example configuration:
|
||||
```yaml
|
||||
|
|
|
@ -63,7 +63,7 @@ You must propagate the headers from `service A` to `service B`. For example: `In
|
|||
|
||||
##### Pub/sub messages
|
||||
|
||||
Dapr generates the trace headers in the published message topic. For `rawPayload` messages, it is possible to specify the `traceparent` header to propagate the tracing information. These trace headers are propagated to any services listening on that topic.
|
||||
|
||||
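A `traceparent` value follows the W3C Trace Context format `version-traceid-parentid-flags`. The Python sketch below composes one; note that generating random IDs is only appropriate when starting a new trace, and when propagating an existing trace you must reuse the incoming trace and parent IDs:

```python
import secrets

def make_traceparent(sampled: bool = True) -> str:
    """Compose a W3C traceparent header value: version-traceid-parentid-flags."""
    trace_id = secrets.token_hex(16)   # 32 lowercase hex chars
    parent_id = secrets.token_hex(8)   # 16 lowercase hex chars
    flags = "01" if sampled else "00"  # 01 = sampled
    return f"00-{trace_id}-{parent_id}-{flags}"
```

Attaching this value as the `traceparent` header (or publish metadata, depending on your SDK) lets subscribers of a `rawPayload` topic join the same trace.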
#### Propagating multiple different service calls
|
||||
|
||||
|
|
|
@ -1,330 +0,0 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Resiliency policies"
|
||||
linkTitle: "Policies"
|
||||
weight: 200
|
||||
description: "Configure resiliency policies for timeouts, retries, and circuit breakers"
|
||||
---
|
||||
|
||||
Define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the `targets` section in the resiliency spec.
|
||||
|
||||
> Note: Dapr offers default retries for specific APIs. [See here]({{< ref "#overriding-default-retries" >}}) to learn how you can overwrite default retry logic with user defined retry policies.
|
||||
|
||||
## Timeouts
|
||||
|
||||
Timeouts are optional policies that can be used to terminate long-running operations early. If an operation exceeds its timeout duration:
|
||||
|
||||
- The operation in progress is terminated (if possible).
|
||||
- An error is returned.
|
||||
|
||||
Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`. Timeouts have no set maximum value.
|
||||
|
||||
Example:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
# Timeouts are simple named durations.
|
||||
timeouts:
|
||||
general: 5s
|
||||
important: 60s
|
||||
largeResponse: 10s
|
||||
```
|
||||
|
||||
If you don't specify a timeout value, the policy does not enforce a time and defaults to whatever you set up per the request client.
|
||||
|
||||
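To illustrate how these Go-style duration strings decompose, here is a hypothetical Python helper that parses one into seconds (Dapr itself parses them in Go via `time.ParseDuration`; this sketch is only an illustration and supports a subset of units):

```python
import re

# Seconds per supported unit; "ms" must be tried before "m" in the pattern.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 0.001}

def parse_duration(value: str) -> float:
    """Parse a Go-style duration string (e.g. '1h30m', '15s', '200ms') into seconds."""
    matches = re.findall(r"(\d+(?:\.\d+)?)(h|ms|m|s)", value)
    # Reject strings with leftover characters the pattern did not consume.
    if not matches or "".join(n + u for n, u in matches) != value:
        raise ValueError(f"invalid duration: {value!r}")
    return sum(float(n) * _UNITS[u] for n, u in matches)

print(parse_duration("1h30m"))  # 5400.0
```
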
## Retries
|
||||
|
||||
With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy.
|
||||
|
||||
{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
|
||||
Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
|
||||
{{% /alert %}}
|
||||
|
||||
The following retry options are configurable:
|
||||
|
||||
| Retry option | Description |
|
||||
| ------------ | ----------- |
|
||||
| `policy` | Determines the back-off and retry interval strategy. Valid values are `constant` and `exponential`.<br/>Defaults to `constant`. |
|
||||
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.<br/>Valid values are of the form `200ms`, `15s`, `2m`, etc.<br/> Defaults to `5s`.|
|
||||
| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow.<br/>Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
|
||||
| `maxRetries` | The maximum number of retries to attempt. <br/>`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).<br/>Defaults to `-1`. |
|
||||
| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "429,501-503"<br/>Default: empty string `""` or field is not set. Retries on all HTTP errors. |
|
||||
| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "1,501-503"<br/>Default: empty string `""` or field is not set. Retries on all gRPC errors. |
|
||||
|
||||
|
||||
{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
|
||||
The field values should follow the format specified in the field description or in "Example 2" below.
|
||||
An incorrectly formatted value produces an error log ("Could not read resiliency policy"), and the `daprd` startup sequence proceeds.
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
The exponential back-off window uses the following formula:
|
||||
|
||||
```
|
||||
BackOffDuration = PreviousBackOffDuration * (Random value from 0.5 to 1.5) * 1.5
|
||||
if BackOffDuration > maxInterval {
|
||||
BackoffDuration = maxInterval
|
||||
}
|
||||
```
|
||||
|
||||
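The formula above can be simulated directly. In this Python sketch, the initial duration and the number of attempts are illustrative assumptions (Dapr's actual seed value is internal to the runtime):

```python
import random

def backoff_schedule(initial: float, max_interval: float, retries: int,
                     rng: random.Random):
    """Yield successive back-off durations per the exponential formula above."""
    duration = initial
    for _ in range(retries):
        # Jittered growth: random factor in [0.5, 1.5), times 1.5, capped.
        duration = min(duration * rng.uniform(0.5, 1.5) * 1.5, max_interval)
        yield duration

# Example: start at 0.5s, cap at 15s (maxInterval), 10 retries.
schedule = list(backoff_schedule(0.5, 15.0, 10, random.Random(42)))
```

Note that because of the jitter factor, individual intervals are not strictly increasing, but they never exceed `maxInterval`.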
Example:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
# Retries are named templates for retry configurations and are instantiated for the life of the operation.
|
||||
retries:
|
||||
pubsubRetry:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 10
|
||||
|
||||
retryForever:
|
||||
policy: exponential
|
||||
maxInterval: 15s
|
||||
maxRetries: -1 # Retry indefinitely
|
||||
```
|
||||
|
||||
Example 2:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
retry5xxOnly:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 3
|
||||
matching:
|
||||
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
|
||||
gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
|
||||
```
|
||||
|
||||
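To illustrate how the `matching` fields behave, here is a hypothetical Python sketch of range parsing and retry matching (the function names are invented for illustration; this is not Dapr's implementation):

```python
def parse_code_ranges(spec: str):
    """Parse a matching string like '429,500-599' into (start, end) pairs."""
    ranges = []
    for part in spec.split(","):
        if "-" in part:
            start, end = part.split("-")
            ranges.append((int(start), int(end)))
        else:
            ranges.append((int(part), int(part)))
    return ranges

def should_retry(code: int, spec: str) -> bool:
    """An empty spec retries every error, mirroring the documented default."""
    if not spec:
        return True
    return any(lo <= code <= hi for lo, hi in parse_code_ranges(spec))
```

With `spec = "429,500-599"`, a 502 response is retried while a 404 is not.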
## Circuit Breakers
|
||||
|
||||
Circuit Breaker (CB) policies are used when other applications/services/components are experiencing elevated failure rates. CBs monitor the requests and shut off all traffic to the impacted service when certain criteria are met ("open" state). By doing this, CBs give the service time to recover from its outage instead of flooding it with events. The CB can also allow partial traffic through to see if the system has healed ("half-open" state). Once requests are succeeding again, the CB moves into the "closed" state and allows traffic to resume completely.
|
||||
|
||||
| Retry option | Description |
|
||||
| ------------ | ----------- |
|
||||
| `maxRequests` | The maximum number of requests allowed to pass through when the CB is half-open (recovering from failure). Defaults to `1`. |
|
||||
| `interval` | The cyclical period of time used by the CB to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
|
||||
| `timeout` | The period of the open state (directly after failure) until the CB switches to half-open. Defaults to `60s`. |
|
||||
| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. |
|
||||
|
||||
Example:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
circuitBreakers:
|
||||
pubsubCB:
|
||||
maxRequests: 1
|
||||
interval: 8s
|
||||
timeout: 45s
|
||||
trip: consecutiveFailures > 8
|
||||
```
|
||||
|
||||
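The closed/open/half-open lifecycle described above can be sketched as a toy state machine. This hypothetical Python model only implements the `consecutiveFailures > N` trip condition; real Dapr trip conditions are CEL expressions evaluated by the runtime:

```python
class CircuitBreaker:
    """Toy CB mirroring maxRequests, timeout, and a consecutiveFailures trip."""

    def __init__(self, trip_after: int = 5, timeout: float = 60.0,
                 max_requests: int = 1):
        self.trip_after = trip_after        # trip: consecutiveFailures > trip_after
        self.timeout = timeout              # open -> half-open after this many seconds
        self.max_requests = max_requests    # requests allowed while half-open
        self.consecutive_failures = 0
        self.state = "closed"
        self.opened_at = 0.0
        self.half_open_inflight = 0

    def allow(self, now: float) -> bool:
        """Return True if a request may pass through at time `now`."""
        if self.state == "open":
            if now - self.opened_at >= self.timeout:
                self.state = "half-open"
                self.half_open_inflight = 0
            else:
                return False
        if self.state == "half-open":
            if self.half_open_inflight >= self.max_requests:
                return False
            self.half_open_inflight += 1
        return True

    def record(self, success: bool, now: float):
        """Record the outcome of a request that was allowed through."""
        if success:
            self.consecutive_failures = 0
            self.state = "closed"
        else:
            self.consecutive_failures += 1
            if self.state == "half-open" or self.consecutive_failures > self.trip_after:
                self.state = "open"
                self.opened_at = now
```

After enough consecutive failures the breaker opens and rejects traffic; once `timeout` elapses, a limited number of probe requests decide whether it closes again.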
## Overriding default retries
|
||||
|
||||
Dapr provides default retries for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
|
||||
|
||||
> Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.
|
||||
|
||||
Below is a table that describes Dapr's default retries and the policy keywords to override them:
|
||||
|
||||
| Capability | Override Keyword | Default Retry Behavior | Description |
|
||||
| ------------------ | ------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------- |
|
||||
| Service Invocation | DaprBuiltInServiceRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (a service invocation method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
|
||||
| Actors | DaprBuiltInActorRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (an actor method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
|
||||
| Actor Reminders | DaprBuiltInActorReminderRetries | Per call retries are performed with an exponential backoff with an initial interval of 500ms, up to a maximum of 60s for a duration of 15mins | Requests that fail to persist an actor reminder to a state store |
|
||||
| Initialization Retries | DaprBuiltInInitializationRetries | Per call retries are performed 3 times with an exponential backoff, an initial interval of 500ms and for a duration of 10s | Failures when making a request to an application to retrieve a given spec. For example, failure to retrieve a subscription, component or resiliency specification |
|
||||
|
||||
|
||||
The resiliency spec example below shows overriding the default retries for _all_ service invocation requests by using the reserved, named keyword 'DaprBuiltInServiceRetries'.
|
||||
|
||||
Also defined is a retry policy called 'retryForever' that is only applied to the appB target. appB uses the 'retryForever' retry policy, while all other application service invocation retry failures use the overridden 'DaprBuiltInServiceRetries' default policy.
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
DaprBuiltInServiceRetries: # Overrides default retry behavior for service-to-service calls
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 10
|
||||
|
||||
retryForever: # A user defined retry policy replaces default retries. Targets rely solely on the applied policy.
|
||||
policy: exponential
|
||||
maxInterval: 15s
|
||||
maxRetries: -1 # Retry indefinitely
|
||||
|
||||
targets:
|
||||
apps:
|
||||
appB: # app-id of the target service
|
||||
retry: retryForever
|
||||
```
|
||||
|
||||
## Setting default policies
|
||||
|
||||
In a resiliency spec, you can set default policies, which have a broad scope. This is done through reserved keywords that tell Dapr when to apply the policy. There are three default policy types:
|
||||
|
||||
- `DefaultRetryPolicy`
|
||||
- `DefaultTimeoutPolicy`
|
||||
- `DefaultCircuitBreakerPolicy`
|
||||
|
||||
If these policies are defined, they are used for every operation to a service, application, or component. They can also be made more specific by appending additional keywords. The specific policies follow the pattern `Default%sRetryPolicy`, `Default%sTimeoutPolicy`, and `Default%sCircuitBreakerPolicy`, where `%s` is replaced by the target of the policy.
|
||||
|
||||
Below is a table of all possible default policy keywords and how they translate into a policy name.
|
||||
|
||||
| Keyword | Target Operation | Example Policy Name |
|
||||
| -------------------------------- | ---------------------------------------------------- | ----------------------------------------------------------- |
|
||||
| `App` | Service invocation. | `DefaultAppRetryPolicy` |
|
||||
| `Actor` | Actor invocation. | `DefaultActorTimeoutPolicy` |
|
||||
| `Component` | All component operations. | `DefaultComponentCircuitBreakerPolicy` |
|
||||
| `ComponentInbound` | All inbound component operations. | `DefaultComponentInboundRetryPolicy` |
|
||||
| `ComponentOutbound` | All outbound component operations. | `DefaultComponentOutboundTimeoutPolicy` |
|
||||
| `StatestoreComponentOutbound` | All statestore component operations. | `DefaultStatestoreComponentOutboundCircuitBreakerPolicy` |
|
||||
| `PubsubComponentOutbound` | All outbound pubsub (publish) component operations. | `DefaultPubsubComponentOutboundRetryPolicy` |
|
||||
| `PubsubComponentInbound` | All inbound pubsub (subscribe) component operations. | `DefaultPubsubComponentInboundTimeoutPolicy` |
|
||||
| `BindingComponentOutbound` | All outbound binding (invoke) component operations. | `DefaultBindingComponentOutboundCircuitBreakerPolicy` |
|
||||
| `BindingComponentInbound` | All inbound binding (read) component operations. | `DefaultBindingComponentInboundRetryPolicy` |
|
||||
| `SecretstoreComponentOutbound` | All secretstore component operations. | `DefaultSecretstoreComponentTimeoutPolicy` |
|
||||
| `ConfigurationComponentOutbound` | All configuration component operations. | `DefaultConfigurationComponentOutboundCircuitBreakerPolicy` |
|
||||
| `LockComponentOutbound` | All lock component operations. | `DefaultLockComponentOutboundRetryPolicy` |
|
||||
|
||||
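The naming convention above can be assembled mechanically. Here is a hypothetical helper that builds a reserved keyword from a target keyword and a policy type:

```python
def default_policy_name(policy_type: str, target: str = "") -> str:
    """Build a reserved default-policy keyword, e.g. DefaultAppRetryPolicy.

    policy_type is one of 'Retry', 'Timeout', 'CircuitBreaker'; target is an
    optional keyword from the table above ('App', 'Actor', 'Component', ...).
    An empty target yields the broad defaults like DefaultRetryPolicy.
    """
    return f"Default{target}{policy_type}Policy"

print(default_policy_name("Retry", "App"))  # DefaultAppRetryPolicy
```
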
### Policy hierarchy resolution
|
||||
|
||||
Default policies are applied if the operation being executed matches the policy type and if there is no more specific policy targeting it. For each target type (app, actor, and component), the policy with the highest priority is a Named Policy, one that targets that construct specifically.
|
||||
|
||||
If none exists, the policies are applied from most specific to most broad.
|
||||
|
||||
#### How default policies and built-in retries work together
|
||||
|
||||
In the case of the [built-in retries]({{< ref "policies.md#override-default-retries" >}}), default policies do not stop the built-in retry policies from running. Both are used together but only under specific circumstances.
|
||||
|
||||
For service and actor invocation, the built-in retries deal specifically with issues connecting to the remote sidecar (when needed). As these are important to the stability of the Dapr runtime, they are not disabled **unless** a named policy is specifically referenced for an operation. In some instances, there may be additional retries from both the built-in retry and the default retry policy, but this prevents an overly weak default policy from reducing the sidecar's availability/success rate.
|
||||
|
||||
Policy resolution hierarchy for applications, from most specific to most broad:
|
||||
|
||||
1. Named Policies in App Targets
|
||||
2. Default App Policies / Built-In Service Retries
|
||||
3. Default Policies / Built-In Service Retries
|
||||
|
||||
Policy resolution hierarchy for actors, from most specific to most broad:
|
||||
|
||||
1. Named Policies in Actor Targets
|
||||
2. Default Actor Policies / Built-In Actor Retries
|
||||
3. Default Policies / Built-In Actor Retries
|
||||
|
||||
Policy resolution hierarchy for components, from most specific to most broad:
|
||||
|
||||
1. Named Policies in Component Targets
|
||||
2. Default Component Type + Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
3. Default Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
4. Default Component Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
5. Default Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
|
||||
As an example, take the following solution consisting of three applications, three components and two actor types:
|
||||
|
||||
Applications:
|
||||
|
||||
- AppA
|
||||
- AppB
|
||||
- AppC
|
||||
|
||||
Components:
|
||||
|
||||
- Redis Pubsub: pubsub
|
||||
- Redis statestore: statestore
|
||||
- CosmosDB Statestore: actorstore
|
||||
|
||||
Actors:
|
||||
|
||||
- EventActor
|
||||
- SummaryActor
|
||||
|
||||
Below is a policy that uses both default and named policies and applies them to the targets.
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
# Global Retry Policy
|
||||
DefaultRetryPolicy:
|
||||
policy: constant
|
||||
duration: 1s
|
||||
maxRetries: 3
|
||||
|
||||
# Global Retry Policy for Apps
|
||||
DefaultAppRetryPolicy:
|
||||
policy: constant
|
||||
duration: 100ms
|
||||
maxRetries: 5
|
||||
|
||||
# Global Retry Policy for Actors
|
||||
DefaultActorRetryPolicy:
|
||||
policy: exponential
|
||||
maxInterval: 15s
|
||||
maxRetries: 10
|
||||
|
||||
# Global Retry Policy for Inbound Component operations
|
||||
DefaultComponentInboundRetryPolicy:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 5
|
||||
|
||||
# Global Retry Policy for Statestores
|
||||
DefaultStatestoreComponentOutboundRetryPolicy:
|
||||
policy: exponential
|
||||
maxInterval: 60s
|
||||
maxRetries: -1
|
||||
|
||||
# Named policy
|
||||
fastRetries:
|
||||
policy: constant
|
||||
duration: 10ms
|
||||
maxRetries: 3
|
||||
|
||||
# Named policy
|
||||
retryForever:
|
||||
policy: exponential
|
||||
maxInterval: 10s
|
||||
maxRetries: -1
|
||||
|
||||
targets:
|
||||
apps:
|
||||
appA:
|
||||
retry: fastRetries
|
||||
|
||||
appB:
|
||||
retry: retryForever
|
||||
|
||||
actors:
|
||||
EventActor:
|
||||
retry: retryForever
|
||||
|
||||
components:
|
||||
actorstore:
|
||||
retry: fastRetries
|
||||
```
|
||||
|
||||
The table below breaks down which policies are applied when attempting to call the various targets in this solution.
|
||||
|
||||
| Target | Policy Used |
|
||||
| ------------------ | ----------------------------------------------- |
|
||||
| AppA | fastRetries |
|
||||
| AppB | retryForever |
|
||||
| AppC | DefaultAppRetryPolicy / DaprBuiltInActorRetries |
|
||||
| pubsub - Publish | DefaultRetryPolicy |
|
||||
| pubsub - Subscribe | DefaultComponentInboundRetryPolicy |
|
||||
| statestore | DefaultStatestoreComponentOutboundRetryPolicy |
|
||||
| actorstore | fastRetries |
|
||||
| EventActor | retryForever |
|
||||
| SummaryActor | DefaultActorRetryPolicy |
|
||||
|
||||
## Next steps
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -0,0 +1,9 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Resiliency policies"
|
||||
linkTitle: "Policies"
|
||||
weight: 200
|
||||
description: "Configure resiliency policies for timeouts, retries, and circuit breakers"
|
||||
---
|
||||
|
||||
Define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the [`targets` section in the resiliency spec]({{< ref targets.md >}}).
|
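A minimal sketch showing all three policy kinds side by side, using illustrative values drawn from the policy pages that follow:

```yaml
spec:
  policies:
    timeouts:
      general: 5s
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10
    circuitBreakers:
      pubsubCB:
        maxRequests: 1
        interval: 8s
        timeout: 45s
        trip: consecutiveFailures > 8
```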
|
@ -0,0 +1,49 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Circuit breaker resiliency policies"
|
||||
linkTitle: "Circuit breakers"
|
||||
weight: 30
|
||||
description: "Configure resiliency policies for circuit breakers"
|
||||
---
|
||||
|
||||
Circuit breaker policies are used when other applications/services/components are experiencing elevated failure rates. Circuit breakers reduce load by monitoring the requests and shutting off all traffic to the impacted service when certain criteria are met.
|
||||
|
||||
After a certain number of requests fail, circuit breakers "trip" or open to prevent cascading failures. By doing this, circuit breakers give the service time to recover from its outage instead of flooding it with events.
|
||||
|
||||
The circuit breaker can also enter a "half-open" state, allowing partial traffic through to see if the system has healed.
|
||||
|
||||
Once requests succeed again, the circuit breaker returns to the "closed" state and allows traffic to resume completely.
|
||||
|
||||
## Circuit breaker policy format
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
circuitBreakers:
|
||||
pubsubCB:
|
||||
maxRequests: 1
|
||||
interval: 8s
|
||||
timeout: 45s
|
||||
trip: consecutiveFailures > 8
|
||||
```
|
||||
|
||||
## Spec metadata
|
||||
|
||||
| Option | Description |
|
||||
| ------------ | ----------- |
|
||||
| `maxRequests` | The maximum number of requests allowed to pass through when the circuit breaker is half-open (recovering from failure). Defaults to `1`. |
|
||||
| `interval` | The cyclical period of time used by the circuit breaker to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
|
||||
| `timeout` | The period of the open state (directly after failure) until the circuit breaker switches to half-open. Defaults to `60s`. |
|
||||
| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement evaluated by the circuit breaker. When the statement evaluates to true, the circuit breaker trips and becomes open. Defaults to `consecutiveFailures > 5`. Other available variables are `requests` and `totalFailures`, where `requests` represents the number of calls (successful or failed) before the circuit opens and `totalFailures` represents the total (not necessarily consecutive) number of failed attempts before the circuit opens. Examples: `requests > 5`, `totalFailures > 3`. |
|
||||
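To put a circuit breaker into effect, reference it from a target. For example, the sketch below applies the `pubsubCB` policy above to the outbound operations of a component (the component name `pubsub` is illustrative):

```yaml
spec:
  targets:
    components:
      pubsub:  # illustrative component name
        outbound:
          circuitBreaker: pubsubCB
```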
|
||||
## Next steps
|
||||
- [Learn more about default resiliency policies]({{< ref default-policies.md >}})
|
||||
- Learn more about:
|
||||
- [Retry policies]({{< ref retries-overview.md >}})
|
||||
- [Timeout policies]({{< ref timeouts.md >}})
|
||||
|
||||
## Related links
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -0,0 +1,173 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Default resiliency policies"
|
||||
linkTitle: "Default policies"
|
||||
weight: 40
|
||||
description: "Learn more about the default resiliency policies for timeouts, retries, and circuit breakers"
|
||||
---
|
||||
|
||||
In resiliency, you can set default policies, which have a broad scope. This is done through reserved keywords that let Dapr know when to apply the policy. There are 3 default policy types:
|
||||
|
||||
- `DefaultRetryPolicy`
|
||||
- `DefaultTimeoutPolicy`
|
||||
- `DefaultCircuitBreakerPolicy`
|
||||
|
||||
If these policies are defined, they are used for every operation to a service, application, or component. They can also be made more specific by appending additional keywords. The more specific policies follow the pattern `Default%sRetryPolicy`, `Default%sTimeoutPolicy`, and `Default%sCircuitBreakerPolicy`, where `%s` is replaced by the target of the policy.
|
||||
|
||||
Below is a table of all possible default policy keywords and how they translate into a policy name.
|
||||
|
||||
| Keyword | Target Operation | Example Policy Name |
|
||||
| -------------------------------- | ---------------------------------------------------- | ----------------------------------------------------------- |
|
||||
| `App` | Service invocation. | `DefaultAppRetryPolicy` |
|
||||
| `Actor` | Actor invocation. | `DefaultActorTimeoutPolicy` |
|
||||
| `Component` | All component operations. | `DefaultComponentCircuitBreakerPolicy` |
|
||||
| `ComponentInbound` | All inbound component operations. | `DefaultComponentInboundRetryPolicy` |
|
||||
| `ComponentOutbound` | All outbound component operations. | `DefaultComponentOutboundTimeoutPolicy` |
|
||||
| `StatestoreComponentOutbound` | All statestore component operations. | `DefaultStatestoreComponentOutboundCircuitBreakerPolicy` |
|
||||
| `PubsubComponentOutbound`       | All outbound pubsub (publish) component operations.  | `DefaultPubsubComponentOutboundRetryPolicy`                 |
|
||||
| `PubsubComponentInbound` | All inbound pubsub (subscribe) component operations. | `DefaultPubsubComponentInboundTimeoutPolicy` |
|
||||
| `BindingComponentOutbound` | All outbound binding (invoke) component operations. | `DefaultBindingComponentOutboundCircuitBreakerPolicy` |
|
||||
| `BindingComponentInbound` | All inbound binding (read) component operations. | `DefaultBindingComponentInboundRetryPolicy` |
|
||||
| `SecretstoreComponentOutbound`   | All secretstore component operations.                | `DefaultSecretstoreComponentOutboundTimeoutPolicy`          |
|
||||
| `ConfigurationComponentOutbound` | All configuration component operations. | `DefaultConfigurationComponentOutboundCircuitBreakerPolicy` |
|
||||
| `LockComponentOutbound` | All lock component operations. | `DefaultLockComponentOutboundRetryPolicy` |
|
||||
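Default policies are defined like any other policy, just under the reserved names from the table above. For example (illustrative values, reusing settings from the circuit breaker page):

```yaml
spec:
  policies:
    timeouts:
      # Reserved name: applies to any operation without a more specific policy
      DefaultTimeoutPolicy: 30s
    circuitBreakers:
      # Reserved name: applies to all component operations
      DefaultComponentCircuitBreakerPolicy:
        maxRequests: 1
        interval: 8s
        timeout: 45s
        trip: consecutiveFailures > 8
```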
|
||||
## Policy hierarchy resolution
|
||||
|
||||
Default policies are applied if the operation being executed matches the policy type and if there is no more specific policy targeting it. For each target type (app, actor, and component), the policy with the highest priority is a Named Policy, one that targets that construct specifically.
|
||||
|
||||
If none exists, the policies are applied from most specific to most broad.
|
||||
|
||||
## How default policies and built-in retries work together
|
||||
|
||||
In the case of the [built-in retries]({{< ref override-default-retries.md >}}), default policies do not stop the built-in retry policies from running. Both are used together but only under specific circumstances.
|
||||
|
||||
For service and actor invocation, the built-in retries deal specifically with issues connecting to the remote sidecar (when needed). As these are important to the stability of the Dapr runtime, they are not disabled **unless** a named policy is specifically referenced for an operation. In some instances, there may be additional retries from both the built-in retry and the default retry policy, but this prevents an overly weak default policy from reducing the sidecar's availability/success rate.
|
||||
|
||||
Policy resolution hierarchy for applications, from most specific to most broad:
|
||||
|
||||
1. Named Policies in App Targets
|
||||
2. Default App Policies / Built-In Service Retries
|
||||
3. Default Policies / Built-In Service Retries
|
||||
|
||||
Policy resolution hierarchy for actors, from most specific to most broad:
|
||||
|
||||
1. Named Policies in Actor Targets
|
||||
2. Default Actor Policies / Built-In Actor Retries
|
||||
3. Default Policies / Built-In Actor Retries
|
||||
|
||||
Policy resolution hierarchy for components, from most specific to most broad:
|
||||
|
||||
1. Named Policies in Component Targets
|
||||
2. Default Component Type + Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
3. Default Component Direction Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
4. Default Component Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
5. Default Policies / Built-In Actor Reminder Retries (if applicable)
|
||||
|
||||
As an example, take the following solution consisting of three applications, three components and two actor types:
|
||||
|
||||
Applications:
|
||||
|
||||
- AppA
|
||||
- AppB
|
||||
- AppC
|
||||
|
||||
Components:
|
||||
|
||||
- Redis Pubsub: pubsub
|
||||
- Redis statestore: statestore
|
||||
- CosmosDB Statestore: actorstore
|
||||
|
||||
Actors:
|
||||
|
||||
- EventActor
|
||||
- SummaryActor
|
||||
|
||||
Below is a policy that uses both default and named policies and applies them to the targets.
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
# Global Retry Policy
|
||||
DefaultRetryPolicy:
|
||||
policy: constant
|
||||
duration: 1s
|
||||
maxRetries: 3
|
||||
|
||||
# Global Retry Policy for Apps
|
||||
DefaultAppRetryPolicy:
|
||||
policy: constant
|
||||
duration: 100ms
|
||||
maxRetries: 5
|
||||
|
||||
# Global Retry Policy for Actors
|
||||
DefaultActorRetryPolicy:
|
||||
policy: exponential
|
||||
maxInterval: 15s
|
||||
maxRetries: 10
|
||||
|
||||
# Global Retry Policy for Inbound Component operations
|
||||
DefaultComponentInboundRetryPolicy:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 5
|
||||
|
||||
# Global Retry Policy for Statestores
|
||||
DefaultStatestoreComponentOutboundRetryPolicy:
|
||||
policy: exponential
|
||||
maxInterval: 60s
|
||||
maxRetries: -1
|
||||
|
||||
# Named policy
|
||||
fastRetries:
|
||||
policy: constant
|
||||
duration: 10ms
|
||||
maxRetries: 3
|
||||
|
||||
# Named policy
|
||||
retryForever:
|
||||
policy: exponential
|
||||
maxInterval: 10s
|
||||
maxRetries: -1
|
||||
|
||||
targets:
|
||||
apps:
|
||||
appA:
|
||||
retry: fastRetries
|
||||
|
||||
appB:
|
||||
retry: retryForever
|
||||
|
||||
actors:
|
||||
EventActor:
|
||||
retry: retryForever
|
||||
|
||||
components:
|
||||
actorstore:
|
||||
retry: fastRetries
|
||||
```
|
||||
|
||||
The table below breaks down which policies are applied when attempting to call the various targets in this solution.
|
||||
|
||||
| Target | Policy Used |
|
||||
| ------------------ | ----------------------------------------------- |
|
||||
| AppA | fastRetries |
|
||||
| AppB | retryForever |
|
||||
| AppC | DefaultAppRetryPolicy / DaprBuiltInActorRetries |
|
||||
| pubsub - Publish | DefaultRetryPolicy |
|
||||
| pubsub - Subscribe | DefaultComponentInboundRetryPolicy |
|
||||
| statestore | DefaultStatestoreComponentOutboundRetryPolicy |
|
||||
| actorstore | fastRetries |
|
||||
| EventActor | retryForever |
|
||||
| SummaryActor | DefaultActorRetryPolicy |
|
||||
|
||||
## Next steps
|
||||
|
||||
[Learn how to override default retry policies.]({{< ref override-default-retries.md >}})
|
||||
|
||||
## Related links
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Retry and back-off resiliency policies"
|
||||
linkTitle: "Retries"
|
||||
weight: 20
|
||||
description: "Configure resiliency policies for retries and back-offs"
|
||||
---
|
|
@ -0,0 +1,51 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Override default retry resiliency policies"
|
||||
linkTitle: "Override default retries"
|
||||
weight: 20
|
||||
description: "Learn how to override the default retry resiliency policies for specific APIs"
|
||||
---
|
||||
|
||||
Dapr provides [default retries]({{< ref default-policies.md >}}) for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
|
||||
|
||||
> Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.
|
||||
|
||||
Below is a table that describes Dapr's default retries and the policy keywords to override them:
|
||||
|
||||
| Capability | Override Keyword | Default Retry Behavior | Description |
|
||||
| ------------------ | ------------------------- | ------------------------------ | ----------------------------------------------------------------------------------------------------------- |
|
||||
| Service Invocation | DaprBuiltInServiceRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (a service invocation method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
|
||||
| Actors | DaprBuiltInActorRetries | Per call retries are performed with a backoff interval of 1 second, up to a threshold of 3 times. | Sidecar-to-sidecar requests (an actor method call) that fail and result in a gRPC code `Unavailable` or `Unauthenticated` |
|
||||
| Actor Reminders | DaprBuiltInActorReminderRetries | Per call retries are performed with an exponential backoff with an initial interval of 500ms, up to a maximum of 60s for a duration of 15mins | Requests that fail to persist an actor reminder to a state store |
|
||||
| Initialization Retries | DaprBuiltInInitializationRetries | Per call retries are performed 3 times with an exponential backoff, an initial interval of 500ms and for a duration of 10s | Failures when making a request to an application to retrieve a given spec. For example, failure to retrieve a subscription, component or resiliency specification |
|
||||
|
||||
|
||||
The resiliency spec example below shows overriding the default retries for _all_ service invocation requests by using the reserved, named keyword 'DaprBuiltInServiceRetries'.
|
||||
|
||||
Also defined is a retry policy called 'retryForever' that is only applied to the appB target. appB uses the 'retryForever' retry policy, while all other application service invocation retry failures use the overridden 'DaprBuiltInServiceRetries' default policy.
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
DaprBuiltInServiceRetries: # Overrides default retry behavior for service-to-service calls
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 10
|
||||
|
||||
retryForever: # A user defined retry policy replaces default retries. Targets rely solely on the applied policy.
|
||||
policy: exponential
|
||||
maxInterval: 15s
|
||||
maxRetries: -1 # Retry indefinitely
|
||||
|
||||
targets:
|
||||
apps:
|
||||
appB: # app-id of the target service
|
||||
retry: retryForever
|
||||
```
|
||||
|
||||
## Related links
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -0,0 +1,153 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Retry resiliency policies"
|
||||
linkTitle: "Overview"
|
||||
weight: 10
|
||||
description: "Configure resiliency policies for retries"
|
||||
---
|
||||
|
||||
Requests can fail due to transient errors, like encountering network congestion, reroutes to overloaded instances, and more. Sometimes, requests can fail due to other resiliency policies set in place, like triggering a defined timeout or circuit breaker policy.
|
||||
|
||||
In these cases, configuring `retries` can either:
|
||||
- Send the same request to a different instance, or
|
||||
- Retry sending the request after the condition has cleared.
|
||||
|
||||
Retries and timeouts work together, with timeouts ensuring your system fails fast when needed, and retries recovering from temporary glitches.
|
||||
|
||||
Dapr provides [default resiliency policies]({{< ref default-policies.md >}}), which you can [override with user-defined retry policies.]({{< ref override-default-retries.md >}})
|
||||
|
||||
{{% alert title="Important" color="warning" %}}
|
||||
Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
|
||||
{{% /alert %}}
|
||||
|
||||
## Retry policy format
|
||||
|
||||
**Example 1**
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
# Retries are named templates for retry configurations and are instantiated for the life of the operation.
|
||||
retries:
|
||||
pubsubRetry:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 10
|
||||
|
||||
retryForever:
|
||||
policy: exponential
|
||||
maxInterval: 15s
|
||||
maxRetries: -1 # Retry indefinitely
|
||||
```
|
||||
|
||||
**Example 2**
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
retry5xxOnly:
|
||||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 3
|
||||
matching:
|
||||
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
|
||||
gRPCStatusCodes: "1-4,8-11,13,14" # retry the gRPC status codes in these ranges and the individually listed codes.
|
||||
```
|
||||
|
||||
## Spec metadata
|
||||
|
||||
The following retry options are configurable:
|
||||
|
||||
| Retry option | Description |
|
||||
| ------------ | ----------- |
|
||||
| `policy` | Determines the back-off and retry interval strategy. Valid values are `constant` and `exponential`.<br/>Defaults to `constant`. |
|
||||
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.<br/>Valid values are of the form `200ms`, `15s`, `2m`, etc.<br/> Defaults to `5s`.|
|
||||
| `maxInterval` | Determines the maximum interval between retries to which the [`exponential` back-off policy](#exponential-back-off-policy) can grow.<br/>Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
|
||||
| `maxRetries` | The maximum number of retries to attempt. <br/>`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).<br/>Defaults to `-1`. |
|
||||
| `matching.httpStatusCodes` | Optional: a comma-separated string of [HTTP status codes or code ranges to retry](#retry-status-codes). Status codes not listed are not retried.<br/>Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "429,501-503"<br/>Default: empty string `""` or field is not set. Retries on all HTTP errors. |
|
||||
| `matching.gRPCStatusCodes` | Optional: a comma-separated string of [gRPC status codes or code ranges to retry](#retry-status-codes). Status codes not listed are not retried.<br/>Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "4,8,14"<br/>Default: empty string `""` or field is not set. Retries on all gRPC errors. |
|
||||
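Named retry policies like the ones above take effect once they are referenced from a target. For example, the sketch below applies `pubsubRetry` to the outbound operations of a component (the component name `pubsub` is illustrative):

```yaml
spec:
  targets:
    components:
      pubsub:  # illustrative component name
        outbound:
          retry: pubsubRetry
```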
|
||||
|
||||
## Exponential back-off policy
|
||||
|
||||
The exponential back-off window uses the following formula:
|
||||
|
||||
```
|
||||
BackOffDuration = PreviousBackOffDuration * (Random value from 0.5 to 1.5) * 1.5
|
||||
if BackOffDuration > maxInterval {
|
||||
BackOffDuration = maxInterval
|
||||
}
|
||||
```
|
||||
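As a rough worked example, assume an initial back-off of 500ms (an assumption, not specified by the formula above), a `maxInterval` of 15s, and a jitter factor of exactly 1 (in practice the random value varies between 0.5 and 1.5):

```
attempt 1:  500ms   (initial interval)
attempt 2:  750ms   (0.5s * 1.5)
attempt 3:  1.125s
attempt 4:  ~1.7s
attempt 5:  ~2.5s
attempt 6:  ~3.8s
attempt 7:  ~5.7s
attempt 8:  ~8.5s
attempt 9:  ~12.8s
attempt 10: 15s     (capped at maxInterval)
```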
|
||||
## Retry status codes
|
||||
|
||||
When applications span multiple services, especially in dynamic environments like Kubernetes, services can disappear for all kinds of reasons and network calls can start hanging. Status codes provide a glimpse into your operations and where they may have failed in production.
|
||||
|
||||
### HTTP
|
||||
|
||||
The following table includes some examples of HTTP status codes you may receive and whether you should or should not retry certain operations.
|
||||
|
||||
| HTTP Status Code | Retry Recommended? | Description |
|
||||
| ------------------------- | ---------------------- | ---------------------------- |
|
||||
| 404 Not Found | ❌ No | The resource doesn't exist. |
|
||||
| 400 Bad Request | ❌ No | Your request is invalid. |
|
||||
| 401 Unauthorized | ❌ No | Try getting new credentials. |
|
||||
| 408 Request Timeout | ✅ Yes | The server timed out waiting for the request. |
|
||||
| 429 Too Many Requests     | ✅ Yes                 | Respect the `Retry-After` header, if present. |
|
||||
| 500 Internal Server Error | ✅ Yes | The server encountered an unexpected condition. |
|
||||
| 502 Bad Gateway | ✅ Yes | A gateway or proxy received an invalid response. |
|
||||
| 503 Service Unavailable | ✅ Yes | Service might recover. |
|
||||
| 504 Gateway Timeout | ✅ Yes | Temporary network issue. |
|
||||
|
||||
### gRPC
|
||||
|
||||
The following table includes some examples of gRPC status codes you may receive and whether you should or should not retry certain operations.
|
||||
|
||||
| gRPC Status Code | Retry Recommended? | Description |
|
||||
| ------------------------- | ----------------------- | ---------------------------- |
|
||||
| Code 1 CANCELLED | ❌ No | N/A |
|
||||
| Code 3 INVALID_ARGUMENT | ❌ No | N/A |
|
||||
| Code 4 DEADLINE_EXCEEDED | ✅ Yes | Retry with backoff |
|
||||
| Code 5 NOT_FOUND | ❌ No | N/A |
|
||||
| Code 8 RESOURCE_EXHAUSTED | ✅ Yes | Retry with backoff |
|
||||
| Code 14 UNAVAILABLE | ✅ Yes | Retry with backoff |
|
||||
|
||||
### Retry filter based on status codes
|
||||
|
||||
The retry filter enables granular control over retry policies by allowing users to specify HTTP and gRPC status codes or ranges for which retries should apply.
|
||||
|
||||
```yml
|
||||
spec:
|
||||
policies:
|
||||
retries:
|
||||
retry5xxOnly:
|
||||
# ...
|
||||
matching:
|
||||
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
|
||||
gRPCStatusCodes: "4,8-11,13,14" # retry the gRPC status codes in these ranges and the individually listed codes.
|
||||
```
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Field values for status codes must follow the format specified above. An incorrectly formatted value produces an error log ("Could not read resiliency policy") and the `daprd` startup sequence will proceed.
|
||||
{{% /alert %}}
|
||||
|
||||
## Demo
|
||||
|
||||
Watch a demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how to set retry status code filters using Diagrid Conductor.
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/NTnwoDhHIcQ?si=8k1IhRazjyrIJE3P&start=4565" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Learn how to override default retry policies for specific APIs.]({{< ref override-default-retries.md >}})
|
||||
- [Learn how to target your retry policies from the resiliency spec.]({{< ref targets.md >}})
|
||||
- Learn more about:
|
||||
- [Timeout policies]({{< ref timeouts.md >}})
|
||||
- [Circuit breaker policies]({{< ref circuit-breakers.md >}})
|
||||
|
||||
## Related links
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -0,0 +1,50 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Timeout resiliency policies"
|
||||
linkTitle: "Timeouts"
|
||||
weight: 10
|
||||
description: "Configure resiliency policies for timeouts"
|
||||
---
|
||||
|
||||
Network calls can fail for many reasons, causing your application to wait indefinitely for responses. By setting a timeout duration, you can cut off those unresponsive services, freeing up resources to handle new requests.
|
||||
|
||||
Timeouts are optional policies that can be used to early-terminate long-running operations. Set a realistic timeout duration that reflects actual response times in production. If you've exceeded a timeout duration:
|
||||
|
||||
- The operation in progress is terminated (if possible).
|
||||
- An error is returned.
|
||||
|
||||
## Timeout policy format
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
policies:
|
||||
# Timeouts are simple named durations.
|
||||
timeouts:
|
||||
timeoutName: timeout1
|
||||
general: 5s
|
||||
important: 60s
|
||||
largeResponse: 10s
|
||||
```
|
||||
|
||||
### Spec metadata
|
||||
|
||||
| Field | Details | Example |
| ----- | ------- | ------- |
|
||||
| timeoutName | Name of the timeout policy | `timeout1` |
|
||||
| general | Time duration for timeouts marked as "general". Uses Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. No set maximum value. | `15s`, `2m`, `1h30m` |
|
||||
| important | Time duration for timeouts marked as "important". Uses Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. No set maximum value. | `15s`, `2m`, `1h30m` |
|
||||
| largeResponse | Time duration for timeouts awaiting a large response. Uses Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. No set maximum value. | `15s`, `2m`, `1h30m` |
|
||||
|
||||
> If you don't specify a timeout value, the policy does not enforce a timeout and defaults to whatever the request client sets.
|
||||
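Timeout policies take effect once they are referenced from a target. For example, the sketch below applies the `general` timeout to service invocation calls to an app (the app-id `appB` is illustrative):

```yaml
spec:
  targets:
    apps:
      appB:  # illustrative app-id
        timeout: general
```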
|
||||
## Next steps
|
||||
|
||||
- [Learn more about default resiliency policies]({{< ref default-policies.md >}})
|
||||
- Learn more about:
|
||||
- [Retry policies]({{< ref retries-overview.md >}})
|
||||
- [Circuit breaker policies]({{< ref circuit-breakers.md >}})
|
||||
|
||||
## Related links
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -6,25 +6,32 @@ weight: 100
|
|||
description: "Configure Dapr retries, timeouts, and circuit breakers"
|
||||
---
|
||||
|
||||
Dapr provides a capability for defining and applying fault tolerance resiliency policies via a [resiliency spec]({{< ref "resiliency-overview.md#complete-example-policy" >}}). Resiliency specs are saved in the same location as components specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls. In self-hosted mode, the resiliency spec must be named `resiliency.yaml`. In Kubernetes Dapr finds the named resiliency specs used by your application. Within the resiliency spec, you can define policies for popular resiliency patterns, such as:
|
||||
Dapr provides the capability for defining and applying fault tolerance resiliency policies via a [resiliency spec]({{< ref "resiliency-overview.md#complete-example-policy" >}}). Resiliency specs are saved in the same location as components specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls.
|
||||
- **In self-hosted mode:** The resiliency spec must be named `resiliency.yaml`.
|
||||
- **In Kubernetes:** Dapr finds the named resiliency specs used by your application.
|
||||
|
||||
- [Timeouts]({{< ref "policies.md#timeouts" >}})
|
||||
- [Retries/back-offs]({{< ref "policies.md#retries" >}})
|
||||
- [Circuit breakers]({{< ref "policies.md#circuit-breakers" >}})
|
||||
## Policies
|
||||
|
||||
Policies can then be applied to [targets]({{< ref "targets.md" >}}), which include:
|
||||
You can configure Dapr resiliency policies with the following parts:
|
||||
- Metadata defining where the policy applies (like namespace and scope)
|
||||
- Policies specifying the resiliency name and behaviors, like:
|
||||
- [Timeouts]({{< ref timeouts.md >}})
|
||||
- [Retries]({{< ref retries-overview.md >}})
|
||||
- [Circuit breakers]({{< ref circuit-breakers.md >}})
|
||||
- Targets determining which interactions these policies act on, including:
|
||||
- [Apps]({{< ref "targets.md#apps" >}}) via service invocation
|
||||
- [Components]({{< ref "targets.md#components" >}})
|
||||
- [Actors]({{< ref "targets.md#actors" >}})
|
||||
|
||||
- [Apps]({{< ref "targets.md#apps" >}}) via service invocation
|
||||
- [Components]({{< ref "targets.md#components" >}})
|
||||
- [Actors]({{< ref "targets.md#actors" >}})
|
||||
Once defined, you can apply this configuration to your local Dapr components directory, or to your Kubernetes cluster using:
|
||||
|
||||
Additionally, resiliency policies can be [scoped to specific apps]({{< ref "component-scopes.md#application-access-to-components-with-scopes" >}}).
|
||||
```bash
|
||||
kubectl apply -f <resiliency-spec-name>.yaml
|
||||
```
|
||||
|
||||
## Demo video
|
||||
Additionally, you can scope resiliency policies [to specific apps]({{< ref "component-scopes.md#application-access-to-components-with-scopes" >}}).
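A minimal sketch of app scoping, assuming the top-level `scopes` field of the Resiliency resource:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
# Only app1 and app2 can use this resiliency spec
scopes:
  - app1
  - app2
spec:
  policies:
    timeouts:
      general: 5s
```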
|
||||
|
||||
Learn more about [how to write resilient microservices with Dapr](https://youtu.be/uC-4Q5KFq98?si=JSUlCtcUNZLBM9rW).
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/uC-4Q5KFq98?si=JSUlCtcUNZLBM9rW" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
> See [known limitations](#limitations).
|
||||
|
||||
## Resiliency policy structure
|
||||
|
||||
|
@ -166,7 +173,11 @@ spec:
|
|||
circuitBreaker: pubsubCB
|
||||
```
|
||||
|
||||
## Related links
|
||||
## Limitations
|
||||
|
||||
- **Service invocation via gRPC:** Currently, resiliency policies are not supported for service invocation via gRPC.
|
||||
|
||||
## Demos
|
||||
|
||||
Watch this video on how to use [resiliency](https://www.youtube.com/watch?t=184&v=7D6HOU3Ms6g&feature=youtu.be):
|
||||
|
||||
|
@ -174,11 +185,20 @@ Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184
|
|||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/7D6HOU3Ms6g?start=184" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
</div>
|
||||
|
||||
Learn more about [how to write resilient microservices with Dapr](https://youtu.be/uC-4Q5KFq98?si=JSUlCtcUNZLBM9rW).
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/uC-4Q5KFq98?si=JSUlCtcUNZLBM9rW" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
|
||||
|
||||
## Next steps
|
||||
Learn more about resiliency policies and targets:
|
||||
- [Policies]({{< ref "policies.md" >}})
|
||||
- Policies
|
||||
- [Timeouts]({{< ref "timeouts.md" >}})
|
||||
- [Retries]({{< ref "retries-overview.md" >}})
|
||||
- [Circuit breakers]({{< ref circuit-breakers.md >}})
|
||||
- [Targets]({{< ref "targets.md" >}})
|
||||
|
||||
## Related links
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -231,6 +231,8 @@ kubectl rollout restart -n <DAPR_NAMESPACE> deployment/dapr-sentry
|
|||
```bash
|
||||
kubectl rollout restart deploy/dapr-operator -n <DAPR_NAMESPACE>
|
||||
kubectl rollout restart statefulsets/dapr-placement-server -n <DAPR_NAMESPACE>
|
||||
kubectl rollout restart deploy/dapr-sidecar-injector -n <DAPR_NAMESPACE>
|
||||
kubectl rollout restart deploy/dapr-scheduler-server -n <DAPR_NAMESPACE>
|
||||
```
|
||||
|
||||
4. Restart your Dapr applications to pick up the latest trust bundle.
|
||||
|
@ -332,12 +334,13 @@ Example:
|
|||
dapr status -k
|
||||
|
||||
NAME NAMESPACE HEALTHY STATUS REPLICAS VERSION AGE CREATED
|
||||
dapr-sentry dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
|
||||
dapr-dashboard dapr-system True Running 1 0.9.0 17d 2022-03-15 09:29.45
|
||||
dapr-sidecar-injector dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
|
||||
dapr-operator dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
|
||||
dapr-placement-server dapr-system True Running 1 1.7.0 17d 2022-03-15 09:29.45
|
||||
⚠ Dapr root certificate of your Kubernetes cluster expires in 2 days. Expiry date: Mon, 04 Apr 2022 15:01:03 UTC.
|
||||
dapr-operator dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.26
|
||||
dapr-placement-server dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.27
|
||||
dapr-dashboard dapr-system True Running 1 0.15.0 4m 2025-02-19 17:36.27
|
||||
dapr-sentry dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.26
|
||||
dapr-scheduler-server dapr-system True Running 3 1.15.0 4m 2025-02-19 17:36.27
|
||||
dapr-sidecar-injector dapr-system True Running 1 1.15.0 4m 2025-02-19 17:36.26
|
||||
⚠ Dapr root certificate of your Kubernetes cluster expires in 2 days. Expiry date: Mon, 04 Apr 2025 15:01:03 UTC.
|
||||
Please see docs.dapr.io for certificate renewal instructions to avoid service interruptions.
|
||||
```
|
||||
|
||||
|
|
|
@ -17,7 +17,6 @@ For CLI there is no explicit opt-in, just the version that this was first made a
|
|||
| --- | --- | --- | --- | --- |
|
||||
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{<ref "components-concept#pluggable-components" >}})| v1.9 |
|
||||
| **Multi-App Run for Kubernetes** | Configure multiple Dapr applications from a single configuration file and run from a single command on Kubernetes | `dapr run -k -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.12 |
|
||||
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
|
||||
| **Cryptography** | Encrypt or decrypt data without having to manage secrets keys | N/A | [Cryptography concept]({{< ref "components-concept#cryptography" >}})| v1.11 |
|
||||
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
|
||||
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
|
||||
|
|
|
@ -45,11 +45,12 @@ The table below shows the versions of Dapr releases that have been tested togeth
|
|||
|
||||
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|
||||
|--------------------|:--------:|:--------|---------|---------|---------|------------|
|
||||
| September 16th 2024 | 1.14.4</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
|
||||
| February 27th 2025 | 1.15.0</br> | 1.15.0 | Java 1.14.0 </br>Go 1.12.0 </br>PHP 1.2.0 </br>Python 1.15.0 </br>.NET 1.15.0 </br>JS 3.5.0 </br>Rust 0.16 | 0.15.0 | Supported (current) | [v1.15.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.15.0) |
|
||||
| September 16th 2024 | 1.14.4</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
|
||||
| September 13th 2024 | 1.14.3</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
|
||||
| September 6th 2024 | 1.14.2</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
|
||||
| August 14th 2024 | 1.14.1</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
|
||||
| August 14th 2024 | 1.14.0</br> | 1.14.0 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
|
||||
| September 6th 2024 | 1.14.2</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
|
||||
| August 14th 2024 | 1.14.1</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
|
||||
| August 14th 2024 | 1.14.0</br> | 1.14.0 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
|
||||
| May 29th 2024 | 1.13.4</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
|
||||
| May 21st 2024 | 1.13.3</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
|
||||
| April 3rd 2024 | 1.13.2</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) |
|
||||
|
@ -143,7 +144,8 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
|
|||
| 1.11.0 to 1.11.4 | N/A | 1.12.4 |
|
||||
| 1.12.0 to 1.12.4 | N/A | 1.13.5 |
|
||||
| 1.13.0 to 1.13.5 | N/A | 1.14.0 |
|
||||
| 1.14.0 to 1.14.2 | N/A | 1.14.2 |
|
||||
| 1.14.0 to 1.14.4 | N/A | 1.14.4 |
|
||||
| 1.15.0 | N/A | 1.15.0 |
|
||||
|
||||
## Upgrade on Hosting platforms
|
||||
|
||||
|
|
|
@ -17,7 +17,7 @@ Dapr provides an API to interact with Large Language Models (LLMs) and enables c
|
|||
This endpoint lets you converse with LLMs.
|
||||
|
||||
```
|
||||
POST /v1.0-alpha1/conversation/<llm-name>/converse
|
||||
POST http://localhost:<daprPort>/v1.0-alpha1/conversation/<llm-name>/converse
|
||||
```
|
||||
|
||||
### URL parameters
|
||||
|
@ -30,17 +30,34 @@ POST /v1.0-alpha1/conversation/<llm-name>/converse
|
|||
|
||||
| Field | Description |
|
||||
| --------- | ----------- |
|
||||
| `conversationContext` | |
|
||||
| `inputs` | |
|
||||
| `parameters` | |
|
||||
| `inputs` | Inputs for the conversation. Multiple inputs at one time are supported. Required |
|
||||
| `cacheTTL` | Time-to-live after which a cached prompt expires. Uses Go duration format. Optional |
|
||||
| `scrubPII` | A boolean value to enable obfuscation of sensitive information returned from the LLM. Optional |
|
||||
| `temperature` | A float value to control the temperature of the model. Used to optimize for consistency and creativity. Optional |
|
||||
| `metadata` | [Metadata](#metadata) passed to conversation components. Optional |
|
||||
|
||||
#### Input body
|
||||
|
||||
### Request content
|
||||
| Field | Description |
|
||||
| --------- | ----------- |
|
||||
| `content` | The message content to send to the LLM. Required |
|
||||
| `role` | The role for the LLM to assume. Possible values: `user`, `tool`, `assistant` |
|
||||
| `scrubPII` | A boolean value to enable obfuscation of sensitive information present in the content field. Optional |
|
||||
|
||||
### Request content example
|
||||
|
||||
```json
|
||||
REQUEST = {
|
||||
"inputs": ["what is Dapr", "Why use Dapr"],
|
||||
"parameters": {},
|
||||
"inputs": [
|
||||
{
|
||||
"content": "What is Dapr?",
|
||||
"role": "user", // Optional
|
||||
"scrubPII": "true" // Optional. Will obfuscate any sensitive information found in the content field
|
||||
}
|
||||
],
|
||||
"cacheTTL": "10m", // Optional
|
||||
"scrubPII": "true", // Optional. Will obfuscate any sensitive information returned from the LLM
|
||||
"temperature": 0.5 // Optional. Optimizes for consistency (0) or creativity (1)
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -50,7 +67,7 @@ Code | Description
|
|||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
`500` | Request formatted correctly, error in Dapr code or underlying component
|
||||
|
||||
### Response content
|
||||
|
||||
|
@ -71,4 +88,5 @@ RESPONSE = {
|
|||
|
||||
## Next steps
|
||||
|
||||
[Conversation API overview]({{< ref conversation-overview.md >}})
|
||||
- [Conversation API overview]({{< ref conversation-overview.md >}})
|
||||
- [Supported conversation components]({{< ref supported-conversation >}})
|
|
@ -1,156 +0,0 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Error codes returned by APIs"
|
||||
linkTitle: "Error codes"
|
||||
description: "Detailed reference of the Dapr API error codes"
|
||||
weight: 1400
|
||||
---
|
||||
|
||||
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, for example:
|
||||
|
||||
```json
|
||||
{
|
||||
"errorCode": "ERR_STATE_GET",
|
||||
"message": "Requested state key does not exist in state store."
|
||||
}
|
||||
```
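When calling the HTTP API directly, you can branch on the `errorCode` field. A minimal shell sketch using parameter expansion on the sample response above (illustrative only; in practice a JSON parser such as `jq` is preferable):

```shell
# Sample error body as returned by the Dapr HTTP API (illustrative)
RESPONSE='{"errorCode":"ERR_STATE_GET","message":"Requested state key does not exist in state store."}'

# Extract the errorCode value with shell parameter expansion
code=${RESPONSE#*\"errorCode\":\"}
code=${code%%\"*}
echo "$code"
```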
|
||||
|
||||
The following tables list the error codes returned by Dapr runtime:
|
||||
|
||||
### Actors API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------- | ------------------------------------------ |
|
||||
| ERR_ACTOR_INSTANCE_MISSING | Error when an actor instance is missing. |
|
||||
| ERR_ACTOR_RUNTIME_NOT_FOUND | Error when the actor runtime is not found. |
|
||||
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. |
|
||||
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. |
|
||||
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. |
|
||||
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. |
|
||||
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. |
|
||||
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. |
|
||||
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. |
|
||||
| ERR_ACTOR_STATE_GET | Error getting the state for an actor. |
|
||||
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. |
|
||||
| ERR_ACTOR_REMINDER_NON_HOSTED | Error setting reminder for an actor. |
|
||||
|
||||
### Workflows API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------- | ----------------------------------------------------------- |
|
||||
| ERR_GET_WORKFLOW | Error getting workflow. |
|
||||
| ERR_START_WORKFLOW | Error starting the workflow. |
|
||||
| ERR_PAUSE_WORKFLOW | Error pausing the workflow. |
|
||||
| ERR_RESUME_WORKFLOW | Error resuming the workflow. |
|
||||
| ERR_TERMINATE_WORKFLOW | Error terminating the workflow. |
|
||||
| ERR_PURGE_WORKFLOW | Error purging workflow. |
|
||||
| ERR_RAISE_EVENT_WORKFLOW | Error raising an event within the workflow. |
|
||||
| ERR_WORKFLOW_COMPONENT_MISSING | Error when a workflow component is missing a configuration. |
|
||||
| ERR_WORKFLOW_COMPONENT_NOT_FOUND | Error when a workflow component is not found. |
|
||||
| ERR_WORKFLOW_EVENT_NAME_MISSING | Error when the event name for a workflow is missing. |
|
||||
| ERR_WORKFLOW_NAME_MISSING | Error when the workflow name is missing. |
|
||||
| ERR_INSTANCE_ID_INVALID | Error invalid workflow instance ID provided. |
|
||||
| ERR_INSTANCE_ID_NOT_FOUND | Error workflow instance ID not found. |
|
||||
| ERR_INSTANCE_ID_PROVIDED_MISSING | Error workflow instance ID was provided but missing. |
|
||||
| ERR_INSTANCE_ID_TOO_LONG | Error workflow instance ID exceeds allowable length. |
|
||||
|
||||
### State Management API
|
||||
|
||||
| Error Code | Description |
|
||||
| ------------------------------------- | ------------------------------------------------------------------------- |
|
||||
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. |
|
||||
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. |
|
||||
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. |
|
||||
| ERR_STATE_GET | Error getting a state for state store. |
|
||||
| ERR_STATE_DELETE | Error deleting a state from state store. |
|
||||
| ERR_STATE_SAVE | Error saving a state in state store. |
|
||||
| ERR_STATE_TRANSACTION | Error encountered during state transaction. |
|
||||
| ERR_STATE_BULK_GET | Error performing bulk retrieval of state entries. |
|
||||
| ERR_STATE_QUERY | Error querying the state store. |
|
||||
| ERR_STATE_STORE_NOT_CONFIGURED | Error state store is not configured. |
|
||||
| ERR_STATE_STORE_NOT_SUPPORTED | Error state store is not supported. |
|
||||
| ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | Error exceeded maximum allowable transactions. |
|
||||
|
||||
### Configuration API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------------- | -------------------------------------------- |
|
||||
| ERR_CONFIGURATION_GET | Error retrieving configuration. |
|
||||
| ERR_CONFIGURATION_STORE_NOT_CONFIGURED | Error configuration store is not configured. |
|
||||
| ERR_CONFIGURATION_STORE_NOT_FOUND | Error configuration store not found. |
|
||||
| ERR_CONFIGURATION_SUBSCRIBE | Error subscribing to a configuration. |
|
||||
| ERR_CONFIGURATION_UNSUBSCRIBE | Error unsubscribing from a configuration. |
|
||||
|
||||
### Crypto API
|
||||
|
||||
| Error Code | Description |
|
||||
| ----------------------------------- | ------------------------------------------ |
|
||||
| ERR_CRYPTO | General crypto building block error. |
|
||||
| ERR_CRYPTO_KEY | Error related to a crypto key. |
|
||||
| ERR_CRYPTO_PROVIDER_NOT_FOUND | Error specified crypto provider not found. |
|
||||
| ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | Error no crypto providers configured. |
|
||||
|
||||
### Secrets API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------- | ---------------------------------------------------- |
|
||||
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. |
|
||||
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. |
|
||||
| ERR_SECRET_GET | Error retrieving the specified secret. |
|
||||
| ERR_PERMISSION_DENIED | Error access denied due to insufficient permissions. |
|
||||
|
||||
### Pub/Sub API
|
||||
|
||||
| Error Code | Description |
|
||||
| --------------------------- | -------------------------------------------------------- |
|
||||
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. |
|
||||
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. |
|
||||
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. |
|
||||
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. |
|
||||
| ERR_PUBSUB_EMPTY | Error empty Pub/Sub. |
|
||||
| ERR_PUBSUB_NOT_CONFIGURED | Error Pub/Sub component is not configured. |
|
||||
| ERR_PUBSUB_REQUEST_METADATA | Error with metadata in Pub/Sub request. |
|
||||
| ERR_PUBSUB_EVENTS_SER | Error serializing Pub/Sub events. |
|
||||
| ERR_PUBLISH_OUTBOX | Error publishing message to the outbox. |
|
||||
| ERR_TOPIC_NAME_EMPTY | Error topic name for Pub/Sub message is empty. |
|
||||
|
||||
### Conversation API
|
||||
|
||||
| Error Code | Description |
|
||||
| ------------------------------- | ----------------------------------------------- |
|
||||
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. |
|
||||
| ERR_DIRECT_INVOKE | Error in direct invocation. |
|
||||
| ERR_CONVERSATION_INVALID_PARMS | Error invalid parameters for conversation. |
|
||||
| ERR_CONVERSATION_INVOKE | Error invoking the conversation. |
|
||||
| ERR_CONVERSATION_MISSING_INPUTS | Error missing required inputs for conversation. |
|
||||
| ERR_CONVERSATION_NOT_FOUND | Error conversation not found. |
|
||||
|
||||
### Distributed Lock API
|
||||
|
||||
| Error Code | Description |
|
||||
| ----------------------------- | ----------------------------------- |
|
||||
| ERR_TRY_LOCK | Error attempting to acquire a lock. |
|
||||
| ERR_UNLOCK | Error attempting to release a lock. |
|
||||
| ERR_LOCK_STORE_NOT_CONFIGURED | Error lock store is not configured. |
|
||||
| ERR_LOCK_STORE_NOT_FOUND | Error lock store not found. |
|
||||
|
||||
### Healthz
|
||||
|
||||
| Error Code | Description |
|
||||
| ----------------------------- | --------------------------------------------------------------- |
|
||||
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. |
|
||||
| ERR_HEALTH_APPID_NOT_MATCH | Error the app-id does not match expected value in health check. |
|
||||
| ERR_OUTBOUND_HEALTH_NOT_READY | Error outbound connection health is not ready. |
|
||||
|
||||
### Common
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------- | ------------------------------------------------ |
|
||||
| ERR_API_UNIMPLEMENTED | Error API is not implemented. |
|
||||
| ERR_APP_CHANNEL_NIL | Error application channel is nil. |
|
||||
| ERR_BAD_REQUEST | Error client request is badly formed or invalid. |
|
||||
| ERR_BODY_READ | Error reading body. |
|
||||
| ERR_INTERNAL | Internal server error encountered. |
|
||||
| ERR_MALFORMED_REQUEST | Error with a malformed request. |
|
||||
| ERR_MALFORMED_REQUEST_DATA | Error request data is malformed. |
|
||||
| ERR_MALFORMED_RESPONSE | Error response data is malformed. |
|
|
@ -13,11 +13,11 @@ The jobs API is currently in alpha.
|
|||
With the jobs API, you can schedule jobs and tasks in the future.
|
||||
|
||||
> The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
|
||||
> recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs.
|
||||
> recommended as they implement the gRPC APIs providing higher performance and capability than the HTTP APIs. This is because HTTP requires JSON marshalling, which can be expensive, while gRPC transmits and stores the data as-is, making it more performant.
|
||||
|
||||
## Schedule a job
|
||||
|
||||
Schedule a job with a name.
|
||||
Schedule a job with a name. Jobs are scheduled based on the clock of the server where the Scheduler service is running. The timestamp is not converted to UTC. You can provide the timezone with the timestamp in RFC3339 format to specify which timezone you'd like the job to adhere to. If no timezone is provided, the server's local time is used.
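For example, an RFC3339 timestamp with an explicit timezone offset might look like the following (an illustrative request body; the field names `data` and `dueTime` are assumptions based on the jobs API schema):

```json
{
  "data": { "task": "db-backup" },
  "dueTime": "2024-10-02T15:00:00+02:00"
}
```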
|
||||
|
||||
```
|
||||
POST http://localhost:3500/v1.0-alpha1/jobs/<name>
|
||||
|
|
|
@ -37,6 +37,9 @@ A list of features enabled via Configuration spec (including build-time override
|
|||
### App connection details
|
||||
The metadata API returns information related to Dapr's connection to the app. This includes the app port, protocol, host, and max concurrency, along with health check details.
|
||||
|
||||
### Scheduler connection details
|
||||
Information related to the connection to one or more scheduler hosts.
|
||||
|
||||
### Attributes
|
||||
|
||||
The metadata API allows you to store additional attribute information in the format of key-value pairs. These are ephemeral in-memory and are not persisted if a sidecar is reloaded. This information should be added at the time of a sidecar creation (for example, after the application has started).
|
||||
|
@ -82,6 +85,7 @@ components | [Metadata API Response Component](#metadataapiresponsec
|
|||
httpEndpoints | [Metadata API Response HttpEndpoint](#metadataapiresponsehttpendpoint)[] | A json encoded array of loaded HttpEndpoints metadata.
|
||||
subscriptions | [Metadata API Response Subscription](#metadataapiresponsesubscription)[] | A json encoded array of pub/sub subscriptions metadata.
|
||||
appConnectionProperties| [Metadata API Response AppConnectionProperties](#metadataapiresponseappconnectionproperties) | A json encoded object of app connection properties.
|
||||
scheduler | [Metadata API Response Scheduler](#metadataapiresponsescheduler) | A json encoded object of scheduler connection properties.
|
||||
|
||||
<a id="metadataapiresponseactor"></a>**Metadata API Response Registered Actor**
|
||||
|
||||
|
@ -142,6 +146,12 @@ healthProbeInterval | string | Time between each health probe, in go duration fo
|
|||
healthProbeTimeout | string | Timeout for each health probe, in go duration format.
|
||||
healthThreshold | integer | Max number of failed health probes before the app is considered unhealthy.
|
||||
|
||||
<a id="metadataapiresponsescheduler"></a>**Metadata API Response Scheduler**
|
||||
|
||||
Name | Type | Description
|
||||
---- | ---- | -----------
|
||||
connected_addresses | string[] | List of strings representing the addresses of the connected scheduler hosts.
|
||||
|
||||
|
||||
### Examples
|
||||
|
||||
|
@ -215,6 +225,13 @@ curl http://localhost:3500/v1.0/metadata
|
|||
"healthProbeTimeout": "500ms",
|
||||
"healthThreshold": 3
|
||||
}
|
||||
},
|
||||
"scheduler": {
|
||||
"connected_addresses": [
|
||||
"10.244.0.47:50006",
|
||||
"10.244.0.48:50006",
|
||||
"10.244.0.49:50006"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
@ -338,6 +355,13 @@ Get the metadata information to confirm your custom attribute was added:
|
|||
"healthProbeTimeout": "500ms",
|
||||
"healthThreshold": 3
|
||||
}
|
||||
},
|
||||
"scheduler": {
|
||||
"connected_addresses": [
|
||||
"10.244.0.47:50006",
|
||||
"10.244.0.48:50006",
|
||||
"10.244.0.49:50006"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
|
|
@ -6,7 +6,7 @@ description: "Detailed documentation on the workflow API"
|
|||
weight: 300
|
||||
---
|
||||
|
||||
Dapr provides users with the ability to interact with workflows and comes with a built-in `dapr` component.
|
||||
Dapr provides users with the ability to interact with workflows through its built-in workflow engine, which is implemented using Dapr Actors. This workflow engine is accessed using the name `dapr` in API calls as the `workflowComponentName`.
|
||||
|
||||
## Start workflow request
|
||||
|
||||
|
@ -36,7 +36,7 @@ Code | Description
|
|||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
`500` | Request formatted correctly, error in Dapr code
|
||||
|
||||
### Response content
|
||||
|
||||
|
@ -76,7 +76,7 @@ Code | Description
|
|||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
`500` | Request formatted correctly, error in Dapr code
|
||||
|
||||
### Response content
|
||||
|
||||
|
@ -163,7 +163,7 @@ Code | Description
|
|||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Error in Dapr code or underlying component
|
||||
`500` | Error in Dapr code
|
||||
|
||||
### Response content
|
||||
|
||||
|
@ -194,7 +194,7 @@ Code | Description
|
|||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Error in Dapr code or underlying component
|
||||
`500` | Error in Dapr code
|
||||
|
||||
### Response content
|
||||
|
||||
|
@ -221,7 +221,7 @@ Code | Description
|
|||
---- | -----------
|
||||
`200` | OK
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
`500` | Error in Dapr code
|
||||
|
||||
### Response content
|
||||
|
||||
|
@ -244,30 +244,6 @@ Parameter | Description
|
|||
--------- | -----------
|
||||
`runtimeStatus` | The status of the workflow instance. Values include: `"RUNNING"`, `"COMPLETED"`, `"CONTINUED_AS_NEW"`, `"FAILED"`, `"CANCELED"`, `"TERMINATED"`, `"PENDING"`, `"SUSPENDED"`
|
||||
|
||||
## Component format
|
||||
|
||||
A Dapr `workflow.yaml` component file has the following structure:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: workflow.<TYPE>
|
||||
version: v1.0-alpha1
|
||||
metadata:
|
||||
- name: <NAME>
|
||||
value: <VALUE>
|
||||
```
|
||||
|
||||
| Setting | Description |
|
||||
| ------- | ----------- |
|
||||
| `metadata.name` | The name of the workflow component. |
|
||||
| `spec/metadata` | Additional metadata parameters specified by workflow component |
|
||||
|
||||
However, Dapr comes with a built-in `dapr` workflow component that is built on Dapr Actors. No component file is required to use the built-in Dapr workflow component.
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [Workflow API overview]({{< ref workflow-overview.md >}})
|
||||
|
|
|
@ -58,6 +58,8 @@ spec:
|
|||
- name: storageConnectionString
|
||||
value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
|
||||
# Optional metadata
|
||||
- name: getAllMessageProperties
|
||||
value: "true"
|
||||
- name: direction
|
||||
value: "input, output"
|
||||
```
|
||||
|
@ -84,6 +86,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `storageAccountKey` | Y* | Input | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Input | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`
| `storageContainerName` | Y | Input | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
| `getAllMessageProperties` | N | Input | When set to `true`, retrieves all user/app/custom properties from the Event Hub message and forwards them in the returned event metadata. Default setting is `"false"`. | `"true"`, `"false"`
| `direction` | N | Input/Output | The direction of the binding. | `"input"`, `"output"`, `"input, output"`

### Microsoft Entra ID authentication

@ -9,7 +9,7 @@ aliases:

## Component format

To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{ ref bindings-overview.md }}) on how to create and apply a binding configuration.
To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{< ref bindings-overview.md >}}) on how to create and apply a binding configuration.

```yaml
apiVersion: dapr.io/v1alpha1

@ -675,7 +675,12 @@ To perform a `throw-error` operation, invoke the Zeebe command binding with a `P
  "data": {
    "jobKey": 2251799813686172,
    "errorCode": "product-fetch-error",
    "errorMessage": "The product could not be fetched"
    "errorMessage": "The product could not be fetched",
    "variables": {
      "productId": "some-product-id",
      "productName": "some-product-name",
      "productKey": "some-product-key"
    }
  },
  "operation": "throw-error"
}

@ -686,6 +691,11 @@ The data parameters are:
- `jobKey` - the unique job identifier, as obtained when activating the job
- `errorCode` - the error code that will be matched with an error catch event
- `errorMessage` - (optional) an error message that provides additional context
- `variables` - (optional) JSON document that will instantiate the variables at the local scope of the job's associated task; it must be a JSON object, as variables will be mapped in a key-value fashion. For example, `{ "a": 1, "b": 2 }` will create two variables, named `a` and `b` respectively, with their associated values. `[{ "a": 1, "b": 2 }]` would not be a valid argument, as the root of the JSON document is an array and not an object.

##### Response

@ -37,6 +37,10 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `model` | N | The LLM to use. Defaults to Bedrock's default provider model from Amazon. | `amazon.titan-text-express-v1` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Authenticating AWS

Instead of using a `key` parameter, AWS Bedrock authenticates using Dapr's standard method of IAM or static credentials. [Learn more about authenticating with AWS.]({{< ref authenticating-aws.md >}})

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})

@ -0,0 +1,39 @@
---
type: docs
title: "DeepSeek"
linkTitle: "DeepSeek"
description: Detailed information on the DeepSeek conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: deepseek
spec:
  type: conversation.deepseek
  metadata:
  - name: key
    value: mykey
  - name: maxTokens
    value: 2048
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for DeepSeek. | `mykey` |
| `maxTokens` | N | The max amount of tokens for each request. | `2048` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})

@ -12,7 +12,7 @@ no_list: true
The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. [Learn how to set up different brokers for Dapr publish and subscribe.]({{< ref setup-pubsub.md >}})

{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
Each pub/sub component has its own built-in retry behaviors. Before explicitly applying a [Dapr resiliency policy]({{< ref "policies.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
Each pub/sub component has its own built-in retry behaviors, unique to the message broker solution and unrelated to Dapr. Before explicitly applying a [Dapr resiliency policy]({{< ref "resiliency-overview.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
{{% /alert %}}

@ -459,8 +459,8 @@ Apache Kafka supports the following bulk metadata options:

| Configuration | Default |
|----------|---------|
| `maxBulkAwaitDurationMs` | `10000` (10s) |
| `maxBulkSubCount` | `80` |
| `maxAwaitDurationMs` | `10000` (10s) |
| `maxMessagesCount` | `80` |

## Per-call metadata fields

@ -540,6 +540,7 @@ app.include_router(router)
```

{{% /codetab %}}

{{< /tabs >}}

## Receiving message headers with special characters

@ -198,6 +198,44 @@ Entity management is only possible when using [Microsoft Entra ID Authentication

> Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.

## Receiving custom properties

By default, Dapr does not forward [custom properties](https://learn.microsoft.com/azure/event-hubs/add-custom-data-event). However, by setting the subscription metadata `requireAllProperties` to `"true"`, you can receive custom properties as HTTP headers.

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-pub-sub
spec:
  topic: orders
  routes:
    default: /checkout
  pubsubname: order-pub-sub
  metadata:
    requireAllProperties: "true"
```

The same can be achieved using the Dapr SDK:

{{< tabs ".NET" >}}

{{% codetab %}}

```csharp
[Topic("order-pub-sub", "orders")]
[TopicMetadata("requireAllProperties", "true")]
[HttpPost("checkout")]
public ActionResult Checkout(Order order, [FromHeader] int priority)
{
    return Ok();
}
```

{{% /codetab %}}

{{< /tabs >}}

## Subscribing to Azure IoT Hub Events

Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.

@ -54,13 +54,13 @@ The above example uses secrets as plain strings. It is recommended to use a secr

The MQTT pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. If the service marks the message as not processed, the message won't be acknowledged back to the broker. Only if the broker resends the message would it be retried.

To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the MQTT pub/sub component.
To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the MQTT pub/sub component.

There is a crucial difference between the two ways of retries:

1. Re-delivery of unacknowledged messages is completely dependent on the broker. Dapr does not guarantee it. Some brokers like [emqx](https://www.emqx.io/) and [vernemq](https://vernemq.com/) support it, but it is not part of the [MQTT3 spec](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718103).

2. Using a [retry resiliency policy]({{< ref "policies.md#retries" >}}) makes the same Dapr sidecar retry redelivering the messages. So it is the same Dapr sidecar and the same app receiving the same message.
2. Using a [retry resiliency policy]({{< ref "retries-overview.md" >}}) makes the same Dapr sidecar retry redelivering the messages. So it is the same Dapr sidecar and the same app receiving the same message.

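As a sketch of the second approach, a retry resiliency policy targeting an MQTT pub/sub component could look like the following; the component name `mqtt-pubsub` and policy name `pubsubRetry` are illustrative, not taken from this page:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: mqtt-resiliency
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 3
  targets:
    components:
      mqtt-pubsub:
        inbound:
          retry: pubsubRetry
```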
### Communication using TLS

@ -167,7 +167,7 @@ spec:

### Enabling message delivery retries

The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once and it is not retried in case of failure. To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying delivery of the message to the same app instance and not other instances.
The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once and it is not retried in case of failure. To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying delivery of the message to the same app instance and not other instances.

### Delay queue

@ -166,7 +166,7 @@ Note that while the `caCert` and `clientCert` values may not be secrets, they ca
The RabbitMQ pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. When the service returns a result, the message is marked as consumed regardless of whether it was processed correctly or not. Note that this is common among all Dapr pub/sub components and not just RabbitMQ.
Dapr can try redelivering a message a second time when `autoAck` is set to `false` and `requeueInFailure` is set to `true`.
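The `autoAck`/`requeueInFailure` combination mentioned above can be sketched in the component metadata; the component name and connection string here are illustrative placeholders, not values from this page:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: rabbitmq-pubsub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  - name: connectionString
    value: "amqp://localhost:5672"
  # Require explicit acknowledgement so a failed message can be requeued
  - name: autoAck
    value: "false"
  - name: requeueInFailure
    value: "true"
```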

To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the RabbitMQ pub/sub component.
To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "retries-overview.md" >}}) to the RabbitMQ pub/sub component.

There is a crucial difference between the two ways to retry messages:

@ -52,7 +52,7 @@ spec:
  # Controls the default mode for executing queries. (optional)
  #- name: queryExecMode
  #  value: ""
  # Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
  # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
  #- name: actorStateStore
  #  value: "true"
```

@ -52,7 +52,7 @@ spec:
  # Controls the default mode for executing queries. (optional)
  #- name: queryExecMode
  #  value: ""
  # Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
  # Uncomment this if you wish to use PostgreSQL as a state store for actors or workflows (optional)
  #- name: actorStateStore
  #  value: "true"
```

@ -1,10 +0,0 @@
---
type: docs
title: "Workflow backend component specs"
linkTitle: "Workflow backend"
weight: 2000
description: The supported workflow backends that orchestrate workflows and save workflow state
no_list: true
---

{{< partial "components/description.html" >}}

@ -1,24 +0,0 @@
---
type: docs
title: "Actor workflow backend"
linkTitle: "Actor workflow backend"
description: Detailed information on the Actor workflow backend component
---

## Component format

The Actor workflow backend is the default backend in Dapr. If no workflow backend is explicitly defined, the Actor backend will be used automatically.

You don't need to define any components to use the Actor workflow backend. It's ready to use out-of-the-box.

However, if you wish to explicitly define the Actor workflow backend as a component, you can do so, as shown in the example below.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: actorbackend
spec:
  type: workflowbackend.actor
  version: v1
```

@ -36,6 +36,7 @@ spec:
    labels:
    - name: <LABEL-NAME>
      regex: {}
  recordErrorCodes: <TRUE-OR-FALSE>
  latencyDistributionBuckets:
  - <BUCKET-VALUE-MS-0>
  - <BUCKET-VALUE-MS-1>

@ -64,7 +64,7 @@ targets: # Required

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| policies | Y | The configuration of resiliency policies, including: <br><ul><li>`timeouts`</li><li>`retries`</li><li>`circuitBreakers`</li></ul> <br> [See more examples with all of the built-in policies]({{< ref policies.md >}}) | timeout: `general`<br>retry: `retryForever`<br>circuit breaker: `simpleCB` |
| policies | Y | The configuration of resiliency policies, including: <br><ul><li>`timeouts`</li><li>`retries`</li><li>`circuitBreakers`</li></ul> <br> [See more examples with all of the built-in policies]({{< ref resiliency-overview.md >}}) | timeout: `general`<br>retry: `retryForever`<br>circuit breaker: `simpleCB` |
| targets | Y | The configuration for the applications, actors, or components that use the resiliency policies. <br>[See more examples in the resiliency targets guide]({{< ref targets.md >}}) | `apps` <br>`components`<br>`actors` |
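
Putting the two sections together, a resiliency spec wiring the policy names from the table above (`general`, `retryForever`, `simpleCB`) to an app target could be sketched as follows; the app ID `appB` and the specific durations are illustrative assumptions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    timeouts:
      general: 5s
    retries:
      retryForever:
        policy: constant
        duration: 5s
        maxRetries: -1   # retry indefinitely
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 30s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      appB:
        timeout: general
        retry: retryForever
        circuitBreaker: simpleCB
```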
@ -18,3 +18,8 @@
  state: Alpha
  version: v1
  since: "1.15"
- component: DeepSeek
  link: deepseek
  state: Alpha
  version: v1
  since: "1.15"

@ -0,0 +1,45 @@
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
{{ hugo.Generator }}
{{ range .AlternativeOutputFormats -}}
<link rel="{{ .Rel }}" type="{{ .MediaType.Type }}" href="{{ .Permalink | safeURL }}">
{{ end -}}

{{ $outputFormat := partial "outputformat.html" . -}}
{{ if and hugo.IsProduction (ne $outputFormat "print") -}}
<meta name="robots" content="index, follow">
{{ else -}}
<meta name="robots" content="noindex, nofollow">
{{ end -}}

{{ partialCached "favicons.html" . }}
<title>
  {{- if .IsHome -}}
    {{ .Site.Title -}}
  {{ else -}}
    {{ with .Title }}{{ . }} | {{ end -}}
    {{ .Site.Title -}}
  {{ end -}}
</title>
{{ $desc := .Page.Description | default (.Page.Content | safeHTML | truncate 150) -}}
<meta name="description" content="{{ $desc }}">
{{ template "_internal/opengraph.html" . -}}
{{ template "_internal/schema.html" . -}}
{{ template "_internal/twitter_cards.html" . -}}
{{ partialCached "head-css.html" . "asdf" -}}
<script
  src="https://code.jquery.com/jquery-3.5.1.min.js"
  integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0="
  crossorigin="anonymous"></script>
{{ if .Site.Params.offlineSearch -}}
<script
  src="https://unpkg.com/lunr@2.3.8/lunr.min.js"
  integrity="sha384-vRQ9bDyE0Wnu+lMfm57BlYLO0/XauFuKpVsZPs7KEDwYKktWi5+Kz3MP8++DFlRY"
  crossorigin="anonymous"></script>
{{ end -}}

{{ if .Site.Params.prism_syntax_highlighting -}}
<link rel="stylesheet" href="{{ "/css/prism.css" | relURL }}"/>
{{ end -}}

{{ partial "hooks/head-end.html" . -}}

@ -1 +1 @@
{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.4{{ else if .Get "cli" }}1.14.1{{ else }}1.14.1{{ end -}}
{{- if .Get "short" }}1.15{{ else if .Get "long" }}1.15.0{{ else if .Get "cli" }}1.15.0{{ else }}1.15.0{{ end -}}

@ -720,9 +720,15 @@
  }
},
"node_modules/nanoid": {
  "version": "3.3.2",
  "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.2.tgz",
  "integrity": "sha512-CuHBogktKwpm5g2sRgv83jEy2ijFzBwMoYA60orPDR7ynsLijJDqgsi4RDGj3OJpy3Ieb+LYwiRmIOGyytgITA==",
  "version": "3.3.8",
  "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.8.tgz",
  "integrity": "sha512-WNLf5Sd8oZxOm+TzppcYk8gVOgP+l58xNy58D0nbUnOxOWRWvlcCV4kUF7ltmI6PsrLl/BgKEyS4mqsGChFN0w==",
  "funding": [
    {
      "type": "github",
      "url": "https://github.com/sponsors/ai"
    }
  ],
  "bin": {
    "nanoid": "bin/nanoid.cjs"
  },

@ -19,8 +19,8 @@ data:
      zpages:
        endpoint: :55679
    exporters:
      logging:
        loglevel: debug
      debug:
        verbosity: detailed
    # Depending on where you want to export your trace, use the
    # correct OpenTelemetry trace exporter here.
    #

Before Width: | Height: | Size: 203 KiB After Width: | Height: | Size: 178 KiB |
After Width: | Height: | Size: 44 KiB |
Before Width: | Height: | Size: 170 KiB After Width: | Height: | Size: 154 KiB |
Before Width: | Height: | Size: 218 KiB After Width: | Height: | Size: 222 KiB |
Before Width: | Height: | Size: 272 KiB After Width: | Height: | Size: 210 KiB |
Before Width: | Height: | Size: 191 KiB After Width: | Height: | Size: 148 KiB |
After Width: | Height: | Size: 246 KiB |
Before Width: | Height: | Size: 65 KiB After Width: | Height: | Size: 192 KiB |

@ -1 +1 @@
Subproject commit 03038fa519670b583eabcef1417eacd55c3e44c8
Subproject commit 52f0851780202f71ac4c7fbbcd5c5fb7d674db5a

@ -1 +1 @@
Subproject commit dd9a2d5a3c4481b8a6bda032df8f44f5eaedb370
Subproject commit c81a381811fbd24b038319bbec07b60c215f8e63