Merge branch 'v1.15' into trace-info-propagation

This commit is contained in:
Hannah Hunter 2025-01-13 15:44:46 -05:00 committed by GitHub
commit 02cc486038
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
151 changed files with 2951 additions and 632 deletions

View File

@ -13,7 +13,7 @@ jobs:
validate:
runs-on: ubuntu-latest
env:
PYTHON_VER: 3.7
PYTHON_VER: 3.12
steps:
- uses: actions/checkout@v2
- name: Check Microsoft URLs do not pin localized versions
@ -27,7 +27,7 @@ jobs:
exit 1
fi
- name: Set up Python ${{ env.PYTHON_VER }}
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VER }}
- name: Install dependencies

View File

@ -87,6 +87,13 @@ you tackle the challenges that come with building microservices and keeps your c
<a href="{{< ref contributing >}}" class="stretched-link"></a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title"><b>Roadmap</b></h5>
<p class="card-text">Learn about Dapr's roadmap and change process.</p>
<a href="{{< ref roadmap.md >}}" class="stretched-link"></a>
</div>
</div>
</div>

View File

@ -22,7 +22,7 @@ Dapr provides the following building blocks:
|----------------|----------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-beta1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
@ -31,3 +31,4 @@ Dapr provides the following building blocks:
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and provide consistency guarantees.
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | `/v1.0-alpha1/crypto` | The Cryptography API enables you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your application.
| [**Jobs**]({{< ref "jobs-overview.md" >}}) | `/v1.0-alpha1/jobs` | The Jobs API enables you to schedule and orchestrate jobs. Example scenarios include: <ul><li>Schedule batch processing jobs to run every business day</li><li>Schedule various maintenance scripts to perform clean-ups</li><li>Schedule ETL jobs to run at specific times (hourly, daily) to fetch new data, process it, and update the data warehouse with the latest information.</li></ul>
| [**Conversation**]({{< ref "conversation-overview.md" >}}) | `/v1.0-alpha1/conversation` | The Conversation API enables you to supply prompts to converse with different large language models (LLMs) and includes features such as prompt caching and personally identifiable information (PII) obfuscation.

View File

@ -122,11 +122,18 @@ Lock components are used as a distributed lock to provide mutually exclusive acc
### Cryptography
[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform crypographic operations, including encrypting and decrypting messages, without exposing keys to your application.
[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform cryptographic operations, including encrypting and decrypting messages, without exposing keys to your application.
- [List of supported cryptography components]({{< ref supported-cryptography >}})
- [Cryptography implementations](https://github.com/dapr/components-contrib/tree/master/crypto)
### Conversation
Dapr provides developers a way to abstract interactions with large language models (LLMs) with built-in security and reliability features. Use [conversation]({{< ref conversation-overview.md >}}) components to send prompts to different LLMs, along with the conversation context.
- [List of supported conversation components]({{< ref supported-conversation >}})
- [Conversation implementations](https://github.com/dapr/components-contrib/tree/main/conversation)
### Middleware
Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into the HTTP request processing pipeline. Middleware can perform additional actions on an HTTP request (such as authentication, encryption, and message transformation) before the request is routed to the user code, or the response is returned to the client. The middleware components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block.
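As a sketch of how middleware is wired up (assuming the built-in `uppercase` middleware component; the configuration and handler names are illustrative), handlers are added to the sidecar's HTTP pipeline through the application's Configuration resource:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: uppercase
      type: middleware.http.uppercase
```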
@ -136,4 +143,4 @@ Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into
{{% alert title="Note" color="primary" %}}
Since pluggable components are not required to be written in Go, they follow a different implementation process than built-in Dapr components. For more information on developing built-in components, read [developing new components](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md).
{{% /alert %}}
{{% /alert %}}

View File

@ -6,9 +6,13 @@ weight: 400
description: "Change the behavior of Dapr application sidecars or globally on Dapr control plane system services"
---
Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications, or the global behavior of the Dapr control plane system services. For example, you can set an ACL policy on the application sidecar configuration which indicates which methods can be called from another application, or on the Dapr control plane configuration you can change the certificate renewal period for all certificates that are deployed to application sidecar instances.
With Dapr configurations, you use settings and policies to change:
- The behavior of individual Dapr applications
- The global behavior of the Dapr control plane system services
Configurations are defined and deployed as a YAML file. An application configuration example is shown below, which demonstrates an example of setting a tracing endpoint for where to send the metrics information, capturing all the sample traces.
For example, set an access control list (ACL) policy on the application sidecar configuration to indicate which methods can be called from another application. If you set a policy on the Dapr control plane configuration, you can change the certificate renewal period for all certificates that are deployed to application sidecar instances.
Configurations are defined and deployed as a YAML file. In the following application configuration example, a tracing endpoint is set to specify where to send the metrics information, capturing all sample traces.
```yaml
apiVersion: dapr.io/v1alpha1
@ -23,9 +27,11 @@ spec:
endpointAddress: "http://localhost:9411/api/v2/spans"
```
This configuration configures tracing for metrics recording. It can be loaded in local self-hosted mode by editing the default configuration file called `config.yaml` file in your `.dapr` directory, or by applying it to your Kubernetes cluster with kubectl/helm.
The above YAML configures tracing for metrics recording. You can load it in local self-hosted mode by either:
- Editing the default configuration file, `config.yaml`, in your `.dapr` directory, or
- Applying it to your Kubernetes cluster with `kubectl` or Helm, as shown below.
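For example, on Kubernetes the configuration can be applied to the cluster directly (the file name is illustrative):
```bash
kubectl apply -f ./appconfig.yaml
```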
Here is an example of the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace.
The following example shows the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace.
```yaml
apiVersion: dapr.io/v1alpha1
@ -40,8 +46,14 @@ spec:
allowedClockSkew: "15m"
```
Visit [overview of Dapr configuration options]({{<ref "configuration-overview.md">}}) for a list of the configuration options.
By default, there is a single configuration file called `daprsystem` installed with the Dapr control plane system services. This configuration file applies global control plane settings and is set up when Dapr is deployed to Kubernetes.
{{% alert title="Note" color="primary" %}}
Dapr application and control plane configurations should not be confused with the configuration building block API that enables applications to retrieve key/value data from configuration store components. Read the [Configuration building block]({{< ref configuration-api-overview >}}) for more information.
[Learn more about configuration options.]({{< ref "configuration-overview.md" >}})
{{% alert title="Important" color="warning" %}}
Dapr application and control plane configurations should not be confused with the [configuration building block API]({{< ref configuration-api-overview >}}), which enables applications to retrieve key/value data from configuration store components.
{{% /alert %}}
## Next steps
{{< button text="Learn more about configuration" page="configuration-overview" >}}

View File

@ -13,7 +13,9 @@ The Placement service Docker container is started automatically as part of [`dap
## Kubernetes mode
The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Placement in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
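As a sketch, assuming the chart's global HA setting described in the production guidance (release and namespace names are illustrative), HA can be enabled with Helm:
```bash
helm upgrade dapr dapr/dapr --namespace dapr-system --set global.ha.enabled=true --wait
```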
## Placement tables

View File

@ -11,13 +11,21 @@ The diagram below shows how the Scheduler service is used via the jobs API when
<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
## Actor reminders
Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
When you deploy Dapr v1.15, any _existing_ actor reminders are migrated from the Placement service to the Scheduler service as a one-time operation for each actor type. You can prevent this migration by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
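For example, a Dapr Configuration for the actor application could disable the flag like this (a sketch; the configuration name is illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: actorappconfig
spec:
  features:
  - name: SchedulerReminders
    enabled: false
```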
## Self-hosted mode
The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
## Kubernetes mode
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
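A minimal sketch of enabling HA for the Scheduler with Helm, assuming the per-service `ha` value referenced in the linked production guidance is available in your chart version (release and namespace names are illustrative):
```bash
helm upgrade dapr dapr/dapr --namespace dapr-system --set dapr_scheduler.ha=true --wait
```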
## Related links

View File

@ -55,6 +55,7 @@ Each of these building block APIs is independent, meaning that you can use any n
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | The distributed lock API enables your application to acquire a lock for any resource that gives it exclusive access until either the lock is released by the application, or a lease timeout occurs.
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | The cryptography API provides an abstraction layer on top of security infrastructure such as key vaults. It contains APIs that allow you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your applications.
| [**Jobs**]({{< ref "jobs-overview.md" >}}) | The jobs API enables you to schedule jobs at specific times or intervals.
| [**Conversation**]({{< ref "conversation-overview.md" >}}) | The conversation API enables you to abstract the complexities of interacting with large language models (LLMs) and includes features such as prompt caching and personally identifiable information (PII) obfuscation. Using [conversation components]({{< ref supported-conversation >}}), you can supply prompts to converse with different LLMs.
### Cross-cutting APIs
@ -108,7 +109,7 @@ Deploying and running a Dapr-enabled application into your Kubernetes cluster is
### Clusters of physical or virtual machines
The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses [Hashicorp Consul service]({{< ref setup-nr-consul >}}), also running in HA mode.
The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses multicast DNS by default, but can also optionally support [Hashicorp Consul service]({{< ref setup-nr-consul >}}).
<img src="/images/overview-vms-hosting.png" width=1200 alt="Architecture diagram of Dapr control plane and Consul deployed to VMs in high availability mode">

View File

@ -18,7 +18,7 @@ See the [Dapr community repository](https://github.com/dapr/community) for more
1. **Docs**: This [repository](https://github.com/dapr/docs) contains the documentation for Dapr. You can contribute by updating existing documentation, fixing errors, or adding new content to improve user experience and clarity. Please see the specific guidelines for [docs contributions]({{< ref contributing-docs >}}).
2. **Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. Contributions in this repository involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.
2. **Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. [Contributions in this repository](https://github.com/dapr/quickstarts/blob/master/CONTRIBUTING.md) involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.
3. **Runtime**: The Dapr runtime [repository](https://github.com/dapr/dapr) houses the core runtime components. Here, you can contribute by fixing bugs, optimizing performance, implementing new features, or enhancing existing ones.

View File

@ -2,7 +2,7 @@
type: docs
title: "Dapr bot reference"
linkTitle: "Dapr bot"
weight: 15
weight: 70
description: "List of Dapr bot capabilities."
---

View File

@ -41,15 +41,18 @@ Style and tone conventions should be followed throughout all Dapr documentation
## Diagrams and images
Diagrams and images are invaluable visual aids for documentation pages. Diagrams are kept in a [Dapr Diagrams Deck](https://github.com/dapr/docs/tree/v1.11/daprdocs/static/presentations), which includes guidance on style and icons.
Diagrams and images are invaluable visual aids for documentation pages. Use the diagram style and icons in the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations).
As you create diagrams for your documentation:
The process for creating diagrams for your documentation:
- Save them as high-res PNG files into the [images folder](https://github.com/dapr/docs/tree/v1.11/daprdocs/static/images).
- Name your PNG files using the convention of a concept or building block so that they are grouped.
1. Download the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations) to use the icons and colors.
1. Add a new slide and create your diagram.
1. Screen capture the diagram as high-res PNG file and save in the [images folder](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/images).
1. Name your PNG files using the convention of a concept or building block so that they are grouped.
- For example: `service-invocation-overview.png`.
- For more information on calling out images using shortcode, see the [Images guidance](#images) section below.
- Add the diagram to the correct section in the `Dapr-Diagrams.pptx` deck so that they can be amended and updated during routine refresh.
1. Add the diagram to the appropriate section in your documentation using the HTML `<img>` tag, as shown in the example after this list.
1. In your PR, include the diagram slide (not just the screen capture) in a comment so it can be reviewed and added to the diagram deck by maintainers.
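For example, a diagram saved as described above might be embedded with an image tag like those used throughout the docs (the file name, width, and alt text are illustrative):
```html
<img src="/images/service-invocation-overview.png" width=1000 alt="Diagram showing an overview of service invocation">
```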
## Contributing a new docs page

View File

@ -2,47 +2,9 @@
type: docs
title: "Dapr Roadmap"
linkTitle: "Roadmap"
description: "The Dapr Roadmap is a tool to help with visibility into investments across the Dapr project"
description: "The Dapr Roadmap gives the community visibility into the different priorities of the projecs"
weight: 30
no_list: true
---
Dapr encourages the community to help with prioritization. A GitHub project board is available to view and provide feedback on proposed issues and track them across development.
[<img src="/images/roadmap.png" alt="Screenshot of the Dapr Roadmap board" width=500 >](https://aka.ms/dapr/roadmap)
{{< button text="View the backlog" link="https://aka.ms/dapr/roadmap" color="primary" >}}
<br />
Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see Dapr support. This will help the Dapr maintainers understand which features will provide the most value.
Contributions from the community are also welcome. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal.
{{% alert title="Note" color="primary" %}}
The Dapr roadmap includes issues only from the v1.2 release and onwards. Issues closed and released prior to v1.2 are not included.
{{% /alert %}}
## Stages
The Dapr Roadmap progresses through the following stages:
{{< cardpane >}}
{{< card title="**[📄 Backlog](https://github.com/orgs/dapr/projects/52#column-14691591)**" >}}
Issues (features) that need 👍 votes from the community to prioritize. Updated by Dapr maintainers.
{{< /card >}}
{{< card title="**[⏳ Planned (Committed)](https://github.com/orgs/dapr/projects/52#column-14561691)**" >}}
Issues with a proposal and/or targeted release milestone. This is where design proposals are discussed and designed.
{{< /card >}}
{{< card title="**[👩‍💻 In Progress (Development)](https://github.com/orgs/dapr/projects/52#column-14561696)**" >}}
Implementation specifics have been agreed upon and the feature is under active development.
{{< /card >}}
{{< /cardpane >}}
{{< cardpane >}}
{{< card title="**[☑ Done](https://github.com/orgs/dapr/projects/52#column-14561700)**" >}}
The feature capability has been completed and is scheduled for an upcoming release.
{{< /card >}}
{{< card title="**[✅ Released](https://github.com/orgs/dapr/projects/52#column-14659973)**" >}}
The feature is released and available for use.
{{< /card >}}
{{< /cardpane >}}
See [this document](https://github.com/dapr/community/blob/master/roadmap.md) to view the Dapr project's roadmap.

View File

@ -104,7 +104,7 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a
### State
Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
Transactional state stores can be used to store actor state. Regardless of whether you intend to store any state in your actor, you must set the `actorStateStore` property to `true` in the state store component's metadata section. Actor state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
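For example, a state store component enabled for actors sets `actorStateStore` in its metadata. The following is a sketch using Redis; the connection values are illustrative:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: actorStateStore
    value: "true"
```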
### Actor timers and reminders

View File

@ -107,6 +107,10 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
## Actor reminders
{{% alert title="Note" color="primary" %}}
In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}).
{{% /alert %}}
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers, but unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by calling the HTTP/gRPC request to Dapr as shown below, or via Dapr SDK.
@ -148,7 +152,9 @@ If an invocation of the method fails, the timer is not removed. Timers are only
## Reminder data serialization format
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for reminders data which, depending on throughput and size of the payload, can result in significant performance improvements, giving developers a higher throughput and lower latency. Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for internal workflow reminder data via both the Placement and Scheduler services. Depending on the throughput and size of the payload, this can result in significant performance improvements, giving developers higher throughput and lower latency.
Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
{{% alert title="Note" color="primary" %}}
Protobuf serialization will become the default format in Dapr 1.14

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Conversation"
linkTitle: "Conversation"
weight: 130
description: "Utilize prompts with Large Language Models (LLMs)"
---

View File

@ -0,0 +1,43 @@
---
type: docs
title: "Conversation overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of the conversation API building block"
---
{{% alert title="Alpha" color="primary" %}}
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
{{% /alert %}}
Using the Dapr conversation API, you can reduce the complexity of interacting with Large Language Models (LLMs) and enable critical performance and security functionality with features like prompt caching and personally identifiable information (PII) data obfuscation.
## Features
### Prompt caching
To significantly reduce latency and cost, frequent prompts are stored in a cache to be reused, instead of reprocessing the information for every new request. Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls.
### Personally identifiable information (PII) obfuscation
The PII obfuscation feature identifies and removes any PII from a conversation response. This feature protects your privacy by eliminating sensitive details like names, addresses, phone numbers, or other details that could be used to identify an individual.
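As a sketch of how an application might request PII scrubbing (assuming the optional `ScrubPII` field shown in the Go example of the how-to guide is a boolean pointer; the component name `echo` and the message are illustrative):
```go
package main

import (
	"context"
	"fmt"
	"log"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Ask the conversation component to obfuscate PII in this input
	// (ScrubPII is assumed to be an optional *bool field).
	scrub := true
	input := dapr.ConversationInput{
		Message:  "Call me at 555-0100 to confirm the order",
		ScrubPII: &scrub,
	}

	resp, err := client.ConverseAlpha1(context.Background(),
		dapr.NewConversationRequest("echo", []dapr.ConversationInput{input}))
	if err != nil {
		log.Fatalf("conversation failed: %v", err)
	}
	fmt.Println(resp.Outputs[0].Result)
}
```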
## Try out conversation
### Quickstarts and tutorials
Want to put the Dapr conversation API to the test? Walk through the following quickstart and tutorials to see it in action:
| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Conversation quickstart](todo) | . |
### Start using the conversation API directly in your app
Want to skip the quickstarts? Not a problem. You can try out the conversation building block directly in your application. After [Dapr is installed]({{< ref "getting-started/_index.md" >}}), you can begin using the conversation API starting with [the how-to guide]({{< ref howto-conversation-layer.md >}}).
## Next steps
- [How-To: Converse with an LLM using the conversation API]({{< ref howto-conversation-layer.md >}})
- [Conversation API components]({{< ref supported-conversation >}})

View File

@ -0,0 +1,239 @@
---
type: docs
title: "How-To: Converse with an LLM using the conversation API"
linkTitle: "How-To: Converse"
weight: 2000
description: "Learn how to abstract the complexities of interacting with large language models"
---
{{% alert title="Alpha" color="primary" %}}
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
{{% /alert %}}
Let's get started using the [conversation API]({{< ref conversation-overview.md >}}). In this guide, you'll learn how to:
- Set up one of the available Dapr components (echo) that work with the conversation API.
- Add the conversation client to your application.
- Run the connection using `dapr run`.
## Set up the conversation component
Create a new configuration file called `conversation.yaml` and save to a components or config sub-folder in your application directory.
Select your [preferred conversation component spec]({{< ref supported-conversation >}}) for your `conversation.yaml` file.
For this scenario, we use a simple echo component.
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: echo
spec:
type: conversation.echo
version: v1
```
## Connect the conversation client
The following examples use an HTTP client to send a POST request to Dapr's sidecar HTTP endpoint. You can also use [the Dapr SDK client instead]({{< ref "#related-links" >}}).
{{< tabs ".NET" "Go" "Rust" >}}
<!-- .NET -->
{{% codetab %}}
```csharp
using Dapr.AI.Conversation;
using Dapr.AI.Conversation.Extensions;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprConversationClient();
var app = builder.Build();
var conversationClient = app.Services.GetRequiredService<DaprConversationClient>();
var response = await conversationClient.ConverseAsync("conversation",
new List<DaprConversationInput>
{
new DaprConversationInput(
"Please write a witty haiku about the Dapr distributed programming framework at dapr.io",
DaprConversationRole.Generic)
});
Console.WriteLine("Received the following from the LLM:");
foreach (var resp in response.Outputs)
{
Console.WriteLine($"\t{resp.Result}");
}
```
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
```go
package main
import (
"context"
"fmt"
dapr "github.com/dapr/go-sdk/client"
"log"
)
func main() {
client, err := dapr.NewClient()
if err != nil {
panic(err)
}
input := dapr.ConversationInput{
Message: "hello world",
// Role: nil, // Optional
// ScrubPII: nil, // Optional
}
fmt.Printf("conversation input: %s\n", input.Message)
var conversationComponent = "echo"
request := dapr.NewConversationRequest(conversationComponent, []dapr.ConversationInput{input})
resp, err := client.ConverseAlpha1(context.Background(), request)
if err != nil {
log.Fatalf("err: %v", err)
}
fmt.Printf("conversation output: %s\n", resp.Outputs[0].Result)
}
```
{{% /codetab %}}
<!-- Rust -->
{{% codetab %}}
```rust
use dapr::client::{ConversationInputBuilder, ConversationRequestBuilder};
use std::thread;
use std::time::Duration;
type DaprClient = dapr::Client<dapr::client::TonicClient>;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Sleep to allow for the server to become available
thread::sleep(Duration::from_secs(5));
// Set the Dapr address
let address = "https://127.0.0.1".to_string();
let mut client = DaprClient::connect(address).await?;
let input = ConversationInputBuilder::new("hello world").build();
let conversation_component = "echo";
let request =
ConversationRequestBuilder::new(conversation_component, vec![input.clone()]).build();
println!("conversation input: {:?}", input.message);
let response = client.converse_alpha1(request).await?;
println!("conversation output: {:?}", response.outputs[0].result);
Ok(())
}
```
{{% /codetab %}}
{{< /tabs >}}
## Run the conversation connection
Start the connection using the `dapr run` command. For example, for this scenario, we're running `dapr run` on an application with the app ID `conversation` and pointing to our conversation YAML file in the `./config` directory.
{{< tabs ".NET" "Go" "Rust" >}}
<!-- .NET -->
{{% codetab %}}
```bash
dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- dotnet run
```
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
```bash
dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- go run ./main.go
```
**Expected output**
```
- '== APP == conversation output: hello world'
```
{{% /codetab %}}
<!-- Rust -->
{{% codetab %}}
```bash
dapr run --app-id=conversation --resources-path ./config --dapr-grpc-port 3500 -- cargo run --example conversation
```
**Expected output**
```
- 'conversation input: hello world'
- 'conversation output: hello world'
```
{{% /codetab %}}
{{< /tabs >}}
## Related links
Try out the conversation API using the full examples provided in the supported SDK repos.
{{< tabs ".NET" "Go" "Rust" >}}
<!-- .NET -->
{{% codetab %}}
[Dapr conversation example with the .NET SDK](https://github.com/dapr/dotnet-sdk/tree/master/examples/AI/ConversationalAI)
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
[Dapr conversation example with the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/conversation)
{{% /codetab %}}
<!-- Rust -->
{{% codetab %}}
[Dapr conversation example with the Rust SDK](https://github.com/dapr/rust-sdk/tree/main/examples/src/conversation)
{{% /codetab %}}
{{< /tabs >}}
## Next steps
- [Conversation API reference guide]({{< ref conversation_api.md >}})
- [Available conversation components]({{< ref supported-conversation >}})

View File

@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul
At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example:
#### HTTP
When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at
`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
triggered. For example:
*Note: The following example is in Go but applies to any programming language.*
```go
func main() {
...
http.HandleFunc("/job/", handleJob)
http.HandleFunc("/job/<job-name>", specificJob)
...
}
func specificJob(w http.ResponseWriter, r *http.Request) {
// Handle specific triggered job
}
func handleJob(w http.ResponseWriter, r *http.Request) {
// Handle the triggered jobs
}
```
#### gRPC
When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
callback function:
*Note: The following example is in Go but applies to any programming language with gRPC support.*
```go
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
...
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
// Handle the triggered job
}
```
This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
you register the callback server, which will invoke this function when a job is triggered:
```go
...
js := &JobService{}
rtv1.RegisterAppCallbackAlphaServer(server, js)
```
In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
through this gRPC method.
#### SDKs
For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the
event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this:
```go
...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
log.Fatalf("failed to register job event handler: %v", err)
}
```
Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with
the triggered job data. Here's an example of handling the triggered job:
```go
// ...
@ -103,11 +172,9 @@ func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
if err := json.Unmarshal(job.Data, &jobData); err != nil {
// ...
}
decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
// ...
var jobPayload api.DBBackup
if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
// ...
}
fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
@ -146,4 +213,4 @@ dapr run --app-id=distributed-scheduler \
## Next steps
- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

View File

@ -59,10 +59,6 @@ The jobs API provides several features to make it easy for you to schedule jobs.
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 scheduler service instance.
### Actor reminders
Actors have actor reminders, but present some limitations involving scalability using the Placement service implementation. You can make reminders more scalable by using [`SchedulerReminders`]({{< ref support-preview-features.md >}}). This is set in the configuration for your actor application.
## Try out the jobs API
You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-and-handle-triggered-jobs.md >}}).

View File

@ -336,14 +336,13 @@ Status | Description
`RETRY` | Message to be retried by Dapr
`DROP` | Warning is logged and message is dropped
Please refer [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on response.
Refer to [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on the response.
### Example
Please refer following code samples for how to use Bulk Subscribe:
{{< tabs "Java" "JavaScript" ".NET" >}}
The following code examples demonstrate how to use Bulk Subscribe.
{{< tabs "Java" "JavaScript" ".NET" "Python" >}}
{{% codetab %}}
```java
@ -471,7 +470,50 @@ public class BulkMessageController : ControllerBase
{{% /codetab %}}
{{% codetab %}}
Currently, you can only bulk subscribe in Python using an HTTP client.
```python
import json
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
# Define the bulk subscribe configuration
subscriptions = [{
"pubsubname": "pubsub",
"topic": "TOPIC_A",
"route": "/checkout",
"bulkSubscribe": {
"enabled": True,
"maxMessagesCount": 3,
"maxAwaitDurationMs": 40
}
}]
print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions))
return jsonify(subscriptions)
# Define the endpoint to handle incoming messages
@app.route('/checkout', methods=['POST'])
def checkout():
messages = request.json
print(messages)
for message in messages:
print(f"Received message: {message}")
return json.dumps({'success': True}), 200, {'ContentType': 'application/json'}
if __name__ == '__main__':
app.run(port=5000)
```
{{% /codetab %}}
{{< /tabs >}}
## How components handle publishing and subscribing to bulk messages
For event publish/subscribe, two kinds of network transfers are involved.

View File

@ -37,17 +37,16 @@ metadata:
spec:
topic: orders
routes:
default: /checkout
default: /orders
pubsubname: pubsub
scopes:
- orderprocessing
- checkout
```
Here the subscription called `order`:
- Uses the pub/sub component called `pubsub` to subscribe to the topic called `orders`.
- Sets the `route` field to send all topic messages to the `/checkout` endpoint in the app.
- Sets `scopes` field to scope this subscription for access only by apps with IDs `orderprocessing` and `checkout`.
- Sets the `route` field to send all topic messages to the `/orders` endpoint in the app.
- Sets `scopes` field to scope this subscription for access only by apps with ID `orderprocessing`.
When running Dapr, set the YAML component file path to point Dapr to the component.
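For example, a hedged sketch of running an app locally while pointing the sidecar at the folder that contains the subscription YAML (the app ID, folder, and app command are illustrative):
```bash
dapr run --app-id orderprocessing --resources-path ./components -- python3 app.py
```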
@ -113,7 +112,7 @@ In your application code, subscribe to the topic specified in the Dapr pub/sub c
```csharp
//Subscribe to a topic
[HttpPost("checkout")]
[HttpPost("orders")]
public void getCheckout([FromBody] int orderId)
{
Console.WriteLine("Subscriber received : " + orderId);
@ -128,7 +127,7 @@ public void getCheckout([FromBody] int orderId)
import io.dapr.client.domain.CloudEvent;
//Subscribe to a topic
@PostMapping(path = "/checkout")
@PostMapping(path = "/orders")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
@ -146,7 +145,7 @@ public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String>
from cloudevents.sdk.event import v1
#Subscribe to a topic
@app.route('/checkout', methods=['POST'])
@app.route('/orders', methods=['POST'])
def checkout(event: v1.Event) -> None:
data = json.loads(event.Data())
logging.info('Subscriber received: ' + str(data))
@ -163,7 +162,7 @@ const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
// listen to the declarative route
app.post('/checkout', (req, res) => {
app.post('/orders', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
@ -178,7 +177,7 @@ app.post('/checkout', (req, res) => {
var sub = &common.Subscription{
PubsubName: "pubsub",
Topic: "orders",
Route: "/checkout",
Route: "/orders",
}
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {
@ -191,7 +190,7 @@ func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err er
{{< /tabs >}}
The `/checkout` endpoint matches the `route` defined in the subscriptions and this is where Dapr sends all topic messages to.
The `/orders` endpoint matches the `route` defined in the subscriptions and this is where Dapr sends all topic messages to.
### Streaming subscriptions
@ -204,7 +203,112 @@ As messages are sent to the given message handler code, there is no concept of r
The example below shows the different ways to stream subscribe to a topic.
{{< tabs Go>}}
{{< tabs Python Go >}}
{{% codetab %}}
You can use the `subscribe` method, which returns a `Subscription` object and allows you to pull messages from the stream by calling the `next_message` method. This runs in the main thread and may block it while waiting for messages.
```python
import time
from dapr.clients import DaprClient
from dapr.clients.grpc.subscription import StreamInactiveError
counter = 0
def process_message(message):
global counter
counter += 1
# Process the message here
print(f'Processing message: {message.data()} from {message.topic()}...')
return 'success'
def main():
with DaprClient() as client:
global counter
subscription = client.subscribe(
pubsub_name='pubsub', topic='orders', dead_letter_topic='orders_dead'
)
try:
while counter < 5:
try:
message = subscription.next_message()
except StreamInactiveError as e:
print('Stream is inactive. Retrying...')
time.sleep(1)
continue
if message is None:
print('No message received within timeout period.')
continue
# Process the message
response_status = process_message(message)
if response_status == 'success':
subscription.respond_success(message)
elif response_status == 'retry':
subscription.respond_retry(message)
elif response_status == 'drop':
subscription.respond_drop(message)
finally:
print("Closing subscription...")
subscription.close()
if __name__ == '__main__':
main()
```
You can also use the `subscribe_with_handler` method, which accepts a callback function executed for each message received from the stream. This runs in a separate thread, so it doesn't block the main thread.
```python
import time
from dapr.clients import DaprClient
from dapr.clients.grpc._response import TopicEventResponse
counter = 0
def process_message(message):
# Process the message here
global counter
counter += 1
print(f'Processing message: {message.data()} from {message.topic()}...')
return TopicEventResponse('success')
def main():
with DaprClient() as client:
# This will start a new thread that will listen for messages
# and process them in the `process_message` function
close_fn = client.subscribe_with_handler(
pubsub_name='pubsub', topic='orders', handler_fn=process_message,
dead_letter_topic='orders_dead'
)
while counter < 5:
time.sleep(1)
print("Closing subscription...")
close_fn()
if __name__ == '__main__':
main()
```
[Learn more about streaming subscriptions using the Python SDK client.]({{< ref "python-client.md#streaming-message-subscription" >}})
{{% /codetab %}}
{{% codetab %}}
@ -325,7 +429,7 @@ In the example below, you define the values found in the [declarative YAML subsc
```csharp
[Topic("pubsub", "orders")]
[HttpPost("/checkout")]
[HttpPost("/orders")]
public async Task<ActionResult<Order>>Checkout(Order order, [FromServices] DaprClient daprClient)
{
// Logic
@ -337,7 +441,7 @@ or
```csharp
// Dapr subscription in [Topic] routes orders topic to this route
app.MapPost("/checkout", [Topic("pubsub", "orders")] (Order order) => {
app.MapPost("/orders", [Topic("pubsub", "orders")] (Order order) => {
Console.WriteLine("Subscriber received : " + order);
return Results.Ok(order);
});
@ -359,7 +463,7 @@ app.UseEndpoints(endpoints =>
```java
private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@Topic(name = "checkout", pubsubName = "pubsub")
@Topic(name = "orders", pubsubName = "pubsub")
@PostMapping(path = "/orders")
public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
@ -370,6 +474,7 @@ public Mono<Void> handleMessage(@RequestBody(required = false) CloudEvent<String
throw new RuntimeException(e);
}
});
}
```
{{% /codetab %}}
@ -382,7 +487,7 @@ def subscribe():
subscriptions = [
{
'pubsubname': 'pubsub',
'topic': 'checkout',
'topic': 'orders',
'routes': {
'rules': [
{
@ -418,7 +523,7 @@ app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "checkout",
topic: "orders",
routes: {
rules: [
{
@ -480,7 +585,7 @@ func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
t := []subscription{
{
PubsubName: "pubsub",
Topic: "checkout",
Topic: "orders",
Routes: routes{
Rules: []rule{
{

View File

@ -38,11 +38,9 @@ The diagram below is an overview of how Dapr's service invocation works when inv
<img src="/images/service-invocation-overview-non-dapr-endpoint.png" width=800 alt="Diagram showing the steps of service invocation to non-Dapr endpoints">
1. Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL.
3. Dapr forwards the message to Service B.
4. Service B runs its business logic code.
5. Service B sends a response to Service A's Dapr sidecar.
6. Service A receives the response.
2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL then forwards the message to Service B.
3. Service B sends a response to Service A's Dapr sidecar.
4. Service A receives the response.
## Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints
There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following:

View File

@ -10,8 +10,6 @@ State management is one of the most common needs of any new, legacy, monolith, o
In this guide, you'll learn the basics of using the key/value state API to allow an application to save, get, and delete state.
## Example
The code example below _loosely_ describes an application that processes orders with an order processing service which has a Dapr sidecar. The order processing service uses Dapr to store state in a Redis state store.
<img src="/images/building-block-state-management-example.png" width=1000 alt="Diagram showing state management of example service">
@ -554,7 +552,7 @@ namespace EventService
string DAPR_STORE_NAME = "statestore";
//Using Dapr SDK to retrieve multiple states
using var client = new DaprClientBuilder().Build();
IReadOnlyList<BulkStateItem> mulitpleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1);
IReadOnlyList<BulkStateItem> multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1);
}
}
}

View File

@ -6,10 +6,6 @@ weight: 5000
description: "Learn how to develop and author workflows"
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
This article provides a high-level overview of how to author workflows that are executed by the Dapr Workflow engine.
{{% alert title="Note" color="primary" %}}
@ -821,7 +817,7 @@ func main() {
ctx := context.Background()
// Start workflow test
respStart, err := daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{
respStart, err := daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
WorkflowName: "TestWorkflow",
@ -835,7 +831,7 @@ func main() {
fmt.Printf("workflow started with id: %v\n", respStart.InstanceID)
// Pause workflow test
err = daprClient.PauseWorkflowBeta1(ctx, &client.PauseWorkflowRequest{
err = daprClient.PauseWorkflow(ctx, &client.PauseWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -844,7 +840,7 @@ func main() {
log.Fatalf("failed to pause workflow: %v", err)
}
respGet, err := daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
respGet, err := daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -859,7 +855,7 @@ func main() {
fmt.Printf("workflow paused\n")
// Resume workflow test
err = daprClient.ResumeWorkflowBeta1(ctx, &client.ResumeWorkflowRequest{
err = daprClient.ResumeWorkflow(ctx, &client.ResumeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -868,7 +864,7 @@ func main() {
log.Fatalf("failed to resume workflow: %v", err)
}
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -886,7 +882,7 @@ func main() {
// Raise Event Test
err = daprClient.RaiseEventWorkflowBeta1(ctx, &client.RaiseEventWorkflowRequest{
err = daprClient.RaiseEventWorkflow(ctx, &client.RaiseEventWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
EventName: "testEvent",
@ -904,7 +900,7 @@ func main() {
fmt.Printf("stage: %d\n", stage)
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -915,7 +911,7 @@ func main() {
fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus)
// Purge workflow test
err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -923,7 +919,7 @@ func main() {
log.Fatalf("failed to purge workflow: %v", err)
}
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -936,7 +932,7 @@ func main() {
fmt.Printf("stage: %d\n", stage)
// Terminate workflow test
respStart, err = daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{
respStart, err = daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
WorkflowName: "TestWorkflow",
@ -950,7 +946,7 @@ func main() {
fmt.Printf("workflow started with id: %s\n", respStart.InstanceID)
err = daprClient.TerminateWorkflowBeta1(ctx, &client.TerminateWorkflowRequest{
err = daprClient.TerminateWorkflow(ctx, &client.TerminateWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -958,7 +954,7 @@ func main() {
log.Fatalf("failed to terminate workflow: %v", err)
}
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
@ -971,12 +967,12 @@ func main() {
fmt.Println("workflow terminated")
err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{
err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})
respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{
respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{
InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9",
WorkflowComponent: workflowComponent,
})

View File

@ -6,10 +6,6 @@ weight: 6000
description: Manage and run workflows
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
Now that you've [authored the workflow and its activities in your application]({{< ref howto-author-workflow.md >}}), you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}).
{{< tabs Python JavaScript ".NET" Java Go HTTP >}}
@ -324,7 +320,7 @@ Manage your workflow using HTTP calls. The example below plugs in the properties
To start your workflow with an ID `12345678`, run:
```http
POST http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
```
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
@ -334,7 +330,7 @@ Note that workflow instance IDs can only contain alphanumeric characters, unders
To terminate your workflow with an ID `12345678`, run:
```http
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate
POST http://localhost:3500/v1.0/workflows/dapr/12345678/terminate
```
### Raise an event
@ -342,7 +338,7 @@ POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.
```http
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
```
> An `eventName` can be any function.
@ -352,13 +348,13 @@ POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanc
To plan for down-time, wait for inputs, and more, you can pause and then resume a workflow. To pause a workflow with an ID `12345678` until triggered to resume, run:
```http
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/pause
POST http://localhost:3500/v1.0/workflows/dapr/12345678/pause
```
To resume a workflow with an ID `12345678`, run:
```http
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/resume
POST http://localhost:3500/v1.0/workflows/dapr/12345678/resume
```
### Purge a workflow
@ -368,7 +364,7 @@ The purge API can be used to permanently delete workflow metadata from the under
Only workflow instances in the COMPLETED, FAILED, or TERMINATED state can be purged. If the workflow is in any other state, calling purge returns an error.
```http
POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/purge
POST http://localhost:3500/v1.0/workflows/dapr/12345678/purge
```
### Get information about a workflow
@ -376,7 +372,7 @@ POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/purge
To fetch workflow information (outputs and inputs) with an ID `12345678`, run:
```http
GET http://localhost:3500/v1.0-beta1/workflows/dapr/12345678
GET http://localhost:3500/v1.0/workflows/dapr/12345678
```
Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).

View File

@ -6,10 +6,6 @@ weight: 4000
description: "The Dapr Workflow engine architecture"
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. This article describes:
- The architecture of the Dapr Workflow engine

View File

@ -6,10 +6,6 @@ weight: 2000
description: "Learn more about the Dapr Workflow features and concepts"
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
Now that you've learned about the [workflow building block]({{< ref workflow-overview.md >}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages.
{{% alert title="Note" color="primary" %}}
@ -135,7 +131,7 @@ Because workflow retry policies are configured in code, the exact developer expe
| --- | --- |
| **Maximum number of attempts** | The maximum number of times to execute the activity or child workflow. |
| **First retry interval** | The amount of time to wait before the first retry. |
| **Backoff coefficient** | The amount of time to wait before each subsequent retry. |
| **Backoff coefficient** | The coefficient used to determine the rate of increase of back-off. For example a coefficient of 2 doubles the wait of each subsequent retry. |
| **Maximum retry interval** | The maximum amount of time to wait before each subsequent retry. |
| **Retry timeout** | The overall timeout for retries, regardless of any configured max number of attempts. |
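As a rough sketch of how these parameters might map to code, the following uses the Python SDK's `RetryPolicy`; the activity, workflow, and values are placeholders for illustration only, not part of any specific example in this document.
```python
from datetime import timedelta
import dapr.ext.workflow as wf

# Placeholder retry policy illustrating the parameters in the table above.
retry_policy = wf.RetryPolicy(
    max_number_of_attempts=3,                   # Maximum number of attempts
    first_retry_interval=timedelta(seconds=1),  # First retry interval
    backoff_coefficient=2,                      # Backoff coefficient (2 doubles each wait)
    max_retry_interval=timedelta(seconds=10),   # Maximum retry interval
    retry_timeout=timedelta(seconds=100),       # Retry timeout
)

def process_payment(ctx, order_id: str) -> str:
    # Placeholder activity.
    return f"processed {order_id}"

def order_processing_workflow(ctx: wf.DaprWorkflowContext, order_id: str):
    # Apply the retry policy to a single activity call.
    yield ctx.call_activity(process_payment, input=order_id, retry_policy=retry_policy)
```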

View File

@ -6,10 +6,6 @@ weight: 1000
description: "Overview of Dapr Workflow"
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations]({{< ref "#limitations" >}}).
{{% /alert %}}
Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way. Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices. Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings.
The durable, resilient Dapr Workflow capability:
@ -94,7 +90,7 @@ Want to put workflows to the test? Walk through the following quickstart and tut
| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Workflow quickstart]({{< ref workflow-quickstart.md >}}) | Run a workflow application with four workflow activities to see Dapr Workflow in action |
| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `DaprClient` package. |
| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `dapr-ext-workflow` package. |
| [Workflow JavaScript SDK example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | Learn how to create a Dapr Workflow and invoke it using the JavaScript SDK. |
| [Workflow .NET SDK example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | Learn how to create a Dapr Workflow and invoke it using ASP.NET Core web APIs. |
| [Workflow Java SDK example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | Learn how to create a Dapr Workflow and invoke it using the Java `io.dapr.workflows` package. |
@ -106,8 +102,7 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi
## Limitations
- **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request.
- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2.
- **State stores:** Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request.
## Watch the demo

View File

@ -647,7 +647,7 @@ The Dapr workflow HTTP API supports the asynchronous request-reply pattern out-o
The following `curl` commands illustrate how the workflow APIs support this pattern.
```bash
curl -X POST http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}'
curl -X POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}'
```
The previous command will result in the following response JSON:
@ -659,7 +659,7 @@ The previous command will result in the following response JSON:
The HTTP client can then construct the status query URL using the workflow instance ID and poll it repeatedly until it sees the "COMPLETE", "FAILURE", or "TERMINATED" status in the payload.
```bash
curl http://localhost:3500/v1.0-beta1/workflows/dapr/12345678
curl http://localhost:3500/v1.0/workflows/dapr/12345678
```
The following is an example of what an in-progress workflow status might look like.
@ -749,7 +749,7 @@ def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus):
ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!")
next_sleep_interval = 5 # check more frequently when unhealthy
yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(seconds=next_sleep_interval))
yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval))
# restart from the beginning with a new JobStatus input
ctx.continue_as_new(job)
@ -896,7 +896,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
}
if status == "healthy" {
job.IsHealthy = true
sleepInterval = time.Second * 60
sleepInterval = time.Minute * 60
} else {
if job.IsHealthy {
job.IsHealthy = false
@ -905,7 +905,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) {
return "", err
}
}
sleepInterval = time.Second * 5
sleepInterval = time.Minute * 5
}
if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil {
return "", err
@ -1365,7 +1365,7 @@ func raiseEvent() {
if err != nil {
log.Fatalf("failed to initialize the client")
}
err = daprClient.RaiseEventWorkflowBeta1(context.Background(), &client.RaiseEventWorkflowRequest{
err = daprClient.RaiseEventWorkflow(context.Background(), &client.RaiseEventWorkflowRequest{
InstanceID: "instance_id",
WorkflowComponent: "dapr",
EventName: "approval_received",

View File

@ -2,6 +2,6 @@
type: docs
title: "Debugging Dapr applications and the Dapr control plane"
linkTitle: "Debugging"
weight: 50
weight: 60
description: "Guides on how to debug Dapr applications and the Dapr control plane"
---

View File

@ -2,6 +2,6 @@
type: docs
title: "Components"
linkTitle: "Components"
weight: 30
weight: 40
description: "Learn more about developing Dapr's pluggable and middleware components"
---

View File

@ -0,0 +1,8 @@
---
type: docs
title: "Error codes"
linkTitle: "Error codes"
weight: 20
description: "Error codes and messages you may encounter while using Dapr"
---

View File

@ -0,0 +1,152 @@
---
type: docs
title: "Error codes reference guide"
linkTitle: "Reference"
description: "List of gRPC and HTTP error codes in Dapr and their descriptions"
weight: 20
---
The following tables list the error codes returned by the Dapr runtime:
### Actors API
| Error Code | Description |
| -------------------------------- | ------------------------------------------ |
| ERR_ACTOR_INSTANCE_MISSING | Error when an actor instance is missing. |
| ERR_ACTOR_RUNTIME_NOT_FOUND | Error when the actor runtime is not found. |
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. |
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. |
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. |
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. |
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. |
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. |
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. |
| ERR_ACTOR_STATE_GET | Error getting the state for an actor. |
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. |
| ERR_ACTOR_REMINDER_NON_HOSTED | Error setting a reminder for an actor type that is not hosted. |
### Workflows API
| Error Code | Description |
| -------------------------------- | ----------------------------------------------------------- |
| ERR_GET_WORKFLOW | Error getting workflow. |
| ERR_START_WORKFLOW | Error starting the workflow. |
| ERR_PAUSE_WORKFLOW | Error pausing the workflow. |
| ERR_RESUME_WORKFLOW | Error resuming the workflow. |
| ERR_TERMINATE_WORKFLOW | Error terminating the workflow. |
| ERR_PURGE_WORKFLOW | Error purging workflow. |
| ERR_RAISE_EVENT_WORKFLOW | Error raising an event within the workflow. |
| ERR_WORKFLOW_COMPONENT_MISSING | Error when a workflow component is missing a configuration. |
| ERR_WORKFLOW_COMPONENT_NOT_FOUND | Error when a workflow component is not found. |
| ERR_WORKFLOW_EVENT_NAME_MISSING | Error when the event name for a workflow is missing. |
| ERR_WORKFLOW_NAME_MISSING | Error when the workflow name is missing. |
| ERR_INSTANCE_ID_INVALID | Error invalid workflow instance ID provided. |
| ERR_INSTANCE_ID_NOT_FOUND | Error workflow instance ID not found. |
| ERR_INSTANCE_ID_PROVIDED_MISSING | Error when a workflow instance ID is required but not provided. |
| ERR_INSTANCE_ID_TOO_LONG | Error workflow instance ID exceeds allowable length. |
### State Management API
| Error Code | Description |
| ------------------------------------- | ------------------------------------------------------------------------- |
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. |
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. |
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. |
| ERR_STATE_GET | Error getting a state for state store. |
| ERR_STATE_DELETE | Error deleting a state from state store. |
| ERR_STATE_SAVE | Error saving a state in state store. |
| ERR_STATE_TRANSACTION | Error encountered during state transaction. |
| ERR_STATE_BULK_GET | Error performing bulk retrieval of state entries. |
| ERR_STATE_QUERY | Error querying the state store. |
| ERR_STATE_STORE_NOT_CONFIGURED | Error state store is not configured. |
| ERR_STATE_STORE_NOT_SUPPORTED | Error state store is not supported. |
| ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | Error exceeded maximum allowable transactions. |
### Configuration API
| Error Code | Description |
| -------------------------------------- | -------------------------------------------- |
| ERR_CONFIGURATION_GET | Error retrieving configuration. |
| ERR_CONFIGURATION_STORE_NOT_CONFIGURED | Error configuration store is not configured. |
| ERR_CONFIGURATION_STORE_NOT_FOUND | Error configuration store not found. |
| ERR_CONFIGURATION_SUBSCRIBE | Error subscribing to a configuration. |
| ERR_CONFIGURATION_UNSUBSCRIBE | Error unsubscribing from a configuration. |
### Crypto API
| Error Code | Description |
| ----------------------------------- | ------------------------------------------ |
| ERR_CRYPTO | General crypto building block error. |
| ERR_CRYPTO_KEY | Error related to a crypto key. |
| ERR_CRYPTO_PROVIDER_NOT_FOUND | Error specified crypto provider not found. |
| ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | Error no crypto providers configured. |
### Secrets API
| Error Code | Description |
| -------------------------------- | ---------------------------------------------------- |
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. |
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. |
| ERR_SECRET_GET | Error retrieving the specified secret. |
| ERR_PERMISSION_DENIED | Error access denied due to insufficient permissions. |
### Pub/Sub API
| Error Code | Description |
| --------------------------- | -------------------------------------------------------- |
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. |
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. |
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. |
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. |
| ERR_PUBSUB_EMPTY | Error empty Pub/Sub. |
| ERR_PUBSUB_NOT_CONFIGURED | Error Pub/Sub component is not configured. |
| ERR_PUBSUB_REQUEST_METADATA | Error with metadata in Pub/Sub request. |
| ERR_PUBSUB_EVENTS_SER | Error serializing Pub/Sub events. |
| ERR_PUBLISH_OUTBOX | Error publishing message to the outbox. |
| ERR_TOPIC_NAME_EMPTY | Error topic name for Pub/Sub message is empty. |
### Conversation API
| Error Code | Description |
| ------------------------------- | ----------------------------------------------- |
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. |
| ERR_DIRECT_INVOKE | Error in direct invocation. |
| ERR_CONVERSATION_INVALID_PARMS | Error invalid parameters for conversation. |
| ERR_CONVERSATION_INVOKE | Error invoking the conversation. |
| ERR_CONVERSATION_MISSING_INPUTS | Error missing required inputs for conversation. |
| ERR_CONVERSATION_NOT_FOUND | Error conversation not found. |
### Distributed Lock API
| Error Code | Description |
| ----------------------------- | ----------------------------------- |
| ERR_TRY_LOCK | Error attempting to acquire a lock. |
| ERR_UNLOCK | Error attempting to release a lock. |
| ERR_LOCK_STORE_NOT_CONFIGURED | Error lock store is not configured. |
| ERR_LOCK_STORE_NOT_FOUND | Error lock store not found. |
### Healthz
| Error Code | Description |
| ----------------------------- | --------------------------------------------------------------- |
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. |
| ERR_HEALTH_APPID_NOT_MATCH | Error the app-id does not match expected value in health check. |
| ERR_OUTBOUND_HEALTH_NOT_READY | Error outbound connection health is not ready. |
### Common
| Error Code | Description |
| -------------------------- | ------------------------------------------------ |
| ERR_API_UNIMPLEMENTED | Error API is not implemented. |
| ERR_APP_CHANNEL_NIL | Error application channel is nil. |
| ERR_BAD_REQUEST | Error client request is badly formed or invalid. |
| ERR_BODY_READ | Error reading body. |
| ERR_INTERNAL | Internal server error encountered. |
| ERR_MALFORMED_REQUEST | Error with a malformed request. |
| ERR_MALFORMED_REQUEST_DATA | Error request data is malformed. |
| ERR_MALFORMED_RESPONSE | Error response data is malformed. |
## Next steps
- [Handling HTTP error codes]({{< ref http-error-codes.md >}})
- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}})

View File

@ -0,0 +1,62 @@
---
type: docs
title: "Errors overview"
linkTitle: "Overview"
weight: 10
description: "Overview of Dapr errors"
---
An error code is a numeric or alphanumeric code that indicates the nature of an error and, when possible, why it occurred.
Dapr error codes are standardized strings for more than 80 common errors across HTTP and gRPC requests when using the Dapr APIs. These codes are both:
- Returned in the JSON response body of the request.
- When enabled, logged in debug-level logs in the runtime.
  - If you're running in Kubernetes, error codes are logged in the sidecar.
  - If you're running in self-hosted mode, you can enable and view debug logs.
## Error format
Dapr error codes consist of a prefix, a category, and shorthand of the error itself. For example:
| Prefix | Category | Error shorthand |
| ------ | -------- | --------------- |
| ERR_ | PUBSUB_ | NOT_FOUND |
Some of the most common errors returned include:
- ERR_ACTOR_TIMER_CREATE
- ERR_PURGE_WORKFLOW
- ERR_STATE_STORE_NOT_FOUND
- ERR_HEALTH_NOT_READY
> **Note:** [See a full list of error codes in Dapr.]({{< ref error-codes-reference.md >}})
An error returned for a state store not found might look like the following:
```json
{
"error": "Bad Request",
"error_msg": "{\"errorCode\":\"ERR_STATE_STORE_NOT_FOUND\",\"message\":\"state store <name> is not found\",\"details\":[{\"@type\":\"type.googleapis.com/google.rpc.ErrorInfo\",\"domain\":\"dapr.io\",\"metadata\":{\"appID\":\"nodeapp\"},\"reason\":\"DAPR_STATE_NOT_FOUND\"}]}",
"status": 400
}
```
The returned error includes:
- The error code: `ERR_STATE_STORE_NOT_FOUND`
- The error message describing the issue: `state store <name> is not found`
- The app ID in which the error is occurring: `nodeapp`
- The reason for the error: `DAPR_STATE_NOT_FOUND`
## Dapr error code metrics
Metrics help you see exactly when errors are occurring from within the runtime. Error code metrics are collected using the `error_code_total` endpoint. This endpoint is disabled by default. You can [enable it using the `recordErrorCodes` field in your configuration file]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}).
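For example, a minimal Configuration sketch that turns on error code metrics might look like the following (the resource name is a placeholder):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig   # placeholder name
spec:
  metrics:
    enabled: true
    recordErrorCodes: true
```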
## Demo
Watch a demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how to enable error code metrics and handle error codes returned in the runtime.
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/NTnwoDhHIcQ?si=I2uCB_TINGxlu-9v&amp;start=2812" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
## Next step
{{< button text="See a list of all Dapr error codes" page="error-codes-reference" >}}

View File

@ -1,20 +1,18 @@
---
type: docs
title: Dapr errors
linkTitle: "Dapr errors"
weight: 700
description: "Information on Dapr errors and how to handle them"
title: Handling gRPC error codes
linkTitle: "gRPC"
weight: 40
description: "Information on Dapr gRPC errors and how to handle them"
---
## Error handling: Understanding the error model and reporting
Initially, errors followed the [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model). However, to provide more detailed and informative error messages, an enhanced error model has been defined which aligns with the gRPC [Richer error model](https://grpc.io/docs/guides/error/#richer-error-model).
{{% alert title="Note" color="primary" %}}
Not all Dapr errors have been converted to the richer gRPC error model.
{{% /alert %}}
### Standard gRPC Error Model
## Standard gRPC Error Model
The [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model) is an approach to error reporting in gRPC. Each error response includes an error code and an error message. The error codes are standardized and reflect common error conditions.
@ -25,7 +23,7 @@ ERROR:
Message: input key/keyPrefix 'bad||keyname' can't contain '||'
```
### Richer gRPC Error Model
## Richer gRPC Error Model
The [Richer gRPC error model](https://grpc.io/docs/guides/error/#richer-error-model) extends the standard error model by providing additional context and details about the error. This model includes the standard error `code` and `message`, along with a `details` section that can contain various types of information, such as `ErrorInfo`, `ResourceInfo`, and `BadRequest` details.

View File

@ -0,0 +1,21 @@
---
type: docs
title: "Handling HTTP error codes"
linkTitle: "HTTP"
description: "Detailed reference of the Dapr HTTP error codes and how to handle them"
weight: 30
---
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the response body. The JSON contains an error code and a descriptive error message.
```
{
"errorCode": "ERR_STATE_GET",
"message": "Requested state key does not exist in state store."
}
```
## Related
- [Error code reference list]({{< ref error-codes-reference.md >}})
- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}})

View File

@ -8,24 +8,70 @@ aliases:
- /developing-applications/integrations/authenticating/authenticating-aws/
---
All Dapr components using various AWS services (DynamoDB, SQS, S3, etc) use a standardized set of attributes for configuration via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
Dapr components leveraging AWS services (for example, DynamoDB, SQS, S3) utilize standardized configuration attributes via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
Since you can configure the AWS SDK using the default provider chain, all of the following attributes are optional. Test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.
You can configure authentication using the AWS SDK's default provider chain or one of the predefined AWS authentication profiles outlined below. Verify your component configuration by testing and inspecting Dapr runtime logs to confirm proper initialization.
| Attribute | Description |
| --------- | ----------- |
| `region` | Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example), this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec. |
| `endpoint` | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). |
| `accessKey` | AWS Access key id. |
| `secretKey` | AWS Secret access key. Use together with `accessKey` to explicitly specify credentials. |
| `sessionToken` | AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required. |
### Terminology
- **ARN (Amazon Resource Name):** A unique identifier used to specify AWS resources. Format: `arn:partition:service:region:account-id:resource`. Example: `arn:aws:iam::123456789012:role/example-role`.
- **IAM (Identity and Access Management):** AWS's service for managing access to AWS resources securely.
### Authentication Profiles
#### Access Key ID and Secret Access Key
Use static Access Key and Secret Key credentials, either through component metadata fields or via [default AWS configuration](https://docs.aws.amazon.com/sdkref/latest/guide/creds-config-files.html).
{{% alert title="Important" color="warning" %}}
You **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using:
- When running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes)
- If using a node/pod that has already been attached to an IAM policy defining access to AWS resources
Prefer loading credentials via the default AWS configuration in scenarios such as:
- Running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes).
- Using nodes or pods attached to IAM policies that define AWS resource access.
{{% /alert %}}
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `region` | Y | AWS region to connect to. | "us-east-1" |
| `accessKey` | N | AWS Access key id. Will be required in Dapr v1.17. | "AKIAIOSFODNN7EXAMPLE" |
| `secretKey` | N | AWS Secret access key, used alongside `accessKey`. Will be required in Dapr v1.17. | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
| `sessionToken` | N | AWS Session token, used with `accessKey` and `secretKey`. Often unnecessary for IAM user keys. | |
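As an illustration only, these fields appear under a component's `metadata` section. The sketch below assumes a DynamoDB state store (`state.aws.dynamodb`) with placeholder names and values:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore          # placeholder name
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
  - name: table
    value: "mytable"        # placeholder table name
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "AKIAIOSFODNN7EXAMPLE"
  - name: secretKey
    value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```
In production, prefer pulling these values from a secret store rather than hard-coding them in the manifest.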
#### Assume IAM Role
This profile allows Dapr to assume a specific IAM Role. Typically used when the Dapr sidecar runs on EKS or nodes/pods linked to IAM policies. Currently supported by Kafka and PostgreSQL components.
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `region` | Y | AWS region to connect to. | "us-east-1" |
| `assumeRoleArn` | N | ARN of the IAM role with AWS resource access. Will be required in Dapr v1.17. | "arn:aws:iam::123456789:role/mskRole" |
| `sessionName` | N | Session name for role assumption. Default is `"DaprDefaultSession"`. | "MyAppSession" |
#### Credentials from Environment Variables
Authenticate using [environment variables](https://docs.aws.amazon.com/sdkref/latest/guide/environment-variables.html). This is especially useful for Dapr in self-hosted mode where sidecar injectors don't configure environment variables.
There are no metadata fields required for this authentication profile.
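For example, in self-hosted mode you can export the standard AWS SDK environment variables before starting the Dapr sidecar (values are placeholders):
```bash
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_SESSION_TOKEN=""   # optional; usually not needed for IAM user keys
export AWS_REGION="us-east-1"
```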
#### IAM Roles Anywhere
[IAM Roles Anywhere](https://aws.amazon.com/iam/roles-anywhere/) extends IAM role-based authentication to external workloads. It eliminates the need for long-term credentials by using cryptographically signed certificates, anchored in a trust relationship using Dapr PKI. Dapr SPIFFE identity X.509 certificates are used to authenticate to AWS services, and Dapr handles credential rotation at half the session lifespan.
To configure this authentication profile:
1. Create a Trust Anchor in the trusting AWS account using the Dapr certificate bundle as an `External certificate bundle`.
2. Create an IAM role with the resource permissions policy necessary, as well as a trust entity for the Roles Anywhere AWS service. Here, you specify SPIFFE identities allowed.
3. Create an IAM Profile under the Roles Anywhere service, linking the IAM Role.
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `trustAnchorArn` | Y | ARN of the Trust Anchor in the AWS account granting trust to the Dapr Certificate Authority. | arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901 |
| `trustProfileArn` | Y | ARN of the AWS IAM Profile in the trusting AWS account. | arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901 |
| `assumeRoleArn` | Y | ARN of the AWS IAM role to assume in the trusting AWS account. | arn:aws:iam:012345678910:role/exampleIAMRoleName |
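As a sketch only, these fields would also appear under a component's `metadata` section, using the example ARNs from the table above. The component type below is purely illustrative; check the specific component's documentation to confirm it supports this profile.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awscomponent           # placeholder name
spec:
  type: state.aws.dynamodb     # illustrative type only
  version: v1
  metadata:
  - name: trustAnchorArn
    value: "arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901"
  - name: trustProfileArn
    value: "arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901"
  - name: assumeRoleArn
    value: "arn:aws:iam:012345678910:role/exampleIAMRoleName"
```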
### Additional Fields
Some AWS components include additional optional fields:
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `endpoint` | N | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). | |
Furthermore, non-native AWS components such as Kafka and PostgreSQL that support AWS authentication profiles have metadata fields to trigger the AWS authentication logic. Be sure to check specific component documentation.
## Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as:

View File

@ -26,6 +26,4 @@ By studying past resource behavior, recommend application resource optimization
The application graph facilitates collaboration between dev and ops by providing a dynamic overview of your services and infrastructure components.
Try out [Conductor Free](https://www.diagrid.io/pricing), ideal for individual developers building and testing Dapr applications on Kubernetes.
{{< button text="Learn more about Diagrid Conductor" link="https://www.diagrid.io/conductor" >}}

View File

@ -2,6 +2,6 @@
type: docs
title: "Integrations"
linkTitle: "Integrations"
weight: 60
weight: 70
description: "Dapr integrations with other technologies"
---

View File

@ -2,6 +2,6 @@
type: docs
title: "Local development"
linkTitle: "Local development"
weight: 40
weight: 50
description: "Capabilities for developing Dapr applications locally"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "Dapr Software Development Kits (SDKs)"
linkTitle: "SDKs"
weight: 20
weight: 30
description: "Use your favorite languages with Dapr"
no_list: true
---

View File

@ -18,7 +18,7 @@ Currently, you can experience this actors quickstart using the .NET SDK.
As a quick overview of the .NET actors quickstart:
1. Using a `SmartDevice.Service` microservice, you host:
- Two `SmartDectectorActor` smoke alarm objects
- Two `SmokeDetectorActor` smoke alarm objects
- A `ControllerActor` object that commands and controls the smart devices
1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate.
1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps.
@ -119,7 +119,7 @@ If you have Zipkin configured for Dapr locally on your machine, you can view the
When you ran the client app, a few things happened:
1. Two `SmartDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
1. Two `SmokeDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with:
- `ActorProxy.Create<ISmartDevice>(actorId, actorType)`
- `proxySmartDevice.SetDataAsync(data)`
@ -177,7 +177,7 @@ When you ran the client app, a few things happened:
Console.WriteLine($"Device 2 state: {storedDeviceData2}");
```
1. The [`DetectSmokeAsync` method of `SmartDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).
1. The [`DetectSmokeAsync` method of `SmokeDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70).
```csharp
public async Task DetectSmokeAsync()
@ -216,7 +216,7 @@ When you ran the client app, a few things happened:
await proxySmartDevice1.DetectSmokeAsync();
```
1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmartDetectorActor 1` and `2` are called.
1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmokeDetectorActor 1` and `2` are called.
```csharp
storedDeviceData1 = await proxySmartDevice1.GetDataAsync();
@ -234,9 +234,9 @@ When you ran the client app, a few things happened:
For full context of the sample, take a look at the following code:
- [`SmartDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
- [`SmokeDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors
- [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmartDetectorActor`
- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmokeDetectorActor`
- [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor`
{{% /codetab %}}

View File

@ -273,23 +273,20 @@ func deleteJob(ctx context.Context, in *common.InvocationEvent) (out *common.Con
// Handler that handles job events
func handleJob(ctx context.Context, job *common.JobEvent) error {
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
return fmt.Errorf("failed to unmarshal job: %v", err)
}
decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
if err != nil {
return fmt.Errorf("failed to decode job payload: %v", err)
}
var jobPayload JobData
if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
return fmt.Errorf("failed to unmarshal payload: %v", err)
}
var jobData common.Job
if err := json.Unmarshal(job.Data, &jobData); err != nil {
return fmt.Errorf("failed to unmarshal job: %v", err)
}
fmt.Println("Starting droid:", jobPayload.Droid)
fmt.Println("Executing maintenance job:", jobPayload.Task)
var jobPayload JobData
if err := json.Unmarshal(job.Data, &jobPayload); err != nil {
return fmt.Errorf("failed to unmarshal payload: %v", err)
}
return nil
fmt.Println("Starting droid:", jobPayload.Droid)
fmt.Println("Executing maintenance job:", jobPayload.Task)
return nil
}
```

View File

@ -7,7 +7,7 @@ description: Get started with the Dapr Workflow building block
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
Redis is currently used as the state store component for Workflows in the Quickstarts. However, Redis does not support transaction rollbacks and should not be used in production as an actor state store.
{{% /alert %}}
Let's take a look at the Dapr [Workflow building block]({{< ref workflow-overview.md >}}). In this Quickstart, you'll create a simple console application to demonstrate Dapr's workflow programming model and the workflow management APIs.
@ -66,12 +66,18 @@ Install the Dapr Python SDK package:
pip3 install -r requirements.txt
```
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
Return to the `python/sdk` directory:
```bash
cd ..
```
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `python/sdk` directory, run the following command:
```bash
cd workflows/python/sdk
dapr run -f .
```
@ -308,12 +314,11 @@ Install the dependencies:
cd ./javascript/sdk
npm install
npm run build
cd ..
```
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `javascript/sdk` directory, run the following command:
```bash
dapr run -f .
@ -515,15 +520,28 @@ Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quic
git clone https://github.com/dapr/quickstarts.git
```
In a new terminal window, navigate to the `sdk` directory:
In a new terminal window, navigate to the `order-processor` directory:
```bash
cd workflows/csharp/sdk
cd workflows/csharp/sdk/order-processor
```
Install the dependencies:
```bash
dotnet restore
dotnet build
```
Return to the `csharp/sdk` directory:
```bash
cd ..
```
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `csharp/sdk` directory, run the following command:
```bash
dapr run -f .
@ -628,25 +646,24 @@ OrderPayload orderInfo = new OrderPayload(itemToPurchase, 15000, ammountToPurcha
// Start the workflow
Console.WriteLine("Starting workflow {0} purchasing {1} {2}", orderId, ammountToPurchase, itemToPurchase);
await daprClient.StartWorkflowAsync(
workflowComponent: DaprWorkflowComponent,
workflowName: nameof(OrderProcessingWorkflow),
await daprWorkflowClient.ScheduleNewWorkflowAsync(
name: nameof(OrderProcessingWorkflow),
input: orderInfo,
instanceId: orderId);
// Wait for the workflow to start and confirm the input
GetWorkflowResponse state = await daprClient.WaitForWorkflowStartAsync(
instanceId: orderId,
workflowComponent: DaprWorkflowComponent);
WorkflowState state = await daprWorkflowClient.WaitForWorkflowStartAsync(
instanceId: orderId);
Console.WriteLine("Your workflow has started. Here is the status of the workflow: {0}", state.RuntimeStatus);
Console.WriteLine($"{nameof(OrderProcessingWorkflow)} (ID = {orderId}) started successfully with {state.ReadInputAs<OrderPayload>()}");
// Wait for the workflow to complete
using var ctx = new CancellationTokenSource(TimeSpan.FromSeconds(5));
state = await daprClient.WaitForWorkflowCompletionAsync(
instanceId: orderId,
workflowComponent: DaprWorkflowComponent);
cancellation: ctx.Token);
Console.WriteLine("Workflow Status: {0}", state.RuntimeStatus);
Console.WriteLine("Workflow Status: {0}", state.ReadCustomStatusAs<string>());
```
#### `order-processor/Workflows/OrderProcessingWorkflow.cs`
@ -697,7 +714,7 @@ class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
nameof(UpdateInventoryActivity),
new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost));
}
catch (TaskFailedException)
catch (WorkflowTaskFailedException)
{
// Let them know their payment was processed
await context.CallActivityAsync(
@ -779,9 +796,15 @@ Install the dependencies:
mvn clean install
```
Return to the `java/sdk` directory:
```bash
cd ..
```
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `java/sdk` directory, run the following command:
```bash
cd workflows/java/sdk
@ -1114,7 +1137,7 @@ cd workflows/go/sdk
### Step 3: Run the order processor app
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}):
In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `go/sdk` directory, run the following command:
```bash
dapr run -f .
@ -1333,4 +1356,4 @@ Join the discussion in our [discord channel](https://discord.com/channels/778680
- Walk through a more in-depth [.NET SDK example workflow](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- Learn more about [Workflow as a Dapr building block]({{< ref workflow-overview >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -6,17 +6,17 @@ weight: 4500
description: "Choose which Dapr sidecar APIs are available to the app"
---
In certain scenarios, such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.
In scenarios such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application.
Dapr allows developers to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{<ref "configuration-overview.md">}}).
Dapr allows you to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{< ref "configuration-schema.md" >}}).
### Default behavior
If no API allowlist or denylist is specified, the default behavior is to allow access to all Dapr APIs.
- If only a denylist is defined, all Dapr APIs are allowed except those defined in the denylist
- If only an allowlist is defined, only the Dapr APIs listed in the allowlist are allowed
- If both an allowlist and a denylist are defined, the allowed APIs are those defined in the allowlist, unless they are also included in the denylist. In other words, the denylist overrides the allowlist for APIs that are defined in both.
- If you've only defined a denylist, all Dapr APIs are allowed except those defined in the denylist
- If you've only defined an allowlist, only the Dapr APIs listed in the allowlist are allowed
- If you've defined both an allowlist and a denylist, the denylist overrides the allowlist for APIs that are defined in both.
- If neither is defined, all APIs are allowed.
For example, the following configuration enables all APIs for both HTTP and gRPC:
@ -119,14 +119,18 @@ See this list of values corresponding to the different Dapr APIs:
| [Service Invocation]({{< ref service_invocation_api.md >}}) | `invoke` (`v1.0`) | `invoke` (`v1`) |
| [State]({{< ref state_api.md>}})| `state` (`v1.0` and `v1.0-alpha1`) | `state` (`v1` and `v1alpha1`) |
| [Pub/Sub]({{< ref pubsub.md >}}) | `publish` (`v1.0` and `v1.0-alpha1`) | `publish` (`v1` and `v1alpha1`) |
| [Output Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) |
| Subscribe | n/a | `subscribe` (`v1alpha1`) |
| [(Output) Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) |
| [Secrets]({{< ref secrets_api.md >}})| `secrets` (`v1.0`) | `secrets` (`v1`) |
| [Actors]({{< ref actors_api.md >}}) | `actors` (`v1.0`) |`actors` (`v1`) |
| [Metadata]({{< ref metadata_api.md >}}) | `metadata` (`v1.0`) |`metadata` (`v1`) |
| [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) |
| [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)<br/>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br/>`unlock` (`v1alpha1`) |
| Cryptography | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0-alpha1`) |`workflows` (`v1alpha1`) |
| [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0`) |`workflows` (`v1`) |
| [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a |
| Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) |
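As a sketch of how these values are used, the following Configuration denies only the HTTP Workflow API while leaving everything else allowed, assuming the `spec.api.denied` schema used by this guide (the resource name is a placeholder):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig    # placeholder name
spec:
  api:
    denied:
    - name: workflows  # value from the table above
      version: v1.0
      protocol: http
```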
## Next steps
{{< button text="Configure Dapr to use gRPC" page="grpc" >}}

View File

@ -1,30 +1,44 @@
---
type: docs
title: "Overview of Dapr configuration options"
title: "Dapr configuration"
linkTitle: "Overview"
weight: 100
description: "Information on Dapr configuration and how to set options for your application"
description: "Overview of Dapr configuration"
---
## Sidecar configuration
Dapr configurations are settings and policies that enable you to change either the behavior of individual Dapr applications or the global behavior of the Dapr control plane system services.
### Setup sidecar configuration
[For more information, read the configuration concept.]({{< ref configuration-concept.md >}})
#### Self-hosted sidecar
## Application configuration
In self hosted mode the Dapr configuration is a configuration file, for example `config.yaml`. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration eg: `$HOME/.dapr/config.yaml` in Linux/MacOS and `%USERPROFILE%\.dapr\config.yaml` in Windows.
### Set up application configuration
A Dapr sidecar can also apply a configuration by using a `--config` flag to the file path with `dapr run` CLI command.
You can set up application configuration either in self-hosted or Kubernetes mode.
#### Kubernetes sidecar
{{< tabs "Self-hosted" Kubernetes >}}
In Kubernetes mode the Dapr configuration is a Configuration resource, that is applied to the cluster. For example:
<!-- Self hosted -->
{{% codetab %}}
In self-hosted mode, the Dapr configuration is a [configuration file]({{< ref configuration-schema.md >}}) - for example, `config.yaml`. By default, the Dapr sidecar looks in the default Dapr folder for the runtime configuration:
- Linux/macOS: `$HOME/.dapr/config.yaml`
- Windows: `%USERPROFILE%\.dapr\config.yaml`
An application can also apply a configuration by passing the file path to the `--config` flag of the `dapr run` CLI command.
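For example (the app ID and application command are placeholders):
```bash
dapr run --app-id myapp --config ./myconfig.yaml -- python3 app.py
```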
{{% /codetab %}}
<!-- Kubernetes -->
{{% codetab %}}
In Kubernetes mode, the Dapr configuration is a Configuration resource that is applied to the cluster. For example:
```bash
kubectl apply -f myappconfig.yaml
```
You can use the Dapr CLI to list the Configuration resources
You can use the Dapr CLI to list the Configuration resources for applications.
```bash
dapr configurations -k
@ -40,11 +54,15 @@ A Dapr sidecar can apply a specific configuration by using a `dapr.io/config` an
dapr.io/config: "myappconfig"
```
Note: There are more [Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service.
> **Note:** [See all Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service.
### Sidecar configuration settings
{{% /codetab %}}
The following configuration settings can be applied to Dapr application sidecars:
{{< /tabs >}}
### Application configuration settings
The following menu includes all of the configuration settings you can set on the sidecar.
- [Tracing](#tracing)
- [Metrics](#metrics)
@ -68,7 +86,7 @@ The `tracing` section under the `Configuration` spec contains the following prop
tracing:
samplingRate: "1"
otel:
endpointAddress: "https://..."
endpointAddress: "otelcollector.observability.svc.cluster.local:4317"
zipkin:
endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
@ -79,15 +97,22 @@ The following table lists the properties for tracing:
|--------------|--------|-------------|
| `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled.
| `stdout` | bool | When set to true, writes more verbose information to the traces
| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to
| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to. This may or may not require the https:// or http:// depending on your OTEL provider.
| `otel.isSecure` | bool | Is the connection to the endpoint address encrypted
| `otel.protocol` | string | Set to `http` or `grpc` protocol
| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to
| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to. This should include the protocol (http:// or https://) on the endpoint.
`samplingRate` is used to enable or disable the tracing. To disable the sampling rate ,
set `samplingRate : "0"` in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span should be sampled or not based on value. `samplingRate : "1"` samples all traces. By default, the sampling rate is (0.0001) or 1 in 10,000 traces.
##### `samplingRate`
The OpenTelemetry (otel) endpoint can also be configured via an environment variables. The presence of the OTEL_EXPORTER_OTLP_ENDPOINT environment variable
`samplingRate` is used to enable or disable the tracing. The valid range of `samplingRate` is between `0` and `1` inclusive. The sampling rate determines whether a trace span should be sampled or not based on value.
`samplingRate : "1"` samples all traces. By default, the sampling rate is (0.0001), or 1 in 10,000 traces.
To disable the sampling rate, set `samplingRate : "0"` in the configuration.
##### `otel`
The OpenTelemetry (`otel`) endpoint can also be configured via an environment variable. The presence of the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable
turns on tracing for the sidecar.
| Environment Variable | Description |
@ -100,9 +125,9 @@ See [Observability distributed tracing]({{< ref "tracing-overview.md" >}}) for m
#### Metrics
The metrics section can be used to enable or disable metrics for an application.
The `metrics` section under the `Configuration` spec can be used to enable or disable metrics for an application.
The `metrics` section under the `Configuration` spec contains the following properties:
The `metrics` section contains the following properties:
```yml
metrics:
@ -120,9 +145,12 @@ metrics:
- /payments/{paymentID}/refund
- /payments/{paymentID}/details
excludeVerbs: false
recordErrorCodes: true
```
In the examples above this path filter `/orders/{orderID}/items/{itemID}` would return a single metric count matching all the orderIDs and all the itemIDs rather than multiple metrics for each itemID. For more information see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}})
In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return _a single metric count_ matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}).
The above example also enables [recording error code metrics]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}), which is disabled by default.
The following table lists the properties for metrics:
@ -135,7 +163,7 @@ The following table lists the properties for metrics:
| `http.pathMatching` | array | Array of paths for path matching, allowing users to define matching paths to manage cardinality. |
| `http.excludeVerbs` | boolean | When set to true (default is false), the Dapr HTTP server ignores each request HTTP verb when building the method metric label. |
To further help managing cardinality, path matching allows specified paths matched according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption.
To further help manage cardinality, path matching allows you to match specified paths according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption.
Using rules, you can set regular expressions for every metric exposed by the Dapr sidecar. For example:
@ -154,9 +182,9 @@ See [metrics documentation]({{< ref "metrics-overview.md" >}}) for more informat
#### Logging
The logging section can be used to configure how logging works in the Dapr Runtime.
The `logging` section under the `Configuration` spec is used to configure how logging works in the Dapr Runtime.
The `logging` section under the `Configuration` spec contains the following properties:
The `logging` section contains the following properties:
```yml
logging:
@ -178,8 +206,7 @@ See [logging documentation]({{< ref "logs.md" >}}) for more information.
#### Middleware
Middleware configuration set named HTTP pipeline middleware handlers
The `httpPipeline` and the `appHttpPipeline` section under the `Configuration` spec contains the following properties:
Middleware configuration sets named HTTP pipeline middleware handlers. The `httpPipeline` and the `appHttpPipeline` section under the `Configuration` spec contain the following properties:
```yml
httpPipeline: # for incoming http calls
@ -203,13 +230,13 @@ The following table lists the properties for HTTP handlers:
| `name` | string | Name of the middleware component
| `type` | string | Type of middleware component
See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information
See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information.
#### Name resolution component
You can set name resolution component to use within the configuration YAML. For example, to set the `spec.nameResolution.component` property to `"sqlite"`, pass configuration options in the `spec.nameResolution.configuration` dictionary as shown below.
You can set name resolution components to use within the configuration file. For example, to set the `spec.nameResolution.component` property to `"sqlite"`, pass configuration options in the `spec.nameResolution.configuration` dictionary as shown below.
This is the basic example of a configuration resource:
This is a basic example of a configuration resource:
```yaml
apiVersion: dapr.io/v1alpha1
@ -226,7 +253,7 @@ spec:
For more information, see:
- [The name resolution component documentation]({{< ref supported-name-resolution >}}) for more examples.
- - [The Configuration YAML documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component.
- [The Configuration file documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component.
#### Scope secret store access
@ -234,11 +261,11 @@ See the [Scoping secrets]({{< ref "secret-scope.md" >}}) guide for information a
#### Access Control allow lists for building block APIs
See the [selectively enable Dapr APIs on the Dapr sidecar]({{< ref "api-allowlist.md" >}}) guide for information and examples on how to set ACLs on the building block APIs lists.
See the guide for [selectively enabling Dapr APIs on the Dapr sidecar]({{< ref "api-allowlist.md" >}}) for information and examples on how to set access control allow lists (ACLs) on the building block APIs lists.
#### Access Control allow lists for service invocation API
See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists with ACLs which using service invocation API.
See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists with ACLs which use the service invocation API.
#### Disallow usage of certain component types
@ -258,13 +285,23 @@ spec:
- secretstores.local.file
```
You can optionally specify a version to disallow by adding it at the end of the component name. For example, `state.in-memory/v1` disables initializing components of type `state.in-memory` and version `v1`, but does not disable a (hypothetical) `v2` version of the component.
Optionally, you can specify a version to disallow by adding it at the end of the component name. For example, `state.in-memory/v1` disables initializing components of type `state.in-memory` and version `v1`, but does not disable a (hypothetical) `v2` version of the component.
> Note: One special note applies to the component type `secretstores.kubernetes`. When you add that component to the denylist, Dapr forbids the creation of _additional_ components of type `secretstores.kubernetes`. However, it does not disable the built-in Kubernetes secret store, which is created by Dapr automatically and is used to store secrets specified in Components specs. If you want to disable the built-in Kubernetes secret store, you need to use the `dapr.io/disable-builtin-k8s-secret-store` [annotation]({{< ref arguments-annotations-overview.md >}}).
{{% alert title="Note" color="primary" %}}
When you add the component type `secretstores.kubernetes` to the denylist, Dapr forbids the creation of _additional_ components of type `secretstores.kubernetes`.
However, it does not disable the built-in Kubernetes secret store, which is:
- Created by Dapr automatically
- Used to store secrets specified in Components specs
If you want to disable the built-in Kubernetes secret store, you need to use the `dapr.io/disable-builtin-k8s-secret-store` [annotation]({{< ref arguments-annotations-overview.md >}}).
{{% /alert %}}
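For illustration, here is a minimal Configuration sketch of the versioned deny entry described above (assuming the `spec.components.deny` field; the configuration name and entries are examples only):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myconfig            # example name
spec:
  components:
    deny:
      - state.in-memory/v1  # blocks v1 only; a hypothetical v2 of the component would still load
```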
#### Turning on preview features
See the [preview features]({{< ref "preview-features.md" >}}) guide for information and examples on how to opt-in to preview features for a release. Preview feature enable new capabilities to be added that still need more time until they become generally available (GA) in the runtime.
See the [preview features]({{< ref "preview-features.md" >}}) guide for information and examples on how to opt-in to preview features for a release.
Enabling preview features unlocks new capabilities for dev/test scenarios, since these features still need more time before becoming generally available (GA) in the runtime.
### Example sidecar configuration
@ -316,7 +353,9 @@ spec:
## Control plane configuration
There is a single configuration file called `daprsystem` installed with the Dapr control plane system services that applies global settings. This is only set up when Dapr is deployed to Kubernetes.
A single configuration file called `daprsystem` is installed with the Dapr control plane system services to apply global settings.
> **This is only set up when Dapr is deployed to Kubernetes.**
### Control plane configuration settings
@ -353,3 +392,7 @@ spec:
allowedClockSkew: 15m
workloadCertTTL: 24h
```
## Next steps
{{< button text="Learn about concurrency and rate limits" page="control-concurrency" >}}

View File

@ -3,30 +3,57 @@ type: docs
title: "How-To: Control concurrency and rate limit applications"
linkTitle: "Concurrency & rate limits"
weight: 2000
description: "Control how many requests and events will invoke your application simultaneously"
description: "Learn how to control how many requests and events can invoke your application simultaneously"
---
A common scenario in distributed computing is to only allow for a given number of requests to execute concurrently.
Using Dapr, you can control how many requests and events will invoke your application simultaneously.
Typically, in distributed computing, you may only want to allow for a given number of requests to execute concurrently. Using Dapr's `app-max-concurrency`, you can control how many requests and events can invoke your application simultaneously.
*Note that this rate limiting is guaranteed for every event that's coming from Dapr, meaning Pub/Sub events, direct invocation from other services, bindings events etc. Dapr can't enforce the concurrency policy on requests that are coming to your app externally.*
Default `app-max-concurrency` is set to `-1`, meaning no concurrency limit is enforced.
*Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** middleware. However, there is an important difference between the two approaches. The rate limit middleware is time bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point of time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}). *
## Different approaches
Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting ".
While this guide focuses on `app-max-concurrency`, you can also limit request rate per second using the **`middleware.http.ratelimit`** middleware. However, it's important to understand the difference between the two approaches:
- `middleware.http.ratelimit`: Time bound and limits the number of requests per second
- `app-max-concurrency`: Specifies the max number of concurrent requests (and events) at any point of time.
See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}) for more information about that approach.
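As a point of comparison, a minimal sketch of a rate-limit middleware component might look like the following (the component name and limit value are examples; check the middleware spec for the full set of options). Unlike `app-max-concurrency`, this caps requests per second rather than in-flight requests:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ratelimit                  # example name
spec:
  type: middleware.http.ratelimit
  version: v1
  metadata:
  - name: maxRequestsPerSecond     # time-bound limit, not a concurrency limit
    value: 10
```

To take effect, the middleware component also has to be referenced from an `httpPipeline` section in the application's Configuration.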
## Demo
Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="764" height="430" src="https://www.youtube-nocookie.com/embed/yRI5g6o_jp8?t=1710" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Setting app-max-concurrency
## Configure `app-max-concurrency`
Without using Dapr, a developer would need to create some sort of a semaphore in the application and take care of acquiring and releasing it.
Using Dapr, there are no code changes needed to an app.
Without using Dapr, you would need to create some sort of a semaphore in the application and take care of acquiring and releasing it.
### Setting app-max-concurrency in Kubernetes
Using Dapr, you don't need to make any code changes to your application.
To set app-max-concurrency in Kubernetes, add the following annotation to your pod:
Select how you'd like to configure `app-max-concurrency`.
{{< tabs "CLI" Kubernetes >}}
<!-- CLI -->
{{% codetab %}}
To set concurrency limits with the Dapr CLI for running on your local dev machine, add the `app-max-concurrency` flag:
```bash
dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py
```
The above example effectively turns your app into a sequential processing service.
{{% /codetab %}}
<!-- Kubernetes -->
{{% codetab %}}
To configure concurrency limits in Kubernetes, add the following annotation to your pod:
```yaml
apiVersion: apps/v1
@ -50,15 +77,22 @@ spec:
dapr.io/app-id: "nodesubscriber"
dapr.io/app-port: "3000"
dapr.io/app-max-concurrency: "1"
...
#...
```
### Setting app-max-concurrency using the Dapr CLI
{{% /codetab %}}
To set app-max-concurrency with the Dapr CLI for running on your local dev machine, add the `app-max-concurrency` flag:
{{< /tabs >}}
```bash
dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py
```
## Limitations
The above examples will effectively turn your app into a single concurrent service.
### Controlling concurrency on external requests
Rate limiting is guaranteed for every event coming _from_ Dapr, including pub/sub events, direct invocation from other services, bindings events, etc. However, Dapr can't enforce the concurrency policy on requests that are coming _to_ your app externally.
## Related links
[Arguments and annotations]({{< ref arguments-annotations-overview.md >}})
## Next steps
{{< button text="Limit secret store access" page="secret-scope" >}}

View File

@ -0,0 +1,122 @@
---
type: docs
title: "How-To: Configure Environment Variables from Secrets for Dapr sidecar"
linkTitle: "Environment Variables from Secrets"
weight: 7500
description: "Inject Environment Variables from Kubernetes Secrets into Dapr sidecar"
---
In special cases, the Dapr sidecar needs an environment variable injected into it. This may be required by a component, a third-party library, or a module that uses environment variables to configure that component or customize its behavior. This can be useful for both production and non-production environments.
## Overview
In Dapr 1.15, the new `dapr.io/env-from-secret` annotation was introduced, [similar to `dapr.io/env`]({{< ref arguments-annotations-overview >}}).
With this annotation, you can inject an environment variable into the Dapr sidecar, with a value from a secret.
### Annotation format
The values of this annotation are formatted like so:
- Single key secret: `<ENV_VAR_NAME>=<SECRET_NAME>`
- Multi key/value secret: `<ENV_VAR_NAME>=<SECRET_NAME>:<SECRET_KEY>`
`<ENV_VAR_NAME>` is required to follow the `C_IDENTIFIER` format and captured by the `[A-Za-z_][A-Za-z0-9_]*` regex:
- Must start with a letter or underscore
- The rest of the identifier contains letters, digits, or underscores
The `name` field is required due to the restriction of the `secretKeyRef`, so both `name` and `key` must be set. [Learn more from the "env.valueFrom.secretKeyRef.name" section in this Kubernetes documentation.](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables)
In this case, Dapr sets both to the same value.
## Configuring single key secret environment variable
In the following example, the `dapr.io/env-from-secret` annotation is added to the Deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeapp
spec:
template:
metadata:
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret"
spec:
containers:
- name: node
image: dapriosamples/hello-k8s-node:latest
ports:
- containerPort: 3000
imagePullPolicy: Always
```
The `dapr.io/env-from-secret` annotation with a value of `"AUTH_TOKEN=auth-headers-secret"` is injected as:
```yaml
env:
- name: AUTH_TOKEN
valueFrom:
secretKeyRef:
name: auth-headers-secret
key: auth-headers-secret
```
This requires the secret to have both `name` and `key` fields with the same value, "auth-headers-secret".
**Example secret**
> **Note:** The following example is for demo purposes only. It's not recommended to store secrets in plain text.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: auth-headers-secret
type: Opaque
stringData:
auth-headers-secret: "AUTH=mykey"
```
## Configuring multi-key secret environment variable
In the following example, the `dapr.io/env-from-secret` annotation is added to the Deployment.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeapp
spec:
template:
metadata:
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret:auth-header-value"
spec:
containers:
- name: node
image: dapriosamples/hello-k8s-node:latest
ports:
- containerPort: 3000
imagePullPolicy: Always
```
The `dapr.io/env-from-secret` annotation with a value of `"AUTH_TOKEN=auth-headers-secret:auth-header-value"` is injected as:
```yaml
env:
- name: AUTH_TOKEN
valueFrom:
secretKeyRef:
name: auth-headers-secret
key: auth-header-value
```
**Example secret**
> **Note:** The following example is for demo purposes only. It's not recommended to store secrets in plain text.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: auth-headers-secret
type: Opaque
stringData:
auth-header-value: "AUTH=mykey"
```

View File

@ -3,20 +3,21 @@ type: docs
title: "How-To: Configure Dapr to use gRPC"
linkTitle: "Use gRPC interface"
weight: 5000
description: "How to configure Dapr to use gRPC for low-latency, high performance scenarios"
description: "Configure Dapr to use gRPC for low-latency, high performance scenarios"
---
Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients.
You can find a list of auto-generated clients [here]({{< ref sdks >}}).
Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. [You can see the full list of auto-generated clients (Dapr SDKs)]({{< ref sdks >}}).
The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC.
In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implements the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto)
Not only can you call Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implement the [Dapr `appcallback` service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto)
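As a rough sketch of the application side, using the Dapr Python SDK's gRPC extension (the `dapr-ext-grpc` package; the method name and port below are examples), a minimal service invocation handler could look like:

```python
from dapr.ext.grpc import App, InvokeMethodRequest, InvokeMethodResponse

app = App()

# Called when another app invokes "my-method" on this app through Dapr
@app.method(name='my-method')
def my_method(request: InvokeMethodRequest) -> InvokeMethodResponse:
    print(request.metadata, flush=True)
    print(request.text(), flush=True)
    return InvokeMethodResponse(b'INVOKE_RECEIVED', 'text/plain; charset=UTF-8')

# Listen on the port that Dapr is told about via --app-port / dapr.io/app-port
app.run(5005)
```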
## Configuring Dapr to communicate with an app via gRPC
### Self hosted
{{< tabs "Self-hosted" Kubernetes >}}
<!-- Self hosted -->
{{% codetab %}}
When running in self hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app:
@ -25,8 +26,10 @@ dapr run --app-protocol grpc --app-port 5005 node app.js
```
This tells Dapr to communicate with your app via gRPC over port `5005`.
{{% /codetab %}}
### Kubernetes
<!-- Kubernetes -->
{{% codetab %}}
On Kubernetes, set the following annotations in your deployment YAML:
@ -52,5 +55,13 @@ spec:
dapr.io/app-id: "myapp"
dapr.io/app-protocol: "grpc"
dapr.io/app-port: "5005"
...
```
#...
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
{{< button text="Handle large HTTP header sizes" page="increase-read-buffer-size" >}}

View File

@ -1,20 +1,23 @@
---
type: docs
title: "How-To: Handle large http header size"
title: "How-To: Handle large HTTP header size"
linkTitle: "HTTP header size"
weight: 6000
description: "Configure a larger http read buffer size"
description: "Configure a larger HTTP read buffer size"
---
Dapr has a default limit of 4KB for the http header read buffer size. When sending http headers that are bigger than the default 4KB, you can increase this value. Otherwise, you may encounter a `Too big request header` service invocation error. You can change the http header size by using the `dapr.io/http-read-buffer-size` annotation or `--dapr-http-read-buffer-size` flag when using the CLI.
Dapr has a default limit of 4KB for the HTTP header read buffer size. If you're sending HTTP headers larger than the default 4KB, you may encounter a `Too big request header` service invocation error.
You can increase the HTTP header size by using:
- The `dapr.io/http-read-buffer-size` annotation, or
- The `--dapr-http-read-buffer-size` flag when using the CLI.
{{< tabs Self-hosted Kubernetes >}}
<!--Self-hosted-->
{{% codetab %}}
When running in self hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use non-default http header size:
When running in self-hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use a non-default HTTP header size:
```bash
dapr run --dapr-http-read-buffer-size 16 node app.js
@ -23,10 +26,11 @@ This tells Dapr to set maximum read buffer size to `16` KB.
{{% /codetab %}}
<!--Kubernetes-->
{{% codetab %}}
On Kubernetes, set the following annotations in your deployment YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
@ -49,7 +53,7 @@ spec:
dapr.io/app-id: "myapp"
dapr.io/app-port: "8000"
dapr.io/http-read-buffer-size: "16"
...
#...
```
{{% /codetab %}}
@ -57,4 +61,8 @@ spec:
{{< /tabs >}}
## Related links
- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})
[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})
## Next steps
{{< button text="Handle large HTTP body requests" page="increase-request-size" >}}

View File

@ -6,15 +6,16 @@ weight: 6000
description: "Configure http requests that are bigger than 4 MB"
---
By default Dapr has a limit for the request body size which is set to 4 MB, however you can change this by defining `dapr.io/http-max-request-size` annotation or `--dapr-http-max-request-size` flag.
By default, Dapr has a limit for the request body size, set to 4MB. You can change this by defining:
- The `dapr.io/http-max-request-size` annotation, or
- The `--dapr-http-max-request-size` flag.
{{< tabs Self-hosted Kubernetes >}}
<!--self hosted-->
{{% codetab %}}
When running in self hosted mode, use the `--dapr-http-max-request-size` flag to configure Dapr to use non-default request body size:
When running in self-hosted mode, use the `--dapr-http-max-request-size` flag to configure Dapr to use a non-default request body size:
```bash
dapr run --dapr-http-max-request-size 16 node app.js
@ -23,10 +24,11 @@ This tells Dapr to set maximum request body size to `16` MB.
{{% /codetab %}}
<!--kubernetes-->
{{% codetab %}}
On Kubernetes, set the following annotations in your deployment YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
@ -49,7 +51,7 @@ spec:
dapr.io/app-id: "myapp"
dapr.io/app-port: "8000"
dapr.io/http-max-request-size: "16"
...
#...
```
{{% /codetab %}}
@ -57,4 +59,9 @@ spec:
{{< /tabs >}}
## Related links
- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})
[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})
## Next steps
{{< button text="Install sidecar certificates" page="install-certificates" >}}

View File

@ -6,20 +6,26 @@ weight: 6500
description: "Configure the Dapr sidecar container to trust certificates"
---
The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted. For example, using an HTTP binding or configuring an outbound proxy for the sidecar. Both certificate authority (CA) certificates and leaf certificates are supported.
The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted, such as:
- Using an HTTP binding
- Configuring an outbound proxy for the sidecar
Both certificate authority (CA) certificates and leaf certificates are supported.
{{< tabs Self-hosted Kubernetes >}}
<!--self-hosted-->
{{% codetab %}}
When the sidecar is not running inside a container, certificates must be directly installed on the host operating system.
You can make the following configurations when the sidecar is running as a container.
When the sidecar is running as a container:
1. Certificates must be available to the sidecar container. This can be configured using volume mounts.
1. The environment variable `SSL_CERT_DIR` must be set in the sidecar container, pointing to the directory containing the certificates.
1. For Windows containers, the container needs to run with administrator privileges to be able to install the certificates.
1. Configure certificates to be available to the sidecar container using volume mounts.
1. Point the environment variable `SSL_CERT_DIR` in the sidecar container to the directory containing the certificates.
> **Note:** For Windows containers, make sure the container is running with administrator privileges so it can install the certificates.
The following example uses Docker Compose to install certificates (present locally in the `./certificates` directory) in the sidecar container:
Below is an example that uses Docker Compose to install certificates (present locally in the `./certificates` directory) in the sidecar container:
```yaml
version: '3'
services:
@ -39,16 +45,22 @@ services:
# user: ContainerAdministrator
```
> **Note:** When the sidecar is not running inside a container, certificates must be directly installed on the host operating system.
{{% /codetab %}}
<!--kubernetes-->
{{% codetab %}}
On Kubernetes:
1. Certificates must be available to the sidecar container using a volume mount.
1. The environment variable `SSL_CERT_DIR` must be set in the sidecar container, pointing to the directory containing the certificates.
The YAML below is an example of a deployment that attaches a pod volume to the sidecar, and sets `SSL_CERT_DIR` to install the certificates.
1. Configure certificates to be available to the sidecar container using a volume mount.
1. Point the environment variable `SSL_CERT_DIR` in the sidecar container to the directory containing the certificates.
The following example YAML shows a deployment that:
- Attaches a pod volume to the sidecar
- Sets `SSL_CERT_DIR` to install the certificates
```yaml
apiVersion: apps/v1
kind: Deployment
@ -77,23 +89,21 @@ spec:
- name: certificates-vol
hostPath:
path: /certificates
...
#...
```
**Note**: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers.
> **Note**: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers.
{{% /codetab %}}
{{< /tabs >}}
<hr/>
After following these steps, all the certificates in the directory pointed by `SSL_CERT_DIR` are installed.
All the certificates in the directory pointed to by `SSL_CERT_DIR` are installed.
- **On Linux containers:** All the certificate extensions supported by OpenSSL are supported. [Learn more.](https://www.openssl.org/docs/man1.1.1/man1/openssl-rehash.html)
- **On Windows container:** All the certificate extensions supported by `certoc.exe` are supported. [See certoc.exe present in Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore).
1. On Linux containers, all the certificate extensions supported by OpenSSL are supported. For more information, see https://www.openssl.org/docs/man1.1.1/man1/openssl-rehash.html
1. On Windows container, all the certificate extensions supported by certoc.exe are supported. For more information, see certoc.exe present in [Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore)
## Example
## Demo
Watch the demo on using installing SSL certificates and securely using the HTTP binding in community call 64:
@ -106,3 +116,7 @@ Watch the demo on using installing SSL certificates and securely using the HTTP
- [HTTP binding spec]({{< ref http.md >}})
- [(Kubernetes) How-to: Mount Pod volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}})
- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})
## Next steps
{{< button text="Enable preview features" page="preview-features" >}}

View File

@ -3,71 +3,87 @@ type: docs
title: "How-To: Apply access control list configuration for service invocation"
linkTitle: "Service Invocation access control"
weight: 4000
description: "Restrict what operations *calling* applications can perform, via service invocation, on the *called* application"
description: "Restrict what operations calling applications can perform"
---
Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application. To limit access to a called applications from specific operations and HTTP verbs from the calling applications, you can define an access control policy specification in configuration.
Using access control, you can configure policies that restrict what operations _calling_ applications can perform, via service invocation, on the _called_ application. You can define an access control policy specification in the Configuration schema to limit access to a called application from:
- Specific operations, and
- HTTP verbs used by the calling applications.
An access control policy is specified in configuration and be applied to Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applications to access to the called app.
An access control policy is specified in Configuration and applied to the Dapr sidecar for the _called_ application. Access to the called app is based on the matched policy action.
## Concepts
You can provide a default global action for all calling applications. If no access control policy is specified, the default behavior is to allow all calling applications to access the called app.
**TrustDomain** - A "trust domain" is a logical group to manage trust relationships. Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert.
[See examples of access policies.](#example-scenarios)
**App Identity** - Dapr requests the sentry service to generate a [SPIFFE](https://spiffe.io/) id for all applications and this id is attached in the TLS cert. The SPIFFE id is of the format: `**spiffe://\<trustdomain>/ns/\<namespace\>/\<appid\>**`. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched.
## Terminology
### `trustDomain`
A "trust domain" is a logical group that manages trust relationships. Every application is assigned a trust domain, which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert.
### App Identity
Dapr requests the sentry service to generate a [SPIFFE](https://spiffe.io/) ID for all applications. This ID is attached in the TLS cert.
The SPIFFE ID is of the format: `spiffe://<trustdomain>/ns/<namespace>/<appid>`.
For matching policies, the trust domain, namespace, and app ID values of the calling app are extracted from the SPIFFE ID in the TLS cert of the calling app. These values are matched against the trust domain, namespace, and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched.
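For example (illustrative values), a calling app with ID `app1`, deployed in the `default` namespace under the `public` trust domain, would carry the SPIFFE ID:

```
spiffe://public/ns/default/app1
```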
## Configuration properties
The following tables lists the different properties for access control, policies and operations:
The following tables list the different properties for access control, policies, and operations:
### Access Control
| Property | Type | Description |
|---------------|--------|-------------|
| defaultAction | string | Global default action when no other policy is matched
| trustDomain | string | Trust domain assigned to the application. Default is "public".
| policies | string | Policies to determine what operations the calling app can do on the called app
| `defaultAction` | string | Global default action when no other policy is matched
| `trustDomain` | string | Trust domain assigned to the application. Default is "public".
| `policies` | string | Policies to determine what operations the calling app can do on the called app
### Policies
| Property | Type | Description |
|---------------|--------|-------------|
| app | string | AppId of the calling app to allow/deny service invocation from
| namespace | string | Namespace value that needs to be matched with the namespace of the calling app
| trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public"
| defaultAction | string | App level default action in case the app is found but no specific operation is matched
| operations | string | operations that are allowed from the calling app
| `app` | string | AppId of the calling app to allow/deny service invocation from
| `namespace` | string | Namespace value that needs to be matched with the namespace of the calling app
| `trustDomain` | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public"
| `defaultAction` | string | App level default action in case the app is found but no specific operation is matched
| `operations` | string | operations that are allowed from the calling app
### Operations
| Property | Type | Description |
| -------- | ------ | ------------------------------------------------------------ |
| name | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used in a path to match. Wildcard "\**" can be used to match under multiple paths. |
| httpVerb | list | List specific http verbs that can be used by the calling app. Wildcard "\*" can be used to match any http verb. Unused for grpc invocation. |
| action | string | Access modifier. Accepted values "allow" (default) or "deny" |
| `name` | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used in a path to match. Wildcard "\**" can be used to match under multiple paths. |
| `httpVerb` | list | List specific http verbs that can be used by the calling app. Wildcard "\*" can be used to match any http verb. Unused for grpc invocation. |
| `action` | string | Access modifier. Accepted values "allow" (default) or "deny" |
## Policy rules
1. If no access policy is specified, the default behavior is to allow all apps to access to all methods on the called app
2. If no global default action is specified and no app specific policies defined, the empty access policy is treated like no access policy specified and the default behavior is to allow all apps to access to all methods on the called app.
3. If no global default action is specified but some app specific policies have been defined, then we resort to a more secure option of assuming the global default action to deny access to all methods on the called app.
4. If an access policy is defined and if the incoming app credentials cannot be verified, then the global default action takes effect.
5. If either the trust domain or namespace of the incoming app do not match the values specified in the app policy, the app policy is ignored and the global default action takes effect.
1. If no access policy is specified, the default behavior is to allow all apps to access all methods on the called app.
1. If no global default action is specified and no app-specific policies are defined, the empty access policy is treated like no access policy is specified. The default behavior is to allow all apps to access all methods on the called app.
1. If no global default action is specified but some app-specific policies have been defined, Dapr resorts to the more secure option of assuming the global default action denies access to all methods on the called app.
1. If an access policy is defined and the incoming app credentials cannot be verified, the global default action takes effect.
1. If either the trust domain or namespace of the incoming app does not match the values specified in the app policy, the app policy is ignored and the global default action takes effect.
## Policy priority
The action corresponding to the most specific policy matched takes effect as ordered below:
1. Specific HTTP verbs in the case of HTTP or the operation level action in the case of GRPC.
2. The default action at the app level
3. The default action at the global level
1. The default action at the app level
1. The default action at the global level
## Example scenarios
Below are some example scenarios for using access control list for service invocation. See [configuration guidance]({{< ref "configuration-concept.md" >}}) to understand the available configuration settings for an application sidecar.
<font size=5>Scenario 1: Deny access to all apps except where trustDomain = public, namespace = default, appId = app1</font>
### Scenario 1:
With this configuration, all calling methods with appId = app1 are allowed and all other invocation requests from other applications are denied
Deny access to all apps except where `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`
With this configuration, all calling methods with `appId` = `app1` are allowed. All other invocation requests from other applications are denied.
```yaml
apiVersion: dapr.io/v1alpha1
@ -85,9 +101,11 @@ spec:
namespace: "default"
```
<font size=5>Scenario 2: Deny access to all apps except trustDomain = public, namespace = default, appId = app1, operation = op1</font>
### Scenario 2:
With this configuration, only method op1 from appId = app1 is allowed and all other method requests from all other apps, including other methods on app1, are denied
Deny access to all apps except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `op1`
With this configuration, only the method `op1` from `appId` = `app1` is allowed. All other method requests from all other apps, including other methods on `app1`, are denied.
```yaml
apiVersion: dapr.io/v1alpha1
@ -109,12 +127,16 @@ spec:
action: allow
```
<font size=5>Scenario 3: Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched</font>
### Scenario 3:
With this configuration, the only scenarios below are allowed access and and all other method requests from all other apps, including other methods on app1 or app2, are denied
* trustDomain = public, namespace = default, appID = app1, operation = op1, http verb = POST/PUT
* trustDomain = "myDomain", namespace = "ns1", appID = app2, operation = op2 and application protocol is GRPC
, only HTTP verbs POST/PUT on method op1 from appId = app1 are allowed and all other method requests from all other apps, including other methods on app1, are denied
Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched
With this configuration, only the scenarios below are allowed access. All other method requests from all other apps, including other methods on `app1` or `app2`, are denied.
- `trustDomain` = `public`, `namespace` = `default`, `appID` = `app1`, `operation` = `op1`, `httpVerb` = `POST`/`PUT`
- `trustDomain` = `"myDomain"`, `namespace` = `"ns1"`, `appID` = `app2`, `operation` = `op2` and application protocol is GRPC
Only the HTTP verbs `POST`/`PUT` on method `op1` from `appId` = `app1` are allowed. All other method requests from all other apps, including other methods on `app1`, are denied.
```yaml
apiVersion: dapr.io/v1alpha1
@ -143,7 +165,9 @@ spec:
action: allow
```
<font size=5>Scenario 4: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/*, all http verbs</font>
### Scenario 4:
Allow access to all methods except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `/op1/*`, all `httpVerb`
```yaml
apiVersion: dapr.io/v1alpha1
@ -165,9 +189,11 @@ spec:
action: deny
```
<font size=5>Scenario 5: Allow access to all methods for trustDomain = public, namespace = ns1, appId = app1 and deny access to all methods for trustDomain = public, namespace = ns2, appId = app1</font>
### Scenario 5:
This scenario shows how applications with the same app ID but belonging to different namespaces can be specified
Allow access to all methods for `trustDomain` = `public`, `namespace` = `ns1`, `appId` = `app1` and deny access to all methods for `trustDomain` = `public`, `namespace` = `ns2`, `appId` = `app1`
This scenario shows how applications with the same app ID, but belonging to different namespaces, can be specified.
```yaml
apiVersion: dapr.io/v1alpha1
@ -189,7 +215,9 @@ spec:
namespace: "ns2"
```
<font size=5>Scenario 6: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/**/a, all http verbs</font>
### Scenario 6:
Allow access to all methods except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `/op1/**/a`, all `httpVerb`
```yaml
apiVersion: dapr.io/v1alpha1
@ -211,14 +239,15 @@ spec:
action: deny
```
## Hello world examples
## "hello world" examples
These examples show how to apply access control to the [hello world](https://github.com/dapr/quickstarts#quickstarts) quickstart samples where a python app invokes a node.js app.
Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE id for authentication, which means the Sentry service either has to be running locally or deployed to your hosting environment such as a Kubernetes cluster.
In these examples, you learn how to apply access control to the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials) tutorials.
The nodeappconfig example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the python app is in the `myDomain` trust domain and `default` namespace. The nodeapp is in the `public` trust domain.
Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE ID for authentication. This means the Sentry service either has to be running locally or deployed to your hosting environment, such as a Kubernetes cluster.
**nodeappconfig.yaml**
The `nodeappconfig` example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the Python app is in the `myDomain` trust domain and `default` namespace. The Node.js app is in the `public` trust domain.
### nodeappconfig.yaml
```yaml
apiVersion: dapr.io/v1alpha1
@ -242,7 +271,7 @@ spec:
action: deny
```
**pythonappconfig.yaml**
### pythonappconfig.yaml
```yaml
apiVersion: dapr.io/v1alpha1
@ -258,95 +287,119 @@ spec:
```
### Self-hosted mode
This example uses the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) quickstart.
The following steps run the Sentry service locally with mTLS enabled, set up necessary environment variables to access certificates, and then launch both the node app and python app each referencing the Sentry service to apply the ACLs.
When walking through this tutorial, you:
- Run the Sentry service locally with mTLS enabled
- Set up necessary environment variables to access certificates
- Launch both the Node app and Python app each referencing the Sentry service to apply the ACLs
1. Follow these steps to run the [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled
#### Prerequisites
2. In a command prompt, set these environment variables:
- Become familiar with running [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled
- Clone the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) tutorial
#### Run the Node.js app
1. In a command prompt, set these environment variables:
{{< tabs "Linux/MacOS" Windows >}}
{{% codetab %}}
```bash
export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default
```
```bash
export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default
```
{{% /codetab %}}
{{% codetab %}}
```powershell
$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"
```
{{% codetab %}}
```powershell
$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"
```
{{% /codetab %}}
{{< /tabs >}}
3. Run daprd to launch a Dapr sidecar for the node.js app with mTLS enabled, referencing the local Sentry service:
1. Run daprd to launch a Dapr sidecar for the Node.js app with mTLS enabled, referencing the local Sentry service:
```bash
daprd --app-id nodeapp --dapr-grpc-port 50002 --dapr-http-port 3501 --log-level debug --app-port 3000 --enable-mtls --sentry-address localhost:50001 --config nodeappconfig.yaml
```
4. Run the node app in a separate command prompt:
1. Run the Node.js app in a separate command prompt:
```bash
node app.js
```
5. In another command prompt, set these environment variables:
#### Run the Python app
1. In another command prompt, set these environment variables:
{{< tabs "Linux/MacOS" Windows >}}
{{% codetab %}}
```bash
export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default
```
```bash
export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default
```
{{% /codetab %}}
{{% codetab %}}
```powershell
$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"
```
```
{{% /codetab %}}
{{< /tabs >}}
6. Run daprd to launch a Dapr sidecar for the python app with mTLS enabled, referencing the local Sentry service:
1. Run daprd to launch a Dapr sidecar for the Python app with mTLS enabled, referencing the local Sentry service:
```bash
daprd --app-id pythonapp --dapr-grpc-port 50003 --metrics-port 9092 --log-level debug --enable-mtls --sentry-address localhost:50001 --config pythonappconfig.yaml
```
7. Run the python app in a separate command prompt:
1. Run the Python app in a separate command prompt:
```bash
python app.py
```
8. You should see the calls to the node app fail in the python app command prompt based due to the **deny** operation action in the nodeappconfig file. Change this action to **allow** and re-run the apps and you should then see this call succeed.
You should see the calls to the Node.js app fail in the Python app command prompt, due to the **deny** operation action in the `nodeappconfig` file. Change this action to **allow** and re-run the apps to see this call succeed.
### Kubernetes mode
This example uses the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes/README.md) quickstart.
You can create and apply the above configuration files `nodeappconfig.yaml` and `pythonappconfig.yaml` as described in the [configuration]({{< ref "configuration-concept.md" >}}) to the Kubernetes deployments.
#### Prerequisites
For example, below is how the pythonapp is deployed to Kubernetes in the default namespace with this pythonappconfig configuration file.
Do the same for the nodeapp deployment and then look at the logs for the pythonapp to see the calls fail due to the **deny** operation action set in the nodeappconfig file. Change this action to **allow** and re-deploy the apps and you should then see this call succeed.
- Become familiar with running [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled
- Clone the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes/README.md) tutorial
#### Configure the Node.js and Python apps
You can create and apply the above [`nodeappconfig.yaml`](#nodeappconfigyaml) and [`pythonappconfig.yaml`](#pythonappconfigyaml) configuration files, as described in the [configuration]({{< ref "configuration-concept.md" >}}).
For example, the Kubernetes Deployment below is how the Python app is deployed to Kubernetes in the default namespace with this `pythonappconfig` configuration file.
Do the same for the Node.js deployment and look at the logs for the Python app to see the calls fail due to the **deny** operation action set in the `nodeappconfig` file.
Change this action to **allow** and re-deploy the apps to see this call succeed.
##### Deployment YAML example
```yaml
apiVersion: apps/v1
@ -375,9 +428,14 @@ spec:
image: dapriosamples/hello-k8s-python:edge
```
## Community call demo
## Demo
Watch this [video](https://youtu.be/j99RN_nxExA?t=1108) on how to apply access control list for service invocation.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="688" height="430" src="https://www.youtube-nocookie.com/embed/j99RN_nxExA?start=1108" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
</div>
## Next steps
{{< button text="Dapr APIs allow list" page="api-allowlist" >}}

View File

@ -6,23 +6,21 @@ weight: 7000
description: "How to specify and enable preview features"
---
## Overview
Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration.
[Preview features]({{< ref support-preview-features >}}) in Dapr are considered experimental when they are first released. These preview features require you to explicitly opt-in to use them. You specify this opt-in in Dapr's Configuration file.
Preview features are enabled on a per application basis by setting configuration when running an application instance.
### Preview features
The current list of preview features can be found [here]({{<ref support-preview-features>}}).
## Configuration properties
The `features` section under the `Configuration` spec contains the following properties:
| Property | Type | Description |
|----------------|--------|-------------|
|name|string|The name of the preview feature that is enabled/disabled
|enabled|bool|Boolean specifying if the feature is enabled or disabled
|`name`|string|The name of the preview feature that is enabled/disabled
|`enabled`|bool|Boolean specifying if the feature is enabled or disabled
## Enabling a preview feature
Preview features are specified in the configuration. Here is an example of a full configuration that contains multiple features:
```yaml
@ -42,7 +40,11 @@ spec:
enabled: true
```
### Standalone
{{< tabs Self-hosted Kubernetes >}}
<!--self-hosted-->
{{% codetab %}}
To enable preview features when running Dapr locally, either update the default configuration or specify a separate config file using `dapr run`.
The default Dapr config is created when you run `dapr init`, and is located at:
@ -55,8 +57,11 @@ Alternately, you can update preview features on all apps run locally by specifyi
dapr run --app-id myApp --config ./previewConfig.yaml ./app
```
{{% /codetab %}}
<!--kubernetes-->
{{% codetab %}}
### Kubernetes
In Kubernetes mode, the configuration must be provided via a configuration component. Using the same configuration as above, apply it via `kubectl`:
```bash
@ -94,3 +99,11 @@ spec:
- containerPort: 3000
imagePullPolicy: Always
```
{{% /codetab %}}
{{< /tabs >}}
## Next steps
{{< button text="Configuration schema" page="configuration-schema" >}}

View File

@ -3,12 +3,19 @@ type: docs
title: "How-To: Limit the secrets that can be read from secret stores"
linkTitle: "Limit secret store access"
weight: 3000
description: "To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration resource with restrictive permissions."
description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions."
description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions."
---
In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components]({{< ref "component-scopes.md">}})), a named secret store component itself can be scoped to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to access only specific secrets.
In addition to [scoping which applications can access a given component]({{< ref "component-scopes.md">}}), you can also scope a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets.
Follow [these instructions]({{< ref "configuration-overview.md" >}}) to define a configuration resource.
For more information about configuring a Configuration resource:
- [Configuration overview]({{< ref configuration-overview.md >}})
- [Configuration schema]({{< ref configuration-schema.md >}})
## Configure secrets access
@ -38,38 +45,64 @@ When an `allowedSecrets` list is present with at least one element, only those s
## Permission priority
The `allowedSecrets` and `deniedSecrets` list values take priorty over the `defaultAccess`.
The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess`. See how this works in the following example scenarios:
| Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
|----- | ------- | -----------| ----------| ------------
| 1 - Only default access | deny/allow | empty | empty | deny/allow
| 2 - Default deny with allowed list | deny | ["s1"] | empty | only "s1" can be accessed
| 3 - Default allow with denied list | allow | empty | ["s1"] | only "s1" cannot be accessed
| 4 - Default allow with allowed list | allow | ["s1"] | empty | only "s1" can be accessed
| 5 - Default deny with denied list | deny | empty | ["s1"] | deny
| 6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed
| | Scenarios | `defaultAccess` | `allowedSecrets` | `deniedSecrets` | `permission`
|--| ----- | ------- | -----------| ----------| ------------
| 1 | Only default access | `deny`/`allow` | empty | empty | `deny`/`allow`
| 2 | Default deny with allowed list | `deny` | [`"s1"`] | empty | only `"s1"` can be accessed
| 3 | Default allow with denied list | `allow` | empty | [`"s1"`] | only `"s1"` cannot be accessed
| 4 | Default allow with allowed list | `allow` | [`"s1"`] | empty | only `"s1"` can be accessed
| 5 | Default deny with denied list | `deny` | empty | [`"s1"`] | `deny`
| 6 | Default deny/allow with both lists | `deny`/`allow` | [`"s1"`] | [`"s2"`] | only `"s1"` can be accessed
## Examples
### Scenario 1 : Deny access to all secrets for a secret store
### Scenario 1: Deny access to all secrets for a secret store
In Kubernetes cluster, the native Kubernetes secret store is added to Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
In a Kubernetes cluster, the native Kubernetes secret store is added to your Dapr application by default. In some scenarios, it may be necessary to deny access to Dapr secrets for a given application. To add this configuration:
Define the following `appconfig.yaml` and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
1. Define the following `appconfig.yaml`.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
secrets:
scopes:
- storeName: kubernetes
defaultAccess: deny
```
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
spec:
secrets:
scopes:
- storeName: kubernetes
defaultAccess: deny
```
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview >}}), and add the following annotation to the application pod.
1. Apply it to the Kubernetes cluster using the following command:
```bash
kubectl apply -f appconfig.yaml
```
For applications that you need to deny access to the Kubernetes secret store, follow [the Kubernetes instructions]({{< ref kubernetes-overview >}}), adding the following annotation to the application pod.
```yaml
dapr.io/config: appconfig
@ -77,7 +110,8 @@ dapr.io/config: appconfig
With this defined, the application no longer has access to Kubernetes secret store.
### Scenario 2 : Allow access to only certain secrets in a secret store
### Scenario 2: Allow access to only certain secrets in a secret store
To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`:
@ -94,7 +128,8 @@ spec:
allowedSecrets: ["secret1", "secret2"]
```
This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.
This example defines configuration for secret store named `vault`. The default access to the secret store is `deny`. Meanwhile, some secrets are accessible by the application based on the `allowedSecrets` list. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.
### Scenario 3: Deny access to certain sensitive secrets in a secret store
@ -113,4 +148,13 @@ spec:
deniedSecrets: ["secret1", "secret2"]
```
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.
This configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault`, while allowing access to all other secrets. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.
## Next steps
{{< button text="Service invocation access control" page="invoke-allowlist" >}}

View File

@ -16,6 +16,7 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
- [AWS CLI](https://aws.amazon.com/cli/)
- [eksctl](https://eksctl.io/)
- [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
- [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/)
## Deploy an EKS cluster
@ -25,20 +26,57 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
aws configure
```
1. Create an EKS cluster. To use a specific version of Kubernetes, use `--version` (1.13.x or newer version required).
1. Create a new file called `cluster-config.yaml` and add the content below to it, replacing `[your_cluster_name]`, `[your_cluster_region]`, and `[your_k8s_version]` with the appropriate values:
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
name: [your_cluster_name]
region: [your_cluster_region]
version: [your_k8s_version]
tags:
karpenter.sh/discovery: [your_cluster_name]
iam:
withOIDC: true
managedNodeGroups:
- name: mng-od-4vcpu-8gb
desiredCapacity: 2
minSize: 1
maxSize: 5
instanceType: c5.xlarge
privateNetworking: true
addons:
- name: vpc-cni
attachPolicyARNs:
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- name: coredns
version: latest
- name: kube-proxy
version: latest
- name: aws-ebs-csi-driver
wellKnownPolicies:
ebsCSIController: true
```
1. Create the cluster by running the following command:
```bash
eksctl create cluster --name [your_eks_cluster_name] --region [your_aws_region] --version [kubernetes_version] --vpc-private-subnets [subnet_list_seprated_by_comma] --without-nodegroup
eksctl create cluster -f cluster-config.yaml
```
Change the values for `vpc-private-subnets` to meet your requirements. You can also add additional IDs. You must specify at least two subnet IDs. If you'd rather specify public subnets, you can change `--vpc-private-subnets` to `--vpc-public-subnets`.
1. Verify kubectl context:
1. Verify the kubectl context:
```bash
kubectl config current-context
```
## Add Dapr requirements for sidecar access and default storage class:
1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000 (a full command sketch is shown after these steps).
```bash
@ -49,11 +87,37 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
--source-group [your_security_group]
```
2. Add a default storage class if you don't have one:
```bash
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
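For step 1, the full command might look like the following sketch, assuming the rule is created with `aws ec2 authorize-security-group-ingress`; replace the region and security group placeholders with your own values:

```bash
aws ec2 authorize-security-group-ingress \
  --region [your_aws_region] \
  --group-id [your_security_group] \
  --protocol tcp \
  --port 4000 \
  --source-group [your_security_group]
```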
## Install Dapr
Install Dapr on your cluster by running:
```bash
dapr init -k
```
You should see the following response:
```bash
⌛ Making the jump to hyperspace...
Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced
Container images will be pulled from Docker Hub
✅ Deploying the Dapr control plane with latest version to your cluster...
✅ Deploying the Dapr dashboard with latest version to your cluster...
✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started
```
## Troubleshooting
### Access permissions
If you face any access permissions, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile:
If you face any access permission issues, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile. More information is available [here](https://repost.aws/knowledge-center/eks-api-server-unauthorized-error):
```bash
aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]

View File

@ -231,6 +231,19 @@ You can install Dapr on Kubernetes using a Helm v3 chart.
--wait
```
To install in **high availability** mode for select services, independently of the global HA setting:
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set global.ha.enabled=false \
--set dapr_scheduler.ha=true \
--set dapr_placement.ha=true \
--wait
```
See [Guidelines for production ready deployments on Kubernetes]({{< ref kubernetes-production.md >}}) for more information on installing and upgrading Dapr using Helm.
### (optional) Install the Dapr dashboard as part of the control plane

View File

@ -7,10 +7,123 @@ description: "Configure Scheduler to persist its database to make it resilient t
---
The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution.
By default, the Scheduler service database writes this data to a Persistent Volume Claim of 1Gb of size using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need additional configuration in some deployments or for a production environment.
By default, the Scheduler service database writes data to a Persistent Volume Claim of size `1Gi`, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need [additional configuration](#storage-class) if a default StorageClass is not available or when running a production environment.
{{% alert title="Warning" color="warning" %}}
The default storage size for the Scheduler is `1Gi`, which is likely not sufficient for most production deployments.
Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}) when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled, and the [Jobs API]({{< ref jobs_api.md >}}).
You may want to consider reinstalling Dapr with a larger Scheduler storage of at least `16Gi`.
For more information, see the [ETCD Storage Disk Size](#etcd-storage-disk-size) section below.
{{% /alert %}}
## Production Setup
### ETCD Storage Disk Size
The default storage size for the Scheduler is `1Gi`.
This size is likely not sufficient for most production deployments.
When the storage size is exceeded, the Scheduler will log an error similar to the following:
```
error running scheduler: etcdserver: mvcc: database space exceeded
```
Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and the data payload size of your application jobs.
The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) (with the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature enabled) map one-to-one to the usage of your applications.
Workflows (when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled) create a large number of jobs as Actor Reminders; however, these jobs are short-lived, matching the lifecycle of each workflow execution.
The data payload of jobs created by Workflows is typically empty or small.
The Scheduler uses Etcd as its storage backend database.
By design, Etcd persists historical transactions and data in the form of [Write-Ahead Logs (WAL) and snapshots](https://etcd.io/docs/v3.5/learning/persistent-storage-files/).
This means the actual disk usage of the Scheduler will be higher than the current observable database state, often by several multiples.
### Setting the Storage Size on Installation
If you need to increase an **existing** Scheduler storage size, see the [Increase Scheduler Storage Size](#increase-existing-scheduler-storage-size) section below.
To increase the storage size (in this example, `16Gi`) for a **fresh** Dapr installation, you can use the following command:
{{< tabs "Dapr CLI" "Helm" >}}
<!-- Dapr CLI -->
{{% codetab %}}
```bash
dapr init -k --set dapr_scheduler.cluster.storageSize=16Gi --set dapr_scheduler.etcdSpaceQuota=16Gi
```
{{% /codetab %}}
<!-- Helm -->
{{% codetab %}}
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set dapr_scheduler.cluster.storageSize=16Gi \
--set dapr_scheduler.etcdSpaceQuota=16Gi \
--wait
```
{{% /codetab %}}
{{< /tabs >}}
#### Increase existing Scheduler Storage Size
{{% alert title="Warning" color="warning" %}}
Not all storage providers support dynamic volume expansion.
Please see your storage provider documentation to determine if this feature is supported, and what to do if it is not.
{{% /alert %}}
By default, a Persistent Volume and Persistent Volume Claim of size `1Gi` is created against the [default `standard` storage class](#storage-class) for each Scheduler replica.
These will look similar to the following; in this example, the Scheduler is running in HA mode.
```
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 Bound pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO standard <unset> 3m25s
dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-1 Bound pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO standard <unset> 3m25s
dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-2 Bound pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO standard <unset> 3m25s
```
```
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-0 standard <unset> 4m24s
pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-2 standard <unset> 4m24s
pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-1 standard <unset> 4m24s
```
To expand the storage size of the Scheduler, follow these steps:
1. First, ensure that the storage class supports volume expansion, and that the `allowVolumeExpansion` field is set to `true` if it is not already.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: standard
provisioner: my.driver
allowVolumeExpansion: true
...
```
2. Delete the Scheduler StatefulSet whilst preserving the Bound Persistent Volume Claims.
```bash
kubectl delete sts -n dapr-system dapr-scheduler-server --cascade=orphan
```
3. Increase the size of the Persistent Volume Claims to the desired size by editing the `spec.resources.requests.storage` field (a non-interactive patch sketch follows these steps).
Again, in this example we assume the Scheduler is running in HA mode with 3 replicas.
```bash
kubectl edit pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2
```
4. Recreate the Scheduler StatefulSet by [installing Dapr with the desired storage size](#setting-the-storage-size-on-installation).
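As an alternative to interactively editing each claim in step 3, you can patch the claims directly. A minimal sketch, assuming 3 replicas and a target size of `16Gi`:

```bash
# Patch each Scheduler PVC to request the new storage size
for i in 0 1 2; do
  kubectl patch pvc -n dapr-system "dapr-scheduler-data-dir-dapr-scheduler-server-$i" \
    --type merge -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'
done
```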
### Storage Class
If your Kubernetes deployment does not have a default storage class, or you are configuring a production cluster, defining a storage class is required.
A persistent volume is backed by a real disk that is provided by the hosted Cloud Provider or Kubernetes infrastructure platform.
@ -59,8 +172,8 @@ helm upgrade --install dapr dapr/dapr \
## Ephemeral Storage
Scheduler can be optionally made to use Ephemeral storage, which is in-memory storage which is **not** resilient to restarts, i.e. all Job data will be lost after a Scheduler restart.
This is useful for deployments where storage is not available or required, or for testing purposes.
When running in non-HA mode, the Scheduler can optionally be made to use ephemeral storage, which is in-memory storage that is **not** resilient to restarts. For example, all job data is lost after a Scheduler restart.
This is useful in non-production deployments or for testing where storage is not available or required.
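A minimal install sketch, assuming the Helm chart exposes the `dapr_scheduler.cluster.inMemoryStorage` value (verify the value name against your chart version):

```bash
dapr init -k --set dapr_scheduler.cluster.inMemoryStorage=true
```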
{{% alert title="Note" color="primary" %}}
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated without the persistent volume.

View File

@ -95,6 +95,25 @@ For a new Dapr deployment, HA mode can be set with both:
For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}).
### Individual service HA Helm configuration
You can configure HA mode via Helm across all services by setting the `global.ha.enabled` flag to `true`. By default, `--set global.ha.enabled=true` is fully respected and cannot be overridden, meaning you cannot run the Placement or Scheduler service as a single instance while global HA is enabled.
> **Note:** HA for scheduler and placement services is not the default setting.
To scale scheduler and placement to three instances independently of the `global.ha.enabled` flag, set `global.ha.enabled` to `false` and `dapr_scheduler.ha` and `dapr_placement.ha` to `true`. For example:
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set global.ha.enabled=false \
--set dapr_scheduler.ha=true \
--set dapr_placement.ha=true \
--wait
```
## Setting cluster critical priority class name for control plane services
In some scenarios, nodes may have memory and/or cpu pressure and the Dapr control plane pods might get selected
@ -260,6 +279,22 @@ Verify your production-ready deployment includes the following settings:
1. Dapr supports and is enabled to **scope components for certain applications**. This is not a required practice. [Learn more about component scopes]({{< ref "component-scopes.md" >}}).
## Recommended Placement service configuration
The [Placement service]({{< ref "placement.md" >}}) is a component in Dapr, responsible for disseminating information about actor addresses to all Dapr sidecars via a placement table (more information on this can be found [here]({{< ref "actors-features-concepts.md#actor-placement-service" >}})).
When running in production, it's recommended to configure the Placement service with the following values:
1. **High availability**. Ensure the Placement service is highly available (three replicas) and can survive individual node failures. Helm chart value: `dapr_placement.ha=true`
2. **In-memory logs**. Use in-memory Raft log store for faster writes. The tradeoff is more placement table disseminations (and thus, network traffic) in an eventual Placement service pod failure. Helm chart value: `dapr_placement.cluster.forceInMemoryLog=true`
3. **No metadata endpoint**. Disable the unauthenticated `/placement/state` endpoint which exposes placement table information for the Placement service. Helm chart value: `dapr_placement.metadataEnabled=false`
4. **Timeouts**. Control the sensitivity of network connectivity between the Placement service and the sidecars using the timeout values below. Default values are set, but you can adjust these based on your network conditions.
1. `dapr_placement.keepAliveTime` sets the interval at which the Placement service sends [keep alive](https://grpc.io/docs/guides/keepalive/) pings to Dapr sidecars on the gRPC stream to check if the connection is still alive. Lower values will lead to shorter actor rebalancing time in case of pod loss/restart, but higher network traffic during normal operation. Accepts values between `1s` and `10s`. Default is `2s`.
2. `dapr_placement.keepAliveTimeout` sets the timeout period for Dapr sidecars to respond to the Placement service's [keep alive](https://grpc.io/docs/guides/keepalive/) pings before the Placement service closes the connection. Lower values will lead to shorter actor rebalancing time in case of pod loss/restart, but higher network traffic during normal operation. Accepts values between `1s` and `10s`. Default is `3s`.
3. `dapr_placement.disseminateTimeout` sets the timeout period for dissemination to be delayed after actor membership change (usually related to pod restarts) to avoid excessive dissemination during multiple pod restarts. Higher values will reduce the frequency of dissemination, but delay the table dissemination. Accepts values between `1s` and `3s`. Default is `2s`.
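Taken together, these recommendations map to a Helm invocation like the following sketch; the timeout values shown are the defaults discussed above:

```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set dapr_placement.ha=true \
--set dapr_placement.cluster.forceInMemoryLog=true \
--set dapr_placement.metadataEnabled=false \
--set dapr_placement.keepAliveTime=2s \
--set dapr_placement.keepAliveTimeout=3s \
--set dapr_placement.disseminateTimeout=2s \
--wait
```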
## Service account tokens
By default, Kubernetes mounts a volume containing a [Service Account token](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in each container. Applications can use this token, whose permissions vary depending on the configuration of the cluster and namespace, among other things, to perform API calls against the Kubernetes control plane.

View File

@ -138,6 +138,18 @@ services:
command: ["./placement", "--port", "50006"]
ports:
- "50006:50006"
scheduler:
image: "daprio/dapr"
command: ["./scheduler", "--port", "50007"]
ports:
- "50007:50007"
# WARNING - This is a tmpfs volume, your state will not be persisted across restarts
volumes:
- type: tmpfs
target: /data
tmpfs:
size: "10000"
networks:
hello-dapr: null
@ -147,6 +159,8 @@ services:
To further learn how to run Dapr with Docker Compose, see the [Docker-Compose Sample](https://github.com/dapr/samples/tree/master/hello-docker-compose).
The above example also includes a scheduler definition that uses a non-persistent data store for testing and development purposes.
## Run on Kubernetes
If your deployment target is Kubernetes please use Dapr's first-class integration. Refer to the

View File

@ -6,7 +6,7 @@ weight: 60
description: See and measure the message calls to components and between networked services
---
[The following overview video and demo](https://www.youtube.com/live/0y7ne6teHT4?si=3bmNSSyIEIVSF-Ej&t=9931) demonstrates how observability in Dapr works.
[The following overview video and demo](https://www.youtube.com/watch?v=0y7ne6teHT4&t=12652s) demonstrates how observability in Dapr works.
<iframe width="560" height="315" src="https://www.youtube.com/embed/0y7ne6teHT4?si=iURnLk57t2zN-7zP&amp;start=12653" title="YouTube video player" style="padding-bottom:25px;" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

View File

@ -70,6 +70,38 @@ spec:
enabled: false
```
## Configuring metrics for error codes
You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. [A new metric called `error_code_total` is recorded]({{< ref errors-overview.md >}}), which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories.
Example configuration:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
metrics:
enabled: true
recordErrorCodes: true
```
Example metric:
```json
{
"app_id": "publisher-app",
"category": "state",
"dapr_io_enabled": "true",
"error_code": "ERR_STATE_STORE_NOT_CONFIGURED",
"instance": "10.244.1.64:9090",
"job": "kubernetes-service-endpoints",
"namespace": "my-app",
"node": "my-node",
"service": "publisher-app-dapr"
}
```
## Optimizing HTTP metrics reporting with path matching
When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.

View File

@ -93,13 +93,108 @@ helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring
--set alertmanager.persistence.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
```
For automatic discovery of Dapr targets (Service Discovery), use:
```bash
helm install dapr-prom prometheus-community/prometheus -f values.yaml -n dapr-monitoring --create-namespace
```
### `values.yaml` File
```yaml
alertmanager:
persistence:
enabled: false
pushgateway:
persistentVolume:
enabled: false
server:
persistentVolume:
enabled: false
# Adds additional scrape configurations to prometheus.yml
# Uses service discovery to find Dapr and Dapr sidecar targets
extraScrapeConfigs: |-
- job_name: dapr-sidecars
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: "true"
source_labels:
- __meta_kubernetes_pod_annotation_dapr_io_enabled
- action: keep
regex: "true"
source_labels:
- __meta_kubernetes_pod_annotation_dapr_io_enable_metrics
- action: replace
replacement: ${1}
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
replacement: ${1}
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- action: replace
regex: (.*);daprd
replacement: ${1}-dapr
source_labels:
- __meta_kubernetes_pod_annotation_dapr_io_app_id
- __meta_kubernetes_pod_container_name
target_label: service
- action: replace
replacement: ${1}:9090
source_labels:
- __meta_kubernetes_pod_ip
target_label: __address__
- job_name: dapr
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: keep
regex: dapr
source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_name
- action: keep
regex: dapr
source_labels:
- __meta_kubernetes_pod_label_app_kubernetes_io_part_of
- action: replace
replacement: ${1}
source_labels:
- __meta_kubernetes_pod_label_app
target_label: app
- action: replace
replacement: ${1}
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
replacement: ${1}
source_labels:
- __meta_kubernetes_pod_name
target_label: pod
- action: replace
replacement: ${1}:9090
source_labels:
- __meta_kubernetes_pod_ip
target_label: __address__
```
3. Validation
Ensure Prometheus is running in your cluster.
```bash
kubectl get pods -n dapr-monitoring
```
Expected output:
```bash
NAME READY STATUS RESTARTS AGE
dapr-prom-kube-state-metrics-9849d6cc6-t94p8 1/1 Running 0 4m58s
dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8 2/2 Running 0 4m58s
@ -110,6 +205,22 @@ dapr-prom-prometheus-pushgateway-688665d597-h4xx2 1/1 Running 0
dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 4m58s
```
### Access the Prometheus Dashboard
To view the Prometheus dashboard and check service discovery:
```bash
kubectl port-forward svc/dapr-prom-prometheus-server 9090:80 -n dapr-monitoring
```
Open a browser and visit `http://localhost:9090`. Navigate to **Status** > **Service Discovery** to verify that the Dapr targets are discovered correctly.
<img src="/images/prometheus-web-ui.png" alt="Prometheus Web UI" width="1200">
You can see the `job_name` and its discovered targets.
<img src="/images/prometheus-service-discovery.png" alt="Prometheus Service Discovery" width="1200">
## Example
<div class="embed-responsive embed-responsive-16by9">

View File

@ -35,7 +35,13 @@ If you don't specify a timeout value, the policy does not enforce a time and def
## Retries
With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:
With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy.
{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
{{% /alert %}}
The following retry options are configurable:
| Retry option | Description |
| ------------ | ----------- |
@ -43,6 +49,15 @@ With `retries`, you can define a retry strategy for failed operations, including
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.<br/>Valid values are of the form `200ms`, `15s`, `2m`, etc.<br/> Defaults to `5s`.|
| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow.<br/>Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
| `maxRetries` | The maximum number of retries to attempt. <br/>`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).<br/>Defaults to `-1`. |
| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "429,501-503"<br/>Default: empty string `""` or field is not set. Retries on all HTTP errors. |
| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried.<br/>Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)<br/>Format: `<code>` or range `<start>-<end>`<br/>Example: "1,501-503"<br/>Default: empty string `""` or field is not set. Retries on all gRPC errors. |
{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
The field values should follow the format specified in the field description or in "Example 2" below.
An incorrectly formatted value produces an error log ("Could not read resiliency policy") and the `daprd` startup sequence will proceed.
{{% /alert %}}
The exponential back-off window uses the following formula:
@ -71,7 +86,20 @@ spec:
maxRetries: -1 # Retry indefinitely
```
Example 2:
```yaml
spec:
policies:
retries:
retry5xxOnly:
policy: constant
duration: 5s
maxRetries: 3
matching:
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
```
## Circuit Breakers
@ -82,7 +110,7 @@ Circuit Breaker (CB) policies are used when other applications/services/componen
| `maxRequests` | The maximum number of requests allowed to pass through when the CB is half-open (recovering from failure). Defaults to `1`. |
| `interval` | The cyclical period of time used by the CB to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
| `timeout` | The period of the open state (directly after failure) until the CB switches to half-open. Defaults to `60s`. |
| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. |
| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. Other possible variables are `requests` and `totalFailures`, where `requests` represents the number of either successful or failed calls before the circuit opens and `totalFailures` represents the total (not necessarily consecutive) number of failed attempts before the circuit opens. Example: `requests > 5` and `totalFailures > 3`.|
Example:

View File

@ -15,13 +15,13 @@ description: "List of current alpha and beta APIs"
| Bulk Publish | [Bulk publish proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L59) | `v1.0-alpha1/publish/bulk` | The bulk publish API allows you to publish multiple messages to a topic in a single request. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Bulk Subscribe | [Bulk subscribe proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/appcallback.proto#L57) | N/A | The bulk subscribe application callback receives multiple messages from a topic in a single call. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Cryptography | [Crypto proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L118) | `v1.0-alpha1/crypto` | The cryptography API enables you to perform **high level** cryptography operations for encrypting and decrypting messages. | [Cryptography API]({{< ref "cryptography-overview.md" >}}) | v1.11 |
| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L198-L204) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L212-219) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
| Conversation | [Conversation proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L221-222) | `v1.0-alpha1/conversation` | Converse between different large language models using the conversation API. | [Conversation API]({{< ref "conversation-overview.md" >}}) | v1.15 |
## Beta APIs
| Building block/API | gRPC | HTTP | Description | Documentation | Version introduced |
| ------------------ | ---- | ---- | ----------- | ------------- | ------------------ |
| Workflow | [Workflow proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L151) | `/v1.0-beta1/workflow` | The workflow API enables you to define long running, persistent processes or data flows. | [Workflow API]({{< ref "workflow-overview.md" >}}) | v1.10 |
No current beta APIs.
## Related links

View File

@ -68,6 +68,7 @@ After announcing a future breaking change, the change will happen in 2 releases
| Hazelcast PubSub Component | 1.9.0 | 1.11.0 |
| Twitter Binding Component | 1.10.0 | 1.11.0 |
| NATS Streaming PubSub Component | 1.11.0 | 1.13.0 |
| Workflows API Alpha1 `/v1.0-alpha1/workflows` being deprecated in favor of Workflow Client | 1.15.0 | 1.17.0 |
## Related links

View File

@ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 |
| **Job actor reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, job actor reminders (used for scheduling actor reminders) are enabled through a preview feature and needs a feature flag. | `SchedulerReminders`| [Job actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 |
| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "scheduler.md#actor-reminders" >}}) | v1.14 |

View File

@ -24,7 +24,7 @@ A supported release means:
From the 1.8.0 release onwards three (3) versions of Dapr are supported; the current and previous two (2) versions. Typically these are `MINOR` release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr, you may have to do intermediate upgrades to get to a supported version.
There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading.
There will be at least 13 weeks (3 months) between major.minor version releases, giving users at least a 9 month rolling window for upgrading from a non-supported version. For more details on the release process, read [release cycle and cadence](https://github.com/dapr/community/blob/master/release-process.md).
Patch support is for supported versions (current and previous).
@ -45,6 +45,10 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
| September 16th 2024 | 1.14.4</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
| September 13th 2024 | 1.14.3</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
| September 6th 2024 | 1.14.2</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
| August 14th 2024 | 1.14.1</br> | 1.14.1 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
| August 14th 2024 | 1.14.0</br> | 1.14.0 | Java 1.12.0 </br>Go 1.11.0 </br>PHP 1.2.0 </br>Python 1.14.0 </br>.NET 1.14.0 </br>JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
| May 29th 2024 | 1.13.4</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
@ -134,13 +138,12 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| | 1.8.6 | 1.9.6 |
| | 1.9.6 | 1.10.7 |
| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
| 1.9.0 | N/A | 1.9.6 |
| 1.10.0 | N/A | 1.10.8 |
| 1.11.0 | N/A | 1.11.4 |
| 1.12.0 | N/A | 1.12.4 |
| 1.12.0 to 1.13.0 | N/A | 1.13.4 |
| 1.13.0 | N/A | 1.13.4 |
| 1.13.0 to 1.14.0 | N/A | 1.14.0 |
| 1.9.0 to 1.9.6 | N/A | 1.10.8 |
| 1.10.0 to 1.10.8 | N/A | 1.11.4 |
| 1.11.0 to 1.11.4 | N/A | 1.12.4 |
| 1.12.0 to 1.12.4 | N/A | 1.13.5 |
| 1.13.0 to 1.13.5 | N/A | 1.14.0 |
| 1.14.0 to 1.14.2 | N/A | 1.14.2 |
## Upgrade on Hosting platforms

View File

@ -52,7 +52,7 @@ The people who should have access to read your security report are listed in [`m
code which allows the issue to be reproduced. Explain why you believe this
to be a security issue in Dapr.
2. Put that information into an email. Use a descriptive title.
3. Send the email to [Dapr Maintainers (dapr@dapr.io)](mailto:dapr@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE)
3. Send an email to [Security (security@dapr.io)](mailto:security@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE)
## Response

View File

@ -0,0 +1,74 @@
---
type: docs
title: "Conversation API reference"
linkTitle: "Conversation API"
description: "Detailed documentation on the conversation API"
weight: 1400
---
{{% alert title="Alpha" color="primary" %}}
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
{{% /alert %}}
Dapr provides an API to interact with Large Language Models (LLMs) and enables critical performance and security functionality with features like prompt caching and PII data obfuscation.
## Converse
This endpoint lets you converse with LLMs.
```
POST /v1.0-alpha1/conversation/<llm-name>/converse
```
### URL parameters
| Parameter | Description |
| --------- | ----------- |
| `llm-name` | The name of the LLM component. [See a list of all available conversation components.]({{< ref supported-conversation >}})
### Request body
| Field | Description |
| --------- | ----------- |
| `conversationContext` | Optional identifier of an existing conversation, used to preserve context across multiple requests. |
| `inputs` | One or more inputs (prompts) to send to the LLM, as shown in the request content below. |
| `parameters` | Optional parameters to pass to the LLM component. |
### Request content
```json
REQUEST = {
"inputs": ["what is Dapr", "Why use Dapr"],
"parameters": {},
}
```
### HTTP response codes
Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in dapr code or underlying component
### Response content
```json
RESPONSE = {
"outputs": {
{
"result": "Dapr is distribution application runtime ...",
"parameters": {},
},
{
"result": "Dapr can help developers ...",
"parameters": {},
}
},
}
```
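As a usage sketch, a request to a local sidecar might look like the following; the component name `openai` is illustrative:

```bash
curl -X POST http://localhost:3500/v1.0-alpha1/conversation/openai/converse \
  -H "Content-Type: application/json" \
  -d '{
        "inputs": ["what is Dapr", "Why use Dapr"],
        "parameters": {}
      }'
```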
## Next steps
[Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -20,7 +20,7 @@ This endpoint lets you encrypt a value provided as a byte array using a specifie
### HTTP Request
```
PUT http://localhost:<daprPort>/v1.0/crypto/<crypto-store-name>/encrypt
PUT http://localhost:<daprPort>/v1.0-alpha1/crypto/<crypto-store-name>/encrypt
```
#### URL Parameters
@ -59,7 +59,7 @@ returns an array of bytes with the encrypted payload.
### Examples
```shell
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/encrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "dapr-key-wrap-algorithm: aes-gcm" \
@ -81,7 +81,7 @@ This endpoint lets you decrypt a value provided as a byte array using a specifie
#### HTTP Request
```
PUT curl http://localhost:3500/v1.0/crypto/<crypto-store-name>/decrypt
PUT curl http://localhost:3500/v1.0-alpha1/crypto/<crypto-store-name>/decrypt
```
#### URL Parameters
@ -116,7 +116,7 @@ returns an array of bytes representing the decrypted payload.
### Examples
```bash
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/decrypt \
-X PUT
-H "dapr-key-name: myCryptoKey"\
-H "Content-Type: application/octet-stream" \

View File

@ -1,49 +0,0 @@
---
type: docs
title: "Error codes returned by APIs"
linkTitle: "Error codes"
description: "Detailed reference of the Dapr API error codes"
weight: 1400
---
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, for example:
```
{
"errorCode": "ERR_STATE_GET",
"message": "Requested state key does not exist in state store."
}
```
The following table lists the error codes returned by the Dapr runtime:
| Error Code | Description |
|-----------------------------------|-------------|
| ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means that actor is now hosted in some other service replica.
| ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance.
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor.
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor.
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor.
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor.
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor.
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor.
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor.
| ERR_ACTOR_STATE_GET | Error getting the state for an actor.
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally.
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime.
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message.
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls.
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope.
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found.
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured.
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support.
| ERR_STATE_GET | Error getting a state for state store.
| ERR_STATE_DELETE | Error deleting a state from state store.
| ERR_STATE_SAVE | Error saving a state in state store.
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding.
| ERR_MALFORMED_REQUEST | Error with a malformed request.
| ERR_DIRECT_INVOKE | Error in direct invocation.
| ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body.
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured.
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found.
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready.
| ERR_METADATA_GET | Error parsing the Metadata information.

View File

@ -32,7 +32,7 @@ At least one of `schedule` or `dueTime` must be provided, but they can also be p
Parameter | Description
--------- | -----------
`name` | Name of the job you're scheduling
`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data.
`data` | A JSON serialized value or object.
`schedule` | An optional schedule at which the job is to be run. Details of the format are below.
`dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601.
`repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration.
@ -43,9 +43,13 @@ Parameter | Description
Systemd timer style cron accepts 6 fields:
seconds | minutes | hours | day of month | month | day of week
0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat
--- | --- | --- | --- | --- | ---
0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat
##### Example 1
"0 30 * * * *" - every hour on the half hour
##### Example 2
"0 15 3 * * *" - every day at 03:15
Period string expressions:
@ -63,13 +67,8 @@ Entry | Description | Equivalent
```json
{
"job": {
"data": {
"@type": "type.googleapis.com/google.protobuf.StringValue",
"value": "\"someData\""
},
"dueTime": "30s"
}
"data": "some data",
"dueTime": "30s"
}
```
@ -88,20 +87,14 @@ The following example curl command creates a job, naming the job `jobforjabba` a
```bash
$ curl -X POST \
http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \
-H "Content-Type: application/json"
-H "Content-Type: application/json" \
-d '{
"job": {
"data": {
"@type": "type.googleapis.com/google.protobuf.StringValue",
"value": "Running spice"
},
"schedule": "@every 1m",
"repeats": 5
}
"data": "{\"value\":\"Running spice\"}",
"schedule": "@every 1m",
"repeats": 5
}'
```
## Get job data
Get a job from its name.
@ -137,10 +130,7 @@ $ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Typ
"name": "jobforjabba",
"schedule": "@every 1m",
"repeats": 5,
"data": {
"@type": "type.googleapis.com/google.protobuf.StringValue",
"value": "Running spice"
}
"data": 123
}
```
## Delete a job

View File

@ -302,7 +302,7 @@ other | warning is logged and all messages to be retried
## Message envelope
Dapr pub/sub adheres to version 1.0 of CloudEvents.
Dapr pub/sub adheres to [version 1.0 of CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/spec.md).
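For illustration, a typical Dapr CloudEvents envelope looks like the following sketch; exact fields vary by component and configuration, and the IDs shown are placeholders:

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "checkout-service",
  "id": "9e2a6e1b-7a7e-4e3f-8f4e-3a1d2c5b6f7a",
  "datacontenttype": "application/json",
  "pubsubname": "order-pub-sub",
  "topic": "orders",
  "traceid": "00-113ad9c4e42b27583ae98ba698d54255-e3743e35ff56f219-01",
  "data": {
    "orderId": 100
  }
}
```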
## Related links

View File

@ -6,10 +6,6 @@ description: "Detailed documentation on the workflow API"
weight: 300
---
{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
{{% /alert %}}
Dapr provides users with the ability to interact with workflows and comes with a built-in `dapr` component.
## Start workflow request
@ -17,7 +13,7 @@ Dapr provides users with the ability to interact with workflows and comes with a
Start a workflow instance with the given name and optionally, an instance ID.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<workflowName>/start[?instanceID=<instanceID>]
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<workflowName>/start[?instanceID=<instanceID>]
```
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
@ -57,7 +53,7 @@ The API call will provide a response similar to this:
Terminate a running workflow instance with the given name and instance ID.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/terminate
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/terminate
```
{{% alert title="Note" color="primary" %}}
@ -91,7 +87,7 @@ This API does not return any content.
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceID>/raiseEvent/<eventName>
```
{{% alert title="Note" color="primary" %}}
@ -124,7 +120,7 @@ None.
Pause a running workflow instance.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/pause
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/pause
```
### URL parameters
@ -151,7 +147,7 @@ None.
Resume a paused workflow instance.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/resume
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/resume
```
### URL parameters
@ -178,7 +174,7 @@ None.
Purge the workflow state from your state store with the workflow's instance ID.
```
POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/purge
POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/purge
```
{{% alert title="Note" color="primary" %}}
@ -209,7 +205,7 @@ None.
Get information about a given workflow instance.
```
GET http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>
GET http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>
```
### URL parameters

View File

@ -16,15 +16,17 @@ This table is meant to help users understand the equivalent options for running
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` |
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration resource to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
| `--dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB |
| `--dapr-http-read-buffer-size` | --dapr-http-read-buffer-size | | `dapr.io/http-read-buffer-size` | Increasing max size of http header read buffer in KB to handle when sending multi-KB headers. The default 4 KB. When sending bigger than default 4KB http headers, you should set this to a larger value, for example 16 (for 16KB) |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | `dapr.io/grpc-port` | Sets the Dapr API gRPC port (default `50001`); all cluster services must use the same port for communication |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | HTTP port for the Dapr API to listen on (default `3500`) |
| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | **Deprecated** in favor of `--max-body-size`. Increasing the request max body size to handle large file uploads using http and grpc protocols. Default is `4` MB |
| `--max-body-size` | not supported | | `dapr.io/max-body-size` | Increasing the request max body size to handle large file uploads using http and grpc protocols. Set the value using size units (e.g., `16Mi` for 16MB). The default is `4Mi` |
| `--dapr-http-read-buffer-size` | `--dapr-http-read-buffer-size` | | `dapr.io/http-read-buffer-size` | **Deprecated** in favor of `--read-buffer-size`. Increasing max size of http header read buffer in KB to support larger header values, for example `16` to support headers up to 16KB. Default is `16` for 16KB |
| `--read-buffer-size` | not supported | | `dapr.io/read-buffer-size` | Increasing max size of http header read buffer in KB to support larger header values. Set the value using size units, for example `32Ki` will support headers up to 32KB. Default is `4` for 4KB |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is daprio/daprd:latest. The Dapr sidecar uses this image instead of the latest default image. Use this when building your own custom image of Dapr and or [using an alternative stable Dapr image]({{< ref "support-release-policy.md#build-variations" >}}) |
| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
| `--internal-grpc-port` | not supported | | `dapr.io/internal-grpc-port` | Sets the internal Dapr gRPC port (default `50002`); all cluster services must use the same port for communication |
| `--enable-metrics` | not supported | | configuration spec | Enable [prometheus metric]({{< ref prometheus >}}) (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | [Enable profiling]({{< ref profiling-debugging >}}) |
@ -32,11 +34,11 @@ This table is meant to help users understand the equivalent options for running
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs [logs in JSON format]({{< ref logs >}}). Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the [log level]({{< ref logs-troubleshooting >}}) for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | [Enables API logging]({{< ref "api-logs-troubleshooting.md#configuring-api-logging-in-kubernetes" >}}) for the Dapr sidecar |
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`|
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`. Default value: `-1`, meaning no concurrency limit. |
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime hosting option mode for Dapr, either `"standalone"` or `"kubernetes"` (default `"standalone"`). [Learn more.]({{< ref hosting >}}) |
| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` |
| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Scheduler server. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` |
| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers. <br><br> When no annotation is set, the default value is set by the Sidecar Injector. <br><br> When the annotation is set and the value is a single space (`' '`), or "empty", the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. <br><br> When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` |
| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers. <br><br>When no annotation is set, the default value is set by the Sidecar Injector. <br><br>When the annotation is set and the value is a single space (`' '`), or "empty", the sidecar does not connect to Scheduler server. <br><br>When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` |
| `--actors-service` | not supported | | not supported | Configuration for the service that offers actor placement information. The format is `<name>:<address>`. For example, setting this value to `placement:127.0.0.1:50057,127.0.0.1:50058` is an alternative to using the `--placement-host-address` flag. |
| `--reminders-service` | not supported | | not supported | Configuration for the service that enables actor reminders. The format is `<name>[:<address>]`. Currently, the only supported value is `"default"` (which is also the default value), which uses the built-in reminders subsystem in the Dapr sidecar. |
| `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default `7777`) |
@ -67,6 +69,7 @@ This table is meant to help users understand the equivalent options for running
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`|
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`|
| not supported | not supported | | `dapr.io/env` | List of environment variable to be injected into the sidecar. Strings consisting of key=value pairs separated by a comma.|
| not supported | not supported | | `dapr.io/env-from-secret` | List of environment variables to be injected into the sidecar from a secret. Strings consisting of `"key=secret-name:secret-key"` pairs separated by a comma. |
| not supported | not supported | | `dapr.io/volume-mounts` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-only mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
| not supported | not supported | | `dapr.io/volume-mounts-rw` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-write mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
| `--disable-builtin-k8s-secret-store` | not supported | | `dapr.io/disable-builtin-k8s-secret-store` | Disables BuiltIn Kubernetes secret store. Default value is false. See [Kubernetes secret store component]({{< ref "kubernetes-secret-store.md" >}}) for details. |
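For illustration, the following is a minimal sketch of how a few of these annotations combine on a Kubernetes deployment. The application name, addresses, environment variables, and volume names are placeholders, not values taken from this table, and the container spec is omitted.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        # Comma-separated Scheduler addresses; an empty value skips the Scheduler connection.
        dapr.io/scheduler-host-address: "127.0.0.1:50055,127.0.0.1:50056"
        # A single space (' ') disables the Placement connection when no actors are used.
        dapr.io/placement-host-address: "127.0.0.1:50057,127.0.0.1:50058"
        # key=value pairs injected into the sidecar as environment variables.
        dapr.io/env: "LOG_LEVEL=debug,REGION=eu-west"
        # Read-only volume mounts for the sidecar, as volume:path pairs.
        dapr.io/volume-mounts: "certs-vol:/mnt/certs"
```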

View File

@ -63,6 +63,10 @@ This component supports **output binding** with the following operations:
- `delete` : [Delete blob](#delete-blob)
- `list`: [List blobs](#list-blobs)
The Blob storage component's **input binding** triggers and pushes events using [Azure Event Grid]({{< ref eventgrid.md >}}).
Refer to the [Reacting to Blob storage events](https://learn.microsoft.com/azure/storage/blobs/storage-blob-event-overview) guide for setup details and more information.
### Create blob
To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
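A minimal sketch of such a request body, following the pattern of the other binding examples in this document (the content value is a placeholder and optional metadata keys are omitted):

```json
{
  "operation": "create",
  "data": "YOUR_CONTENT"
}
```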

View File

@ -90,6 +90,21 @@ This component supports **output binding** with the following operations:
- `create`: publishes a message on the Event Grid topic
## Receiving events
You can use the Event Grid binding to receive events from a variety of sources and actions. [Learn more about all of the available event sources and handlers that work with Event Grid.](https://learn.microsoft.com/azure/event-grid/overview)
In the following table, you can find the list of Dapr components that can raise events.
| Event sources | Dapr components |
| ------------- | --------------- |
| [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/) | [Azure Blob Storage binding]({{< ref blobstorage.md >}}) <br/>[Azure Blob Storage state store]({{< ref setup-azure-blobstorage.md >}}) |
| [Azure Cache for Redis](https://learn.microsoft.com/azure/azure-cache-for-redis/cache-overview) | [Redis binding]({{< ref redis.md >}}) <br/>[Redis pub/sub]({{< ref setup-redis-pubsub.md >}}) |
| [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/event-hubs-about) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
| [Azure IoT Hub](https://learn.microsoft.com/azure/iot-hub/iot-concepts-and-iot-hub) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}}) <br/>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
| [Azure Service Bus](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) | [Azure Service Bus binding]({{< ref servicebusqueues.md >}}) <br/>[Azure Service Bus pub/sub topics]({{< ref setup-azure-servicebus-topics.md >}}) and [queues]({{< ref setup-azure-servicebus-queues.md >}}) |
| [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) | [SignalR binding]({{< ref signalr.md >}}) |
## Microsoft Entra ID credentials
The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:
@ -142,7 +157,7 @@ Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
### Testing locally
## Testing locally
- Install [ngrok](https://ngrok.com/download)
- Run locally using a custom port, for example `9000`, for handshakes
@ -160,7 +175,7 @@ ngrok http --host-header=localhost 9000
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```
### Testing on Kubernetes
## Testing on Kubernetes
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).

View File

@ -36,6 +36,8 @@ spec:
value: "namespace"
- name: enableEntityManagement
value: "false"
- name: enableInOrderMessageDelivery
value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
@ -71,7 +73,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `eventHub` | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Microsoft Entra ID authentication or if the connection string doesn't contain an `EntityPath` value | `mytopic` |
| `connectionString` | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | Input/Output | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"`
| `enableInOrderMessageDelivery` | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `resourceGroupName` | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`

View File

@ -63,6 +63,8 @@ spec:
value: true
- name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
value: 5m
- name: escapeHeaders # Optional.
value: false
```
## Spec metadata fields
@ -99,6 +101,7 @@ spec:
| `consumerFetchDefault` | N | Input/Output | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` |
| `heartbeatInterval` | N | Input | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to `"3s"`. | `"5s"` |
| `sessionTimeout` | N | Input | The timeout used to detect client failures when using Kafka's group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to `"10s"`. | `"20s"` |
| `escapeHeaders` | N | Input | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
#### Note
The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka.

View File

@ -56,23 +56,27 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
### Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an AWS IAM enabled user granted the `rds_iam` database role.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the `rds_iam` database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"`
| `accessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"`
| `secretKey` | Y | The secret key associated with the access key. | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"`
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
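Putting the fields above together, a hedged sketch of a PostgreSQL binding component using AWS IAM could look like the following; the component name, connection string, and region are placeholders, and only fields listed above are used.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-postgres-binding
spec:
  type: bindings.postgresql
  version: v1
  metadata:
  - name: useAWSIAM
    value: "true"
  # The connection string names an existing IAM-enabled database user and carries no password;
  # the IAM auth token is generated and rotated at runtime.
  - name: connectionString
    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
  # Optional; kept here for backwards compatibility as noted in the table above.
  - name: awsRegion
    value: "us-east-1"
```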
### Other metadata options
| Field | Required | Binding support |Details | Example |
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-----|---|---------|
| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"`
| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"`
| `timeout` | N | Output | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` |
| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` |
| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"` |
### URL format

View File

@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `redisUsername` | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | `"username"` |
| `useEntraID` | N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#create-a-redis-instance" >}}) | `"true"`, `"false"` |
| `enableTLS` | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| `clientCert` | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` |
| `clientKey` | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` |
| `failover` | N | Output | Property to enable failover configuration. Needs `sentinelMasterName` to be set. Defaults to `"false"` | `"true"`, `"false"`
| `sentinelMasterName` | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| `redeliverInterval` | N | Output | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
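As a hedged sketch, the `clientCert` and `clientKey` fields above can be supplied from a secret store rather than as plain strings; the secret store, secret names, and keys below are placeholders.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-redis-binding
spec:
  type: bindings.redis
  version: v1
  metadata:
  - name: redisHost
    value: "my-redis:6380"
  - name: enableTLS
    value: "true"
  # Client certificate and key pulled from a secret store instead of inline values.
  - name: clientCert
    secretKeyRef:
      name: redis-tls
      key: tls.crt
  - name: clientKey
    secretKeyRef:
      name: redis-tls
      key: tls.key
auth:
  secretStore: my-secret-store
```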

View File

@ -44,6 +44,8 @@ spec:
value: "<bool>"
- name: insecureSSL
value: "<bool>"
- name: storageClass
value: "<string>"
```
{{% alert title="Warning" color="warning" %}}
@ -65,6 +67,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `encodeBase64` | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `"true"` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `"false"` | `"true"`, `"false"` |
| `disableSSL` | N | Output | Allows to connect to non `https://` endpoints. Defaults to `"false"` | `"true"`, `"false"` |
| `insecureSSL` | N | Output | When connecting to `https://` endpoints, accepts invalid or self-signed certificates. Defaults to `"false"` | `"true"`, `"false"` |
| `storageClass` | N | Output | The desired storage class for objects during the create operation. [Valid aws storage class types can be found here](https://aws.amazon.com/s3/storage-classes/) | `STANDARD_IA` |
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
@ -165,10 +168,20 @@ To perform a create operation, invoke the AWS S3 binding with a `POST` method an
```json
{
"operation": "create",
"data": "YOUR_CONTENT"
"data": "YOUR_CONTENT",
"metadata": {
"storageClass": "STANDARD_IA"
}
}
```
For example, you can provide a storage class while using the `create` operation with a Linux curl command:
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA" } }' /
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
#### Share object with a presigned URL
To presign an object with a specified time-to-live, use the `presignTTL` metadata key on a `create` request.
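A hedged sketch of such a request; the `presignTTL` value is an assumed duration string and the content is a placeholder.

```json
{
  "operation": "create",
  "data": "YOUR_BASE_64_CONTENT",
  "metadata": {
    "presignTTL": "15m"
  }
}
```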

View File

@ -0,0 +1,231 @@
---
type: docs
title: "SFTP binding spec"
linkTitle: "SFTP"
description: "Detailed documentation on the Secure File Transfer Protocol (SFTP) binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/sftp/"
---
## Component format
To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{< ref bindings-overview.md >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.sftp
version: v1
metadata:
- name: rootPath
value: "<string>"
- name: address
value: "<string>"
- name: username
value: "<string>"
- name: password
value: "*****************"
- name: privateKey
value: "*****************"
- name: privateKeyPassphrase
value: "*****************"
- name: hostPublicKey
value: "*****************"
- name: knownHostsFile
value: "<string>"
- name: insecureIgnoreHostKey
value: "<bool>"
```
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| `rootPath` | Y | Output | Root path for default working directory | `"/path"` |
| `address` | Y | Output | Address of SFTP server | `"localhost:22"` |
| `username` | Y | Output | Username for authentication | `"username"` |
| `password` | N | Output | Password for username/password authentication | `"password"` |
| `privateKey` | N | Output | Private key for public key authentication | <pre>"\|-<br>-----BEGIN OPENSSH PRIVATE KEY-----<br>*****************<br>-----END OPENSSH PRIVATE KEY-----"</pre> |
| `privateKeyPassphrase` | N | Output | Private key passphrase for public key authentication | `"passphrase"` |
| `hostPublicKey` | N | Output | Host public key for host validation | `"ecdsa-sha2-nistp256 *** root@openssh-server"` |
| `knownHostsFile` | N | Output | Known hosts file for host validation | `"/path/file"` |
| `insecureIgnoreHostKey` | N | Output | Allows skipping host validation. Defaults to `"false"` | `"true"`, `"false"` |
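As a hedged sketch, public key authentication can also reference the private key from a secret store instead of embedding it in the component; the secret store and secret names below are placeholders.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-sftp-binding
spec:
  type: bindings.sftp
  version: v1
  metadata:
  - name: rootPath
    value: "/upload"
  - name: address
    value: "sftp.example.com:22"
  - name: username
    value: "transfer-user"
  # Private key and passphrase pulled from a secret store rather than plain strings.
  - name: privateKey
    secretKeyRef:
      name: sftp-credentials
      key: private-key
  - name: privateKeyPassphrase
    secretKeyRef:
      name: sftp-credentials
      key: passphrase
  - name: knownHostsFile
    value: "/etc/ssh/known_hosts"
auth:
  secretStore: my-secret-store
```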
## Binding support
This component supports **output binding** with the following operations:
- `create` : [Create file](#create-file)
- `get` : [Get file](#get-file)
- `list` : [List files](#list-files)
- `delete` : [Delete file](#delete-file)
### Create file
To perform a create file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
```json
{
"operation": "create",
"data": "<YOUR_BASE_64_CONTENT>",
"metadata": {
"fileName": "<filename>",
}
}
```
#### Example
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
The response body contains the following JSON:
```json
{
"fileName": "<filename>"
}
```
### Get file
To perform a get file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
```json
{
"operation": "get",
"metadata": {
"fileName": "<filename>"
}
}
```
#### Example
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
The response body contains the value stored in the file.
### List files
To perform a list files operation, invoke the SFTP binding with a `POST` method and the following JSON body:
```json
{
"operation": "list"
}
```
If you only want to list the files beneath a particular directory below the `rootPath`, specify the relative directory name as the `fileName` in the metadata.
```json
{
"operation": "list",
"metadata": {
"fileName": "my/cool/directory"
}
}
```
#### Example
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
The response is a JSON array of file names.
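For example, a successful response could look like the following; the file names are placeholders.

```json
[
  "file1.txt",
  "file2.txt"
]
```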
### Delete file
To perform a delete file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
```json
{
"operation": "delete",
"metadata": {
"fileName": "myfile"
}
}
```
#### Example
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
An HTTP 204 (No Content) and empty body is returned if successful.
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -79,11 +79,28 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` |
| `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` |
### Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
The user specified in the connection string must be an already existing user in the DB, and an AWS IAM enabled user granted the `rds_iam` database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.
| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
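For illustration, a hedged sketch of a PostgreSQL configuration store component using AWS IAM; the component name, connection string, and table are placeholders, and only fields documented on this page are used.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-postgres-config
spec:
  type: configuration.postgresql
  version: v1
  metadata:
  - name: useAWSIAM
    value: "true"
  # Connection string without a password; the IAM token is generated and rotated at runtime.
  - name: connectionString
    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
  - name: table
    value: "configtable"
```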
### Other metadata options
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `table` | Y | Table name for configuration information, must be lowercased. | `configtable`
| `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
| `maxConns` | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"`
| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
| `queryExecMode` | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"`

View File

@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisPassword | N | Output | The Redis password | `"password"` |
| redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | `"username"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
| clientCert | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` |
| clientKey | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` |
| failover | N | Output | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | Output | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redisType | N | Output | The type of Redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"`

View File

@ -0,0 +1,12 @@
---
type: docs
title: "Conversation component specs"
linkTitle: "Conversation"
weight: 9000
description: The supported conversation components that interface with Dapr
no_list: true
---
{{< partial "components/description.html" >}}
{{< partial "components/conversation.html" >}}

View File

@ -0,0 +1,42 @@
---
type: docs
title: "Anthropic"
linkTitle: "Anthropic"
description: Detailed information on the Anthropic conversation component
---
## Component format
A Dapr `conversation.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: anthropic
spec:
type: conversation.anthropic
metadata:
- name: key
value: "mykey"
- name: model
value: claude-3-5-sonnet-20240620
- name: cacheTTL
value: 10m
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for Anthropic. | `"mykey"` |
| `model` | N | The Anthropic LLM to use. Defaults to `claude-3-5-sonnet-20240620` | `claude-3-5-sonnet-20240620` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -0,0 +1,42 @@
---
type: docs
title: "AWS Bedrock"
linkTitle: "AWS Bedrock"
description: Detailed information on the AWS Bedrock conversation component
---
## Component format
A Dapr `conversation.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: awsbedrock
spec:
type: conversation.aws.bedrock
metadata:
- name: endpoint
value: "http://localhost:4566"
- name: model
value: amazon.titan-text-express-v1
- name: cacheTTL
value: 10m
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `endpoint` | N | AWS endpoint for the component to use and connect to emulators. Not recommended for production AWS use. | `http://localhost:4566` |
| `model` | N | The LLM to use. Defaults to Bedrock's default provider model from Amazon. | `amazon.titan-text-express-v1` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -0,0 +1,42 @@
---
type: docs
title: "Huggingface"
linkTitle: "Huggingface"
description: Detailed information on the Huggingface conversation component
---
## Component format
A Dapr `conversation.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: huggingface
spec:
type: conversation.huggingface
metadata:
- name: key
value: mykey
- name: model
value: meta-llama/Meta-Llama-3-8B
- name: cacheTTL
value: 10m
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for Huggingface. | `mykey` |
| `model` | N | The Huggingface LLM to use. Defaults to `meta-llama/Meta-Llama-3-8B`. | `meta-llama/Meta-Llama-3-8B` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -0,0 +1,42 @@
---
type: docs
title: "Mistral"
linkTitle: "Mistral"
description: Detailed information on the Mistral conversation component
---
## Component format
A Dapr `conversation.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mistral
spec:
type: conversation.mistral
metadata:
- name: key
value: mykey
- name: model
value: open-mistral-7b
- name: cacheTTL
value: 10m
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for Mistral. | `mykey` |
| `model` | N | The Mistral LLM to use. Defaults to `open-mistral-7b`. | `open-mistral-7b` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -0,0 +1,42 @@
---
type: docs
title: "OpenAI"
linkTitle: "OpenAI"
description: Detailed information on the OpenAI conversation component
---
## Component format
A Dapr `conversation.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: openai
spec:
type: conversation.openai
metadata:
- name: key
value: mykey
- name: model
value: gpt-4-turbo
- name: cacheTTL
value: 10m
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for OpenAI. | `mykey` |
| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@ -11,6 +11,11 @@ no_list: true
The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. [Learn how to set up different brokers for Dapr publish and subscribe.]({{< ref setup-pubsub.md >}})
{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
Each pub/sub component has its own built-in retry behaviors. Before explicitly applying a [Dapr resiliency policy]({{< ref "policies.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
{{% /alert %}}
{{< partial "components/description.html" >}}
{{< partial "components/pubsub.html" >}}

View File

@ -53,6 +53,12 @@ spec:
value: 2.0.0
- name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
value: "true"
- name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
value: 1
- name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
value: 2097152
- name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
value: 512
- name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
value: http://localhost:8081
- name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
@ -63,6 +69,8 @@ spec:
value: true
- name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
value: 5m
- name: escapeHeaders # Optional.
value: false
```
@ -96,12 +104,12 @@ spec:
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` |
| awsRegion | N | The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
| awsAccessKey | N | AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"`
| awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
| awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionName' instead. Represents the session name for assuming a role. | `"DaprDefaultSession"`
| schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` |
| schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` |
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` |
@ -109,9 +117,12 @@ spec:
| schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | `5m` |
| clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to `9m`. | `"4m"` |
| clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | `"4m"` |
| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is `1`, as `0` causes the consumer to spin when no messages are available. Equivalent to the JVM's `fetch.min.bytes`. | `"2"` |
| consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` |
| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to `256`. | `"512"` |
| heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` |
| sessionTimeout | N | The timeout used to detect client failures when using Kafka's group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` |
| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the tls information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component.
@ -321,7 +332,7 @@ spec:
Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses AWS SDK to generate auth tokens to authenticate.
{{% alert title="Note" color="primary" %}}
The only required metadata field is `awsRegion`. If no `awsAccessKey` and `awsSecretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
The only required metadata field is `region`. If no `accessKey` and `secretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
{{% /alert %}}
```yaml
@ -341,18 +352,18 @@ spec:
value: "my-dapr-app-id"
- name: authType # Required.
value: "awsiam"
- name: awsRegion # Required.
- name: region # Required.
value: "us-west-1"
- name: awsAccessKey # Optional.
- name: accessKey # Optional.
value: <AWS_ACCESS_KEY>
- name: awsSecretKey # Optional.
- name: secretKey # Optional.
value: <AWS_SECRET_KEY>
- name: awsSessionToken # Optional.
- name: sessionToken # Optional.
value: <AWS_SESSION_KEY>
- name: awsIamRoleArn # Optional.
- name: assumeRoleArn # Optional.
value: "arn:aws:iam::123456789:role/mskRole"
- name: awsStsSessionName # Optional.
value: "MSKSASLDefaultSession"
- name: sessionName # Optional.
value: "DaprDefaultSession"
```
### Communication using TLS
@ -457,7 +468,7 @@ Apache Kafka supports the following bulk metadata options:
When invoking the Kafka pub/sub, it's possible to provide an optional partition key by using the `metadata` query param in the request url.
The param name is `partitionKey`.
The param name can be either `partitionKey` or `__key`.
Example:
@ -473,7 +484,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti
### Message headers
All other metadata key/value pairs (that are not `partitionKey`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.
All other metadata key/value pairs (that are not `partitionKey` or `__key`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.
```shell
curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1 \
@ -484,6 +495,85 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla
}
}'
```
### Kafka Pubsub special message headers received on consumer side
When consuming messages, special message metadata is automatically passed as headers. These are:
- `__key`: the message key if available
- `__topic`: the topic for the message
- `__partition`: the partition number for the message
- `__offset`: the offset of the message in the partition
- `__timestamp`: the timestamp for the message
You can access them within the consumer endpoint as follows:
{{< tabs "Python (FastAPI)" >}}
{{% codetab %}}
```python
from typing import Annotated

from fastapi import APIRouter, Body, FastAPI, Header, Response, status

app = FastAPI()
router = APIRouter()


@router.get('/dapr/subscribe')
def subscribe():
    # Programmatic subscription: route messages from 'my-topic' to /my_topic_subscriber.
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'my-topic',
                      'route': 'my_topic_subscriber',
                      }]
    return subscriptions


@router.post('/my_topic_subscriber')
def my_topic_subscriber(
        key: Annotated[str, Header(alias="__key")],
        offset: Annotated[int, Header(alias="__offset")],
        event_data=Body()):
    # The Kafka-specific metadata is delivered as the __key, __offset, ... headers.
    print(f"key={key} - offset={offset} - data={event_data}", flush=True)
    return Response(status_code=status.HTTP_200_OK)


app.include_router(router)
```
{{% /codetab %}}
{{< /tabs >}}
## Receiving message headers with special characters
The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors.
HTTP header values must follow specifications, making some characters not allowed. [Learn more about the protocols](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2).
In this case, you can enable `escapeHeaders` configuration setting, which uses URL escaping to encode header values on the consumer side.
{{% alert title="Note" color="primary" %}}
When using this setting, the received message headers are URL escaped, and you need to URL-decode them to get the original values.
{{% /alert %}}
Set `escapeHeaders` to `true` to URL escape.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: kafka-pubsub-escape-headers
spec:
type: pubsub.kafka
version: v1
metadata:
- name: brokers # Required. Kafka broker connection setting
value: "dapr-kafka.myapp.svc.cluster.local:9092"
- name: consumerGroup # Optional. Used for input bindings.
value: "group1"
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
value: "none"
- name: escapeHeaders
value: "true"
```
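On the consumer side, the original value can be recovered with standard URL unescaping. A minimal Python sketch (the header value below is a placeholder):

```python
from urllib.parse import unquote

# With escapeHeaders enabled, header values arrive URL-escaped.
escaped = "my%20correlation%20id"
original = unquote(escaped)
print(original)  # prints: my correlation id
```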
## Avro Schema Registry serialization/deserialization
You can configure pub/sub to publish or consume data encoded using [Avro binary serialization](https://avro.apache.org/docs/), leveraging an [Apache Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/) (for example, [Confluent Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/), [Apicurio](https://www.apicur.io/registry/)).
@ -597,6 +687,7 @@ To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](ht
{{< /tabs >}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) for instructions on configuring pub/sub components

View File

@ -68,7 +68,8 @@ spec:
# value: 5
# - name: concurrencyMode # Optional
# value: "single"
# - name: concurrencyLimit # Optional
# value: "0"
```
@ -98,6 +99,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec
| disableDeleteOnRetryLimit | N | When set to true, after retrying and failing `messageRetryLimit` times to process a message, reset the message visibility timeout so that other consumers can try processing, instead of deleting the message from SQS (the default behavior). Default: `"false"` | `"true"`, `"false"`
| assetsManagementTimeoutSeconds | N | Amount of time in seconds, for an AWS asset management operation, before it times out and is cancelled. Asset management operations are any operations performed on STS, SNS and SQS, except message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: `5` | `0.5`, `10`
| concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially (“single” message at a time), or concurrently (in “parallel”). Default: `"parallel"` | `"single"`, `"parallel"`
| concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when concurrencyMode is set to `"single"`. To avoid limiting the number of concurrent workers, set this to `0`. Default: `0` | `100`
### Additional info

View File

@ -33,6 +33,8 @@ spec:
value: "channel1"
- name: enableEntityManagement
value: "false"
- name: enableInOrderMessageDelivery
value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
@ -65,11 +67,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutally exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutally exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
| `enableInOrderMessageDelivery` | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `storageAccountName` | Y | Storage account name to use for the checkpoint store. |`"myeventhubstorage"`
| `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`
| `storageContainerName` | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
| `resourceGroupName` | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`

View File

@ -83,8 +83,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
| `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Must be 300s or greater. Default set by server. | `10`
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600`
| `maxDeliveryCount` | N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | `10`
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`

View File

@ -38,6 +38,8 @@ spec:
value: "true"
- name: disableBatching
value: "false"
- name: receiverQueueSize
value: "1000"
- name: <topic-name>.jsonschema # sets a json schema validation for the configured topic
value: |
{
@ -78,6 +80,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec
| namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: `"default"` | `"default"`
| persistent | N | Pulsar supports two kinds of topics: [persistent](https://pulsar.apache.org/docs/en/concepts-architecture-overview#persistent-storage) and [non-persistent](https://pulsar.apache.org/docs/en/concepts-messaging/#non-persistent-topics). With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
| disableBatching | N | Disable batching. When batching is enabled, the default batch delay is 10 ms and the default batch size is 1000 messages. Setting `disableBatching: true` makes the producer send messages individually. Default: `"false"` | `"true"`, `"false"`|
| receiverQueueSize | N | Sets the size of the consumer receiver queue. Controls how many messages can be accumulated by the consumer before it is explicitly called to read messages by Dapr. Default: `"1000"` | `"1000"` |
| batchingMaxPublishDelay | N | batchingMaxPublishDelay set the time period within which the messages sent will be batched,if batch messages are enabled. If set to a non zero value, messages will be queued until this time interval or batchingMaxMessages (see below) or batchingMaxSize (see below). There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that is processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default: `"10ms"` | `"10ms"`, `"10"`|
| batchingMaxMessages | N | batchingMaxMessages set the maximum number of messages permitted in a batch.If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxSize (see below) has been reached or the batch interval has elapsed. Default: `"1000"` | `"1000"`|
| batchingMaxSize | N | batchingMaxSize sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxMessages (see above) has been reached or the batch interval has elapsed. Default: `"128KB"` | `"131072"`|
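
As a sketch of how the batching and receiver-queue options interact, the following manifest enables batching with explicit limits. The component name, the `pubsub.pulsar` type, and the `host` value are assumptions for the example; the batching and `receiverQueueSize` fields come from the table above.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pulsar-pubsub
spec:
  type: pubsub.pulsar
  version: v1
  metadata:
  # Broker address (hypothetical host)
  - name: host
    value: "localhost:6650"
  # Keep batching enabled; a batch is flushed after 100ms,
  # 1000 messages, or 128KB, whichever limit is hit first
  - name: disableBatching
    value: "false"
  - name: batchingMaxPublishDelay
    value: "100ms"
  - name: batchingMaxMessages
    value: "1000"
  - name: batchingMaxSize
    value: "131072"
  # Let the consumer buffer up to 2000 messages before Dapr reads them
  - name: receiverQueueSize
    value: "2000"
```

Setting `disableBatching: true` instead makes the three batching limits irrelevant, since each message is then sent individually.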

View File

@ -45,7 +45,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisUsername | N | Username for the Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. | `""`, `"default"`
| consumerID | N | The consumer group ID. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all of the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` |
| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` |
| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` |
| redeliverInterval | N | The interval between checks for pending messages to redeliver. Can be either a Go duration string (using units such as "ms", "s", "m") or a number of milliseconds. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"`
| processingTimeout | N | The amount of time a message must be pending before redelivery is attempted. Can be either a Go duration string (using units such as "ms", "s", "m") or a number of milliseconds. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"`
| queueDepth | N | The size of the message queue for processing. Defaults to `"100"`. | `"1000"`
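
To show how the TLS and redelivery settings from the table are typically combined, here is a minimal sketch. The component name, the `pubsub.redis` type, the `redisHost` value, and the secret references are assumptions for the example; `enableTLS`, `clientCert`, `clientKey`, `consumerID`, and the redelivery fields come from the table above.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: redis-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  # Hypothetical Redis endpoint
  - name: redisHost
    value: "my-redis:6379"
  # Consumer group ID using a templated metadata value
  - name: consumerID
    value: "{podName}"
  # TLS with a client-side certificate; cert and key come from a secret store
  - name: enableTLS
    value: "true"
  - name: clientCert
    secretKeyRef:
      name: redis-tls
      key: clientCert
  - name: clientKey
    secretKeyRef:
      name: redis-tls
      key: clientKey
  # Check every 30s for messages pending longer than 60s and redeliver them
  - name: redeliverInterval
    value: "30s"
  - name: processingTimeout
    value: "60s"
  # Buffer up to 1000 messages for processing
  - name: queueDepth
    value: "1000"
```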

View File

@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` |
| `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` |
### Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL-type components.
The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the `rds_iam` database role.
Authentication is based on the AWS authentication configuration file or on the provided AccessKey/SecretKey.
The AWS authentication token is dynamically rotated before its expiration time with AWS. An example component manifest is shown after the metadata table below.
| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | N | The AWS Region in which the AWS Relational Database Service is deployed. | `"us-east-1"` |
| `awsAccessKey` | N | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
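
As a sketch under stated assumptions, the following manifest wires the AWS IAM fields from the table into a PostgreSQL state store. The component name and the `state.postgresql` type are assumptions for the example; the same metadata applies to the other PostgreSQL-type components.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: postgres-state
spec:
  type: state.postgresql
  version: v1
  metadata:
  # Enable IAM token retrieval instead of a password
  - name: useAWSIAM
    value: "true"
  # The user must already exist in the database and hold the rds_iam role;
  # note that the connection string contains no password
  - name: connectionString
    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
  # Region of the RDS instance
  - name: awsRegion
    value: "us-east-1"
```

When static credentials are used instead of an ambient AWS configuration, `awsAccessKey`, `awsSecretKey`, and optionally `awsSessionToken` can be added; the generated IAM token is rotated before it expires.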
### Other metadata options
| Field | Required | Details | Example |

Some files were not shown because too many files have changed in this diff.