diff --git a/.github/workflows/link_validation.yaml b/.github/workflows/link_validation.yaml index 4b7840e3c..350f8407a 100644 --- a/.github/workflows/link_validation.yaml +++ b/.github/workflows/link_validation.yaml @@ -13,7 +13,7 @@ jobs: validate: runs-on: ubuntu-latest env: - PYTHON_VER: 3.7 + PYTHON_VER: 3.12 steps: - uses: actions/checkout@v2 - name: Check Microsoft URLs do not pin localized versions @@ -27,7 +27,7 @@ jobs: exit 1 fi - name: Set up Python ${{ env.PYTHON_VER }} - uses: actions/setup-python@v2 + uses: actions/setup-python@v5 with: python-version: ${{ env.PYTHON_VER }} - name: Install dependencies diff --git a/daprdocs/content/en/_index.md b/daprdocs/content/en/_index.md index 642107a7b..0e1251403 100644 --- a/daprdocs/content/en/_index.md +++ b/daprdocs/content/en/_index.md @@ -87,6 +87,13 @@ you tackle the challenges that come with building microservices and keeps your c +
+<div class="card">
+  <div class="card-body">
+    <h5 class="card-title"><b>Roadmap</b></h5>
+    <p class="card-text">
+      Learn about Dapr's roadmap and change process.
+    </p>
+  </div>
+</div>
+
diff --git a/daprdocs/content/en/concepts/building-blocks-concept.md b/daprdocs/content/en/concepts/building-blocks-concept.md index ca3d7b955..2134ee550 100644 --- a/daprdocs/content/en/concepts/building-blocks-concept.md +++ b/daprdocs/content/en/concepts/building-blocks-concept.md @@ -22,7 +22,7 @@ Dapr provides the following building blocks: |----------------|----------|-------------| | [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling. | [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications. -| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-beta1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability. +| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability. | [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence. | [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service. | [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use. @@ -31,3 +31,4 @@ Dapr provides the following building blocks: | [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and provide consistency guarantees. | [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | `/v1.0-alpha1/crypto` | The Cryptography API enables you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your application. 
| [**Jobs**]({{< ref "jobs-overview.md" >}}) | `/v1.0-alpha1/jobs` | The Jobs API enables you to schedule and orchestrate jobs. Example scenarios include: +| [**Conversation**]({{< ref "conversation-overview.md" >}}) | `/v1.0-alpha1/conversation` | The Conversation API enables you to supply prompts to converse with different large language models (LLMs) and includes features such as prompt caching and personally identifiable information (PII) obfuscation. \ No newline at end of file diff --git a/daprdocs/content/en/concepts/components-concept.md b/daprdocs/content/en/concepts/components-concept.md index d7a4f92ab..27c647969 100644 --- a/daprdocs/content/en/concepts/components-concept.md +++ b/daprdocs/content/en/concepts/components-concept.md @@ -122,11 +122,18 @@ Lock components are used as a distributed lock to provide mutually exclusive acc ### Cryptography -[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform crypographic operations, including encrypting and decrypting messages, without exposing keys to your application. +[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform cryptographic operations, including encrypting and decrypting messages, without exposing keys to your application. - [List of supported cryptography components]({{< ref supported-cryptography >}}) - [Cryptography implementations](https://github.com/dapr/components-contrib/tree/master/crypto) +### Conversation + +Dapr provides developers a way to abstract interactions with large language models (LLMs) with built-in security and reliability features. Use [conversation]({{< ref conversation-overview.md >}}) components to send prompts to different LLMs, along with the conversation context. + +- [List of supported conversation components]({{< ref supported-conversation >}}) +- [Conversation implementations](https://github.com/dapr/components-contrib/tree/main/conversation) + ### Middleware Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into the HTTP request processing pipeline. Middleware can perform additional actions on an HTTP request (such as authentication, encryption, and message transformation) before the request is routed to the user code, or the response is returned to the client. The middleware components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block. @@ -136,4 +143,4 @@ Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into {{% alert title="Note" color="primary" %}} Since pluggable components are not required to be written in Go, they follow a different implementation process than built-in Dapr components. For more information on developing built-in components, read [developing new components](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md). -{{% /alert %}} +{{% /alert %}} \ No newline at end of file diff --git a/daprdocs/content/en/concepts/configuration-concept.md b/daprdocs/content/en/concepts/configuration-concept.md index f9b89ad4f..a4a859395 100644 --- a/daprdocs/content/en/concepts/configuration-concept.md +++ b/daprdocs/content/en/concepts/configuration-concept.md @@ -6,9 +6,13 @@ weight: 400 description: "Change the behavior of Dapr application sidecars or globally on Dapr control plane system services" --- -Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications, or the global behavior of the Dapr control plane system services. 
For example, you can set an ACL policy on the application sidecar configuration which indicates which methods can be called from another application, or on the Dapr control plane configuration you can change the certificate renewal period for all certificates that are deployed to application sidecar instances.
+With Dapr configurations, you use settings and policies to change:
+- The behavior of individual Dapr applications
+- The global behavior of the Dapr control plane system services
 
-Configurations are defined and deployed as a YAML file. An application configuration example is shown below, which demonstrates an example of setting a tracing endpoint for where to send the metrics information, capturing all the sample traces.
+For example, set an access control (ACL) policy on the application sidecar configuration to indicate which methods can be called from another application. If you set a policy on the Dapr control plane configuration, you can change the certificate renewal period for all certificates that are deployed to application sidecar instances.
+
+Configurations are defined and deployed as a YAML file. In the following application configuration example, a tracing endpoint is set for where to send the metrics information, capturing all the sample traces.
 
 ```yaml
 apiVersion: dapr.io/v1alpha1
@@ -23,9 +27,11 @@ spec:
       endpointAddress: "http://localhost:9411/api/v2/spans"
 ```
 
-This configuration configures tracing for metrics recording. It can be loaded in local self-hosted mode by editing the default configuration file called `config.yaml` file in your `.dapr` directory, or by applying it to your Kubernetes cluster with kubectl/helm.
+The above YAML configures tracing for metrics recording. You can load it in local self-hosted mode by either:
+- Editing the default configuration file, `config.yaml`, in your `.dapr` directory, or
+- Applying it to your Kubernetes cluster with `kubectl`/Helm.
 
-Here is an example of the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace.
+The following example shows the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace.
 
 ```yaml
 apiVersion: dapr.io/v1alpha1
@@ -40,8 +46,14 @@ spec:
   allowedClockSkew: "15m"
 ```
 
-Visit [overview of Dapr configuration options]({{< ref "configuration-overview.md" >}}) for a list of the configuration options.
+By default, there is a single configuration file called `daprsystem` installed with the Dapr control plane system services. This configuration file applies global control plane settings and is set up when Dapr is deployed to Kubernetes.
 
-{{% alert title="Note" color="primary" %}}
-Dapr application and control plane configurations should not be confused with the configuration building block API that enables applications to retrieve key/value data from configuration store components. Read the [Configuration building block]({{< ref configuration-api-overview >}}) for more information.
+[Learn more about configuration options.]({{< ref "configuration-overview.md" >}})
+
+{{% alert title="Important" color="warning" %}}
+Dapr application and control plane configurations should not be confused with the [configuration building block API]({{< ref configuration-api-overview >}}), which enables applications to retrieve key/value data from configuration store components.
 {{% /alert %}}
+
+## Next steps
+
+{{< button text="Learn more about configuration" page="configuration-overview" >}}
\ No newline at end of file
diff --git a/daprdocs/content/en/concepts/dapr-services/placement.md b/daprdocs/content/en/concepts/dapr-services/placement.md
index d94f9a843..c6d739957 100644
--- a/daprdocs/content/en/concepts/dapr-services/placement.md
+++ b/daprdocs/content/en/concepts/dapr-services/placement.md
@@ -13,7 +13,9 @@ The Placement service Docker container is started automatically as part of [`dapr init`]
 
 ## Kubernetes mode
 
-The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
+The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Placement in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
+
+For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
 
 ## Placement tables
diff --git a/daprdocs/content/en/concepts/dapr-services/scheduler.md b/daprdocs/content/en/concepts/dapr-services/scheduler.md
index 29060fe93..2fba4ba71 100644
--- a/daprdocs/content/en/concepts/dapr-services/scheduler.md
+++ b/daprdocs/content/en/concepts/dapr-services/scheduler.md
@@ -11,13 +11,21 @@ The diagram below shows how the Scheduler service is used via the jobs API when
 
 Diagram showing the Scheduler control plane service and the jobs API
 
+## Actor reminders
+
+Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
+
+When you deploy Dapr v1.15, any _existing_ actor reminders are migrated from the Placement service to the Scheduler service as a one-time operation for each actor type. You can prevent this migration by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
+
 ## Self-hosted mode
 
 The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
 
 ## Kubernetes mode
 
-The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
+The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
+
+For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
## Related links diff --git a/daprdocs/content/en/concepts/overview.md b/daprdocs/content/en/concepts/overview.md index 106a772e5..7613042ff 100644 --- a/daprdocs/content/en/concepts/overview.md +++ b/daprdocs/content/en/concepts/overview.md @@ -55,6 +55,7 @@ Each of these building block APIs is independent, meaning that you can use any n | [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | The distributed lock API enables your application to acquire a lock for any resource that gives it exclusive access until either the lock is released by the application, or a lease timeout occurs. | [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | The cryptography API provides an abstraction layer on top of security infrastructure such as key vaults. It contains APIs that allow you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your applications. | [**Jobs**]({{< ref "jobs-overview.md" >}}) | The jobs API enables you to schedule jobs at specific times or intervals. +| [**Conversation**]({{< ref "conversation-overview.md" >}}) | The conversation API enables you to abstract the complexities of interacting with large language models (LLMs) and includes features such as prompt caching and personally identifiable information (PII) obfuscation. Using [conversation components]({{< ref supported-conversation >}}), you can supply prompts to converse with different LLMs. ### Cross-cutting APIs @@ -108,7 +109,7 @@ Deploying and running a Dapr-enabled application into your Kubernetes cluster is ### Clusters of physical or virtual machines -The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses [Hashicorp Consul service]({{< ref setup-nr-consul >}}), also running in HA mode. +The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses multicast DNS by default, but can also optionally support [Hashicorp Consul service]({{< ref setup-nr-consul >}}). Architecture diagram of Dapr control plane and Consul deployed to VMs in high availability mode diff --git a/daprdocs/content/en/contributing/contributing-overview.md b/daprdocs/content/en/contributing/contributing-overview.md index 4e84c5071..b78669994 100644 --- a/daprdocs/content/en/contributing/contributing-overview.md +++ b/daprdocs/content/en/contributing/contributing-overview.md @@ -18,7 +18,7 @@ See the [Dapr community repository](https://github.com/dapr/community) for more 1. **Docs**: This [repository](https://github.com/dapr/docs) contains the documentation for Dapr. You can contribute by updating existing documentation, fixing errors, or adding new content to improve user experience and clarity. Please see the specific guidelines for [docs contributions]({{< ref contributing-docs >}}). -2. 
**Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. Contributions in this repository involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.
+2. **Quickstarts**: The Quickstarts [repository](https://github.com/dapr/quickstarts) provides simple, step-by-step guides to help users get started with Dapr quickly. [Contributions in this repository](https://github.com/dapr/quickstarts/blob/master/CONTRIBUTING.md) involve creating new quickstarts, improving existing ones, or ensuring they stay up-to-date with the latest features.
 
 3. **Runtime**: The Dapr runtime [repository](https://github.com/dapr/dapr) houses the core runtime components. Here, you can contribute by fixing bugs, optimizing performance, implementing new features, or enhancing existing ones.
diff --git a/daprdocs/content/en/contributing/daprbot.md b/daprdocs/content/en/contributing/daprbot.md
index 2e35fd491..b4b53d0bf 100644
--- a/daprdocs/content/en/contributing/daprbot.md
+++ b/daprdocs/content/en/contributing/daprbot.md
@@ -2,7 +2,7 @@
 type: docs
 title: "Dapr bot reference"
 linkTitle: "Dapr bot"
-weight: 15
+weight: 70
 description: "List of Dapr bot capabilities."
 ---
diff --git a/daprdocs/content/en/contributing/docs-contrib/contributing-docs.md b/daprdocs/content/en/contributing/docs-contrib/contributing-docs.md
index 2ed1f81bd..a76b02344 100644
--- a/daprdocs/content/en/contributing/docs-contrib/contributing-docs.md
+++ b/daprdocs/content/en/contributing/docs-contrib/contributing-docs.md
@@ -41,15 +41,18 @@ Style and tone conventions should be followed throughout all Dapr documentation
 
 ## Diagrams and images
 
-Diagrams and images are invaluable visual aids for documentation pages. Diagrams are kept in a [Dapr Diagrams Deck](https://github.com/dapr/docs/tree/v1.11/daprdocs/static/presentations), which includes guidance on style and icons.
+Diagrams and images are invaluable visual aids for documentation pages. Use the diagram style and icons in the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations).
 
-As you create diagrams for your documentation:
+The process for creating diagrams for your documentation:
 
-- Save them as high-res PNG files into the [images folder](https://github.com/dapr/docs/tree/v1.11/daprdocs/static/images).
-- Name your PNG files using the convention of a concept or building block so that they are grouped.
+1. Download the [Dapr Diagrams template deck](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/presentations) to use the icons and colors.
+1. Add a new slide and create your diagram.
+1. Screen capture the diagram as a high-res PNG file and save it in the [images folder](https://github.com/dapr/docs/tree/v1.14/daprdocs/static/images).
+1. Name your PNG files using the convention of a concept or building block so that they are grouped.
   - For example: `service-invocation-overview.png`.
   - For more information on calling out images using shortcode, see the [Images guidance](#images) section below.
-- Add the diagram to the correct section in the `Dapr-Diagrams.pptx` deck so that they can be amended and updated during routine refresh.
+1. Add the diagram to the appropriate section in your documentation using the HTML `<img>` tag.
+1. In your PR, include a comment with the diagram slide (not just the screen capture) so that maintainers can review it and add it to the diagram deck.
 
## Contributing a new docs page
diff --git a/daprdocs/content/en/contributing/roadmap.md b/daprdocs/content/en/contributing/roadmap.md
index d3a790935..6c1093ecb 100644
--- a/daprdocs/content/en/contributing/roadmap.md
+++ b/daprdocs/content/en/contributing/roadmap.md
@@ -2,47 +2,9 @@
 type: docs
 title: "Dapr Roadmap"
 linkTitle: "Roadmap"
-description: "The Dapr Roadmap is a tool to help with visibility into investments across the Dapr project"
+description: "The Dapr Roadmap gives the community visibility into the different priorities of the project"
 weight: 30
 no_list: true
 ---
-
-Dapr encourages the community to help with prioritization. A GitHub project board is available to view and provide feedback on proposed issues and track them across development.
-
-[Screenshot of the Dapr Roadmap board](https://aka.ms/dapr/roadmap)
-
-{{< button text="View the backlog" link="https://aka.ms/dapr/roadmap" color="primary" >}}
-
- -Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see Dapr support. This will help the Dapr maintainers understand which features will provide the most value. - -Contributions from the community is also welcomed. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal. - -{{% alert title="Note" color="primary" %}} -The Dapr roadmap includes issues only from the v1.2 release and onwards. Issues closed and released prior to v1.2 are not included. -{{% /alert %}} - -## Stages - -The Dapr Roadmap progresses through the following stages: - -{{< cardpane >}} -{{< card title="**[📄 Backlog](https://github.com/orgs/dapr/projects/52#column-14691591)**" >}} - Issues (features) that need 👍 votes from the community to prioritize. Updated by Dapr maintainers. -{{< /card >}} -{{< card title="**[⏳ Planned (Committed)](https://github.com/orgs/dapr/projects/52#column-14561691)**" >}} - Issues with a proposal and/or targeted release milestone. This is where design proposals are discussed and designed. -{{< /card >}} -{{< card title="**[👩‍💻 In Progress (Development)](https://github.com/orgs/dapr/projects/52#column-14561696)**" >}} - Implementation specifics have been agreed upon and the feature is under active development. -{{< /card >}} -{{< /cardpane >}} -{{< cardpane >}} -{{< card title="**[☑ Done](https://github.com/orgs/dapr/projects/52#column-14561700)**" >}} - The feature capability has been completed and is scheduled for an upcoming release. -{{< /card >}} -{{< card title="**[✅ Released](https://github.com/orgs/dapr/projects/52#column-14659973)**" >}} - The feature is released and available for use. -{{< /card >}} -{{< /cardpane >}} +See [this document](https://github.com/dapr/community/blob/master/roadmap.md) to view the Dapr project's roadmap. diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md index bb96b9b23..7bb1bcf0e 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md @@ -104,7 +104,7 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a ### State -Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors. +Transactional state stores can be used to store actor state. Regardless of whether you intend to store any state in your actor, you must specify a value for property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. 
Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors.
 
 ### Actor timers and reminders
diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md
index 7a4cd1ec7..866404563 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md
@@ -107,6 +107,10 @@ Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
 
 ## Actor reminders
 
+{{% alert title="Note" color="primary" %}}
+In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}).
+{{% /alert %}}
+
 Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
 
 You can create a persistent reminder for an actor by calling the HTTP/gRPC request to Dapr as shown below, or via the Dapr SDK.
 
@@ -148,7 +152,9 @@ If an invocation of the method fails, the timer is not removed. Timers are only
 
 ## Reminder data serialization format
 
-Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for reminders data which, depending on throughput and size of the payload, can result in significant performance improvements, giving developers a higher throughput and lower latency. Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
+Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for internal workflow reminders data via both the Placement and Scheduler services. Depending on throughput and size of the payload, this can result in significant performance improvements, giving developers higher throughput and lower latency.
+
+Another benefit is storing smaller data in the actor's underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
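To make the reminder registration described above concrete, here is a minimal Go SDK sketch. The actor type, actor ID, reminder name, and payload are hypothetical; the request shape follows the Go SDK's actor reminder client, so verify the fields against your SDK version.

```go
package main

import (
	"context"
	"log"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Register a persistent reminder for a specific actor instance.
	// It first fires 5 seconds after registration, then every 10 seconds,
	// and survives actor deactivation and failover.
	err = client.RegisterActorReminder(context.Background(), &dapr.RegisterActorReminderRequest{
		ActorType: "OrderActor",  // hypothetical actor type
		ActorID:   "order-1",     // hypothetical actor instance ID
		Name:      "checkBackup", // hypothetical reminder name
		DueTime:   "5s",
		Period:    "10s",
		Data:      []byte(`"reminder payload"`),
	})
	if err != nil {
		log.Fatalf("failed to register reminder: %v", err)
	}
}
```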
{{% alert title="Note" color="primary" %}} Protobuf serialization will become the default format in Dapr 1.14 diff --git a/daprdocs/content/en/developing-applications/building-blocks/conversation/_index.md b/daprdocs/content/en/developing-applications/building-blocks/conversation/_index.md new file mode 100644 index 000000000..efd115759 --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/conversation/_index.md @@ -0,0 +1,7 @@ +--- +type: docs +title: "Conversation" +linkTitle: "Conversation" +weight: 130 +description: "Utilize prompts with Large Language Models (LLMs)" +--- \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md b/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md new file mode 100644 index 000000000..f7621517e --- /dev/null +++ b/daprdocs/content/en/developing-applications/building-blocks/conversation/conversation-overview.md @@ -0,0 +1,43 @@ +--- +type: docs +title: "Conversation overview" +linkTitle: "Overview" +weight: 1000 +description: "Overview of the conversation API building block" +--- + +{{% alert title="Alpha" color="primary" %}} +The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}). +{{% /alert %}} + + +Using the Dapr conversation API, you can reduce the complexity of interacting with Large Language Models (LLMs) and enable critical performance and security functionality with features like prompt caching and personally identifiable information (PII) data obfuscation. + +## Features + +### Prompt caching + +To significantly reduce latency and cost, frequent prompts are stored in a cache to be reused, instead of reprocessing the information for every new request. Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls. + +### Personally identifiable information (PII) obfuscation + +The PII obfuscation feature identifies and removes any PII from a conversation response. This feature protects your privacy by eliminating sensitive details like names, addresses, phone numbers, or other details that could be used to identify an individual. + +## Try out conversation + +### Quickstarts and tutorials + +Want to put the Dapr conversation API to the test? Walk through the following quickstart and tutorials to see it in action: + +| Quickstart/tutorial | Description | +| ------------------- | ----------- | +| [Conversation quickstart](todo) | . | + +### Start using the conversation API directly in your app + +Want to skip the quickstarts? Not a problem. You can try out the conversation building block directly in your application. After [Dapr is installed]({{< ref "getting-started/_index.md" >}}), you can begin using the conversation API starting with [the how-to guide]({{< ref howto-conversation-layer.md >}}). 
+
+## Next steps
+
+- [How-To: Converse with an LLM using the conversation API]({{< ref howto-conversation-layer.md >}})
+- [Conversation API components]({{< ref supported-conversation >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md b/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md
new file mode 100644
index 000000000..0d35d860b
--- /dev/null
+++ b/daprdocs/content/en/developing-applications/building-blocks/conversation/howto-conversation-layer.md
@@ -0,0 +1,239 @@
+---
+type: docs
+title: "How-To: Converse with an LLM using the conversation API"
+linkTitle: "How-To: Converse"
+weight: 2000
+description: "Learn how to abstract the complexities of interacting with large language models"
+---
+
+{{% alert title="Alpha" color="primary" %}}
+The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
+{{% /alert %}}
+
+Let's get started using the [conversation API]({{< ref conversation-overview.md >}}). In this guide, you'll learn how to:
+
+- Set up one of the available Dapr components (echo) that work with the conversation API.
+- Add the conversation client to your application.
+- Run the connection using `dapr run`.
+
+## Set up the conversation component
+
+Create a new configuration file called `conversation.yaml` and save it to a components or config sub-folder in your application directory.
+
+Select your [preferred conversation component spec]({{< ref supported-conversation >}}) for your `conversation.yaml` file.
+
+For this scenario, we use a simple echo component.
+
+```yml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: echo
+spec:
+  type: conversation.echo
+  version: v1
+```
+
+## Connect the conversation client
+
+The following examples use the Dapr SDK client ([full examples in the SDK repos]({{< ref "#related-links" >}})), which calls the conversation API through the Dapr sidecar's gRPC endpoint. You can also send a POST request to Dapr's sidecar HTTP endpoint instead.
+ +{{< tabs ".NET" "Go" "Rust" >}} + + + +{{% codetab %}} + +```csharp +using Dapr.AI.Conversation; +using Dapr.AI.Conversation.Extensions; + +var builder = WebApplication.CreateBuilder(args); + +builder.Services.AddDaprConversationClient(); + +var app = builder.Build(); + +var conversationClient = app.Services.GetRequiredService(); +var response = await conversationClient.ConverseAsync("conversation", + new List + { + new DaprConversationInput( + "Please write a witty haiku about the Dapr distributed programming framework at dapr.io", + DaprConversationRole.Generic) + }); + +Console.WriteLine("Received the following from the LLM:"); +foreach (var resp in response.Outputs) +{ + Console.WriteLine($"\t{resp.Result}"); +} +``` + +{{% /codetab %}} + + +{{% codetab %}} + +```go +package main + +import ( + "context" + "fmt" + dapr "github.com/dapr/go-sdk/client" + "log" +) + +func main() { + client, err := dapr.NewClient() + if err != nil { + panic(err) + } + + input := dapr.ConversationInput{ + Message: "hello world", + // Role: nil, // Optional + // ScrubPII: nil, // Optional + } + + fmt.Printf("conversation input: %s\n", input.Message) + + var conversationComponent = "echo" + + request := dapr.NewConversationRequest(conversationComponent, []dapr.ConversationInput{input}) + + resp, err := client.ConverseAlpha1(context.Background(), request) + if err != nil { + log.Fatalf("err: %v", err) + } + + fmt.Printf("conversation output: %s\n", resp.Outputs[0].Result) +} +``` + +{{% /codetab %}} + + +{{% codetab %}} + +```rust +use dapr::client::{ConversationInputBuilder, ConversationRequestBuilder}; +use std::thread; +use std::time::Duration; + +type DaprClient = dapr::Client; + +#[tokio::main] +async fn main() -> Result<(), Box> { + // Sleep to allow for the server to become available + thread::sleep(Duration::from_secs(5)); + + // Set the Dapr address + let address = "https://127.0.0.1".to_string(); + + let mut client = DaprClient::connect(address).await?; + + let input = ConversationInputBuilder::new("hello world").build(); + + let conversation_component = "echo"; + + let request = + ConversationRequestBuilder::new(conversation_component, vec![input.clone()]).build(); + + println!("conversation input: {:?}", input.message); + + let response = client.converse_alpha1(request).await?; + + println!("conversation output: {:?}", response.outputs[0].result); + Ok(()) +} +``` + +{{% /codetab %}} + +{{< /tabs >}} + +## Run the conversation connection + +Start the connection using the `dapr run` command. For example, for this scenario, we're running `dapr run` on an application with the app ID `conversation` and pointing to our conversation YAML file in the `./config` directory. 
+
+{{< tabs ".NET" "Go" "Rust" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+```bash
+dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- dotnet run
+```
+
+{{% /codetab %}}
+
+<!-- Go -->
+{{% codetab %}}
+
+```bash
+dapr run --app-id conversation --dapr-grpc-port 50001 --log-level debug --resources-path ./config -- go run ./main.go
+```
+
+**Expected output**
+
+```
+  - '== APP == conversation output: hello world'
+```
+
+{{% /codetab %}}
+
+<!-- Rust -->
+{{% codetab %}}
+
+```bash
+dapr run --app-id=conversation --resources-path ./config --dapr-grpc-port 3500 -- cargo run --example conversation
+```
+
+**Expected output**
+
+```
+  - 'conversation input: hello world'
+  - 'conversation output: hello world'
+```
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
+## Related links
+
+Try out the conversation API using the full examples provided in the supported SDK repos.
+
+{{< tabs ".NET" "Go" "Rust" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+[Dapr conversation example with the .NET SDK](https://github.com/dapr/dotnet-sdk/tree/master/examples/AI/ConversationalAI)
+
+{{% /codetab %}}
+
+<!-- Go -->
+{{% codetab %}}
+
+[Dapr conversation example with the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/conversation)
+
+{{% /codetab %}}
+
+<!-- Rust -->
+{{% codetab %}}
+
+[Dapr conversation example with the Rust SDK](https://github.com/dapr/rust-sdk/tree/main/examples/src/conversation)
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
+## Next steps
+
+- [Conversation API reference guide]({{< ref conversation_api.md >}})
+- [Available conversation components]({{< ref supported-conversation >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md
index 27dc31fc2..0a5dba3b1 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md
@@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedule`,
 
 At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example:
 
+#### HTTP
+
+When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at
+`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
+events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
+triggered. For example:
+
+*Note: The following example is in Go but applies to any programming language.*
+
+```go
+
+func main() {
+    ...
+    http.HandleFunc("/job/", handleJob)
+    http.HandleFunc("/job/<job-name>", specificJob)
+    ...
+}
+
+func specificJob(w http.ResponseWriter, r *http.Request) {
+    // Handle specific triggered job
+}
+
+func handleJob(w http.ResponseWriter, r *http.Request) {
+    // Handle the triggered jobs
+}
+```
+
+#### gRPC
+
+When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
+callback function:
+
+*Note: The following example is in Go but applies to any programming language with gRPC support.*
+
+```go
+import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
+...
+func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) { + // Handle the triggered job +} +``` + +This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that +you register the callback server, which will invoke this function when a job is triggered: + +```go +... +js := &JobService{} +rtv1.RegisterAppCallbackAlphaServer(server, js) +``` + +In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly +through this gRPC method. + +#### SDKs + +For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the +event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this: + +```go +... +if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil { + log.Fatalf("failed to register job event handler: %v", err) +} +``` + +Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with +the triggered job data. Here’s an example of handling the triggered job: + ```go // ... @@ -103,11 +172,9 @@ func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error { if err := json.Unmarshal(job.Data, &jobData); err != nil { // ... } - decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value) - // ... var jobPayload api.DBBackup - if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil { + if err := json.Unmarshal(job.Data, &jobPayload); err != nil { // ... } fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload) @@ -146,4 +213,4 @@ dapr run --app-id=distributed-scheduler \ ## Next steps - [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}}) -- [Jobs API reference]({{< ref jobs_api.md >}}) \ No newline at end of file +- [Jobs API reference]({{< ref jobs_api.md >}}) diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md index 486cfc5d6..63f90c102 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md @@ -59,10 +59,6 @@ The jobs API provides several features to make it easy for you to schedule jobs. The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 scheduler service instance. -### Actor reminders - -Actors have actor reminders, but present some limitations involving scalability using the Placement service implementation. You can make reminders more scalable by using [`SchedulerReminders`]({{< ref support-preview-features.md >}}). This is set in the configuration for your actor application. - ## Try out the jobs API You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-and-handle-triggered-jobs.md >}}). 
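As a sketch of the scheduling side that pairs with the trigger handlers in the how-to above, the snippet below uses the Go SDK's alpha jobs client. The job name, schedule, and repeat count mirror the `prod-db-backup` example; the payload is hypothetical, and the method and field names are assumed from the alpha SDK surface, so verify them against your SDK version.

```go
package main

import (
	"context"
	"encoding/json"
	"log"

	dapr "github.com/dapr/go-sdk/client"
	"google.golang.org/protobuf/types/known/anypb"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Hypothetical payload for the job handler to unmarshal at trigger time.
	payload, err := json.Marshal(map[string]string{"task": "db-backup"})
	if err != nil {
		log.Fatal(err)
	}

	// Schedule "prod-db-backup" to fire every second, ten times in total.
	err = client.ScheduleJobAlpha1(context.Background(), &dapr.Job{
		Name:     "prod-db-backup",
		Schedule: "@every 1s",
		Repeats:  10,
		Data:     &anypb.Any{Value: payload}, // jobs data travels as a protobuf Any
	})
	if err != nil {
		log.Fatalf("failed to schedule job: %v", err)
	}
}
```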
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-bulk.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-bulk.md index 89ca63fe8..5131d9080 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-bulk.md +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-bulk.md @@ -336,14 +336,13 @@ Status | Description `RETRY` | Message to be retried by Dapr `DROP` | Warning is logged and message is dropped -Please refer [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on response. +Refer to [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further insights on response. ### Example -Please refer following code samples for how to use Bulk Subscribe: - -{{< tabs "Java" "JavaScript" ".NET" >}} +The following code examples demonstrate how to use Bulk Subscribe. +{{< tabs "Java" "JavaScript" ".NET" "Python" >}} {{% codetab %}} ```java @@ -471,7 +470,50 @@ public class BulkMessageController : ControllerBase {{% /codetab %}} +{{% codetab %}} +Currently, you can only bulk subscribe in Python using an HTTP client. + +```python +import json +from flask import Flask, request, jsonify + +app = Flask(__name__) + +@app.route('/dapr/subscribe', methods=['GET']) +def subscribe(): + # Define the bulk subscribe configuration + subscriptions = [{ + "pubsubname": "pubsub", + "topic": "TOPIC_A", + "route": "/checkout", + "bulkSubscribe": { + "enabled": True, + "maxMessagesCount": 3, + "maxAwaitDurationMs": 40 + } + }] + print('Dapr pub/sub is subscribed to: ' + json.dumps(subscriptions)) + return jsonify(subscriptions) + + +# Define the endpoint to handle incoming messages +@app.route('/checkout', methods=['POST']) +def checkout(): + messages = request.json + print(messages) + for message in messages: + print(f"Received message: {message}") + return json.dumps({'success': True}), 200, {'ContentType': 'application/json'} + +if __name__ == '__main__': + app.run(port=5000) + +``` + +{{% /codetab %}} + {{< /tabs >}} + ## How components handle publishing and subscribing to bulk messages For event publish/subscribe, two kinds of network transfers are involved. diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md index 436c16295..5c31057ee 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md +++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/subscription-methods.md @@ -37,17 +37,16 @@ metadata: spec: topic: orders routes: - default: /checkout + default: /orders pubsubname: pubsub scopes: - orderprocessing -- checkout ``` Here the subscription called `order`: - Uses the pub/sub component called `pubsub` to subscribes to the topic called `orders`. -- Sets the `route` field to send all topic messages to the `/checkout` endpoint in the app. -- Sets `scopes` field to scope this subscription for access only by apps with IDs `orderprocessing` and `checkout`. +- Sets the `route` field to send all topic messages to the `/orders` endpoint in the app. +- Sets `scopes` field to scope this subscription for access only by apps with ID `orderprocessing`. When running Dapr, set the YAML component file path to point Dapr to the component. 
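To exercise a declarative subscription like the one above end to end, a minimal Go publisher can send events through the same `pubsub` component to the `orders` topic; the payload here is hypothetical.

```go
package main

import (
	"context"
	"log"

	dapr "github.com/dapr/go-sdk/client"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Publish through the "pubsub" component to the "orders" topic declared
	// in the subscription above; Dapr then delivers it to the /orders route.
	if err := client.PublishEvent(context.Background(), "pubsub", "orders", []byte(`{"orderId": 1}`)); err != nil {
		log.Fatalf("failed to publish: %v", err)
	}
}
```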
@@ -113,7 +112,7 @@ In your application code, subscribe to the topic specified in the Dapr pub/sub c ```csharp //Subscribe to a topic -[HttpPost("checkout")] +[HttpPost("orders")] public void getCheckout([FromBody] int orderId) { Console.WriteLine("Subscriber received : " + orderId); @@ -128,7 +127,7 @@ public void getCheckout([FromBody] int orderId) import io.dapr.client.domain.CloudEvent; //Subscribe to a topic -@PostMapping(path = "/checkout") +@PostMapping(path = "/orders") public Mono getCheckout(@RequestBody(required = false) CloudEvent cloudEvent) { return Mono.fromRunnable(() -> { try { @@ -146,7 +145,7 @@ public Mono getCheckout(@RequestBody(required = false) CloudEvent from cloudevents.sdk.event import v1 #Subscribe to a topic -@app.route('/checkout', methods=['POST']) +@app.route('/orders', methods=['POST']) def checkout(event: v1.Event) -> None: data = json.loads(event.Data()) logging.info('Subscriber received: ' + str(data)) @@ -163,7 +162,7 @@ const app = express() app.use(bodyParser.json({ type: 'application/*+json' })); // listen to the declarative route -app.post('/checkout', (req, res) => { +app.post('/orders', (req, res) => { console.log(req.body); res.sendStatus(200); }); @@ -178,7 +177,7 @@ app.post('/checkout', (req, res) => { var sub = &common.Subscription{ PubsubName: "pubsub", Topic: "orders", - Route: "/checkout", + Route: "/orders", } func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) { @@ -191,7 +190,7 @@ func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err er {{< /tabs >}} -The `/checkout` endpoint matches the `route` defined in the subscriptions and this is where Dapr sends all topic messages to. +The `/orders` endpoint matches the `route` defined in the subscriptions and this is where Dapr sends all topic messages to. ### Streaming subscriptions @@ -204,7 +203,112 @@ As messages are sent to the given message handler code, there is no concept of r The example below shows the different ways to stream subscribe to a topic. -{{< tabs Go>}} +{{< tabs Python Go >}} + +{{% codetab %}} + +You can use the `subscribe` method, which returns a `Subscription` object and allows you to pull messages from the stream by calling the `next_message` method. This runs in and may block the main thread while waiting for messages. + +```python +import time + +from dapr.clients import DaprClient +from dapr.clients.grpc.subscription import StreamInactiveError + +counter = 0 + + +def process_message(message): + global counter + counter += 1 + # Process the message here + print(f'Processing message: {message.data()} from {message.topic()}...') + return 'success' + + +def main(): + with DaprClient() as client: + global counter + + subscription = client.subscribe( + pubsub_name='pubsub', topic='orders', dead_letter_topic='orders_dead' + ) + + try: + while counter < 5: + try: + message = subscription.next_message() + + except StreamInactiveError as e: + print('Stream is inactive. 
Retrying...') + time.sleep(1) + continue + if message is None: + print('No message received within timeout period.') + continue + + # Process the message + response_status = process_message(message) + + if response_status == 'success': + subscription.respond_success(message) + elif response_status == 'retry': + subscription.respond_retry(message) + elif response_status == 'drop': + subscription.respond_drop(message) + + finally: + print("Closing subscription...") + subscription.close() + + +if __name__ == '__main__': + main() + +``` + +You can also use the `subscribe_with_handler` method, which accepts a callback function executed for each message received from the stream. This runs in a separate thread, so it doesn't block the main thread. + +```python +import time + +from dapr.clients import DaprClient +from dapr.clients.grpc._response import TopicEventResponse + +counter = 0 + + +def process_message(message): + # Process the message here + global counter + counter += 1 + print(f'Processing message: {message.data()} from {message.topic()}...') + return TopicEventResponse('success') + + +def main(): + with (DaprClient() as client): + # This will start a new thread that will listen for messages + # and process them in the `process_message` function + close_fn = client.subscribe_with_handler( + pubsub_name='pubsub', topic='orders', handler_fn=process_message, + dead_letter_topic='orders_dead' + ) + + while counter < 5: + time.sleep(1) + + print("Closing subscription...") + close_fn() + + +if __name__ == '__main__': + main() +``` + +[Learn more about streaming subscriptions using the Python SDK client.]({{< ref "python-client.md#streaming-message-subscription" >}}) + +{{% /codetab %}} {{% codetab %}} @@ -325,7 +429,7 @@ In the example below, you define the values found in the [declarative YAML subsc ```csharp [Topic("pubsub", "orders")] -[HttpPost("/checkout")] +[HttpPost("/orders")] public async Task>Checkout(Order order, [FromServices] DaprClient daprClient) { // Logic @@ -337,7 +441,7 @@ or ```csharp // Dapr subscription in [Topic] routes orders topic to this route -app.MapPost("/checkout", [Topic("pubsub", "orders")] (Order order) => { +app.MapPost("/orders", [Topic("pubsub", "orders")] (Order order) => { Console.WriteLine("Subscriber received : " + order); return Results.Ok(order); }); @@ -359,7 +463,7 @@ app.UseEndpoints(endpoints => ```java private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper(); -@Topic(name = "checkout", pubsubName = "pubsub") +@Topic(name = "orders", pubsubName = "pubsub") @PostMapping(path = "/orders") public Mono handleMessage(@RequestBody(required = false) CloudEvent cloudEvent) { return Mono.fromRunnable(() -> { @@ -370,6 +474,7 @@ public Mono handleMessage(@RequestBody(required = false) CloudEvent { res.json([ { pubsubname: "pubsub", - topic: "checkout", + topic: "orders", routes: { rules: [ { @@ -480,7 +585,7 @@ func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) { t := []subscription{ { PubsubName: "pubsub", - Topic: "checkout", + Topic: "orders", Routes: routes{ Rules: []rule{ { diff --git a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md index 28c3cb8f1..680b03611 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md +++ 
b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md @@ -38,11 +38,9 @@ The diagram below is an overview of how Dapr's service invocation works when inv Diagram showing the steps of service invocation to non-Dapr endpoints 1. Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar. -2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL. -3. Dapr forwards the message to Service B. -4. Service B runs its business logic code. -5. Service B sends a response to Service A's Dapr sidecar. -6. Service A receives the response. +2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL then forwards the message to Service B. +3. Service B sends a response to Service A's Dapr sidecar. +4. Service A receives the response. ## Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following: diff --git a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md index 6a6c27d4b..e630365db 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md +++ b/daprdocs/content/en/developing-applications/building-blocks/state-management/howto-get-save-state.md @@ -10,8 +10,6 @@ State management is one of the most common needs of any new, legacy, monolith, o In this guide, you'll learn the basics of using the key/value state API to allow an application to save, get, and delete state. -## Example - The code example below _loosely_ describes an application that processes orders with an order processing service which has a Dapr sidecar. The order processing service uses Dapr to store state in a Redis state store. Diagram showing state management of example service @@ -554,7 +552,7 @@ namespace EventService string DAPR_STORE_NAME = "statestore"; //Using Dapr SDK to retrieve multiple states using var client = new DaprClientBuilder().Build(); - IReadOnlyList mulitpleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List { "order_1", "order_2" }, parallelism: 1); + IReadOnlyList multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List { "order_1", "order_2" }, parallelism: 1); } } } diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-author-workflow.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-author-workflow.md index 2b37739d1..3345b97b2 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-author-workflow.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-author-workflow.md @@ -6,10 +6,6 @@ weight: 5000 description: "Learn how to develop and author workflows" --- -{{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}). -{{% /alert %}} - This article provides a high-level overview of how to author workflows that are executed by the Dapr Workflow engine. 
{{% alert title="Note" color="primary" %}} @@ -821,7 +817,7 @@ func main() { ctx := context.Background() // Start workflow test - respStart, err := daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{ + respStart, err := daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, WorkflowName: "TestWorkflow", @@ -835,7 +831,7 @@ func main() { fmt.Printf("workflow started with id: %v\n", respStart.InstanceID) // Pause workflow test - err = daprClient.PauseWorkflowBeta1(ctx, &client.PauseWorkflowRequest{ + err = daprClient.PauseWorkflow(ctx, &client.PauseWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -844,7 +840,7 @@ func main() { log.Fatalf("failed to pause workflow: %v", err) } - respGet, err := daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ + respGet, err := daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -859,7 +855,7 @@ func main() { fmt.Printf("workflow paused\n") // Resume workflow test - err = daprClient.ResumeWorkflowBeta1(ctx, &client.ResumeWorkflowRequest{ + err = daprClient.ResumeWorkflow(ctx, &client.ResumeWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -868,7 +864,7 @@ func main() { log.Fatalf("failed to resume workflow: %v", err) } - respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ + respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -886,7 +882,7 @@ func main() { // Raise Event Test - err = daprClient.RaiseEventWorkflowBeta1(ctx, &client.RaiseEventWorkflowRequest{ + err = daprClient.RaiseEventWorkflow(ctx, &client.RaiseEventWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, EventName: "testEvent", @@ -904,7 +900,7 @@ func main() { fmt.Printf("stage: %d\n", stage) - respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ + respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -915,7 +911,7 @@ func main() { fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus) // Purge workflow test - err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{ + err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -923,7 +919,7 @@ func main() { log.Fatalf("failed to purge workflow: %v", err) } - respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ + respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -936,7 +932,7 @@ func main() { fmt.Printf("stage: %d\n", stage) // Terminate workflow test - respStart, err = daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{ + respStart, err = daprClient.StartWorkflow(ctx, &client.StartWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, WorkflowName: "TestWorkflow", @@ -950,7 +946,7 @@ func main() { fmt.Printf("workflow started with id: %s\n", respStart.InstanceID) 
- err = daprClient.TerminateWorkflowBeta1(ctx, &client.TerminateWorkflowRequest{ + err = daprClient.TerminateWorkflow(ctx, &client.TerminateWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -958,7 +954,7 @@ func main() { log.Fatalf("failed to terminate workflow: %v", err) } - respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ + respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) @@ -971,12 +967,12 @@ func main() { fmt.Println("workflow terminated") - err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{ + err = daprClient.PurgeWorkflow(ctx, &client.PurgeWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) - respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ + respGet, err = daprClient.GetWorkflow(ctx, &client.GetWorkflowRequest{ InstanceID: "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", WorkflowComponent: workflowComponent, }) diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md index 162ec4a41..f03f4a4c4 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md @@ -6,10 +6,6 @@ weight: 6000 description: Manage and run workflows --- -{{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}). -{{% /alert %}} - Now that you've [authored the workflow and its activities in your application]({{< ref howto-author-workflow.md >}}), you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}). {{< tabs Python JavaScript ".NET" Java Go HTTP >}} @@ -324,7 +320,7 @@ Manage your workflow using HTTP calls. The example below plugs in the properties To start your workflow with an ID `12345678`, run: ```http -POST http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 +POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 ``` Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes. @@ -334,7 +330,7 @@ Note that workflow instance IDs can only contain alphanumeric characters, unders To terminate your workflow with an ID `12345678`, run: ```http -POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate +POST http://localhost:3500/v1.0/workflows/dapr/12345678/terminate ``` ### Raise an event @@ -342,7 +338,7 @@ POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance. ```http -POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/raiseEvent/<eventName> +POST http://localhost:3500/v1.0/workflows/<workflowComponentName>/<instanceId>/raiseEvent/<eventName> ``` > An `eventName` can be any function. @@ -352,13 +348,13 @@ POST http://localhost:3500/v1.0-beta1/workflows//}}).
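To make the HTTP examples above concrete, here is a minimal Go sketch (not part of the docs samples) that starts a workflow instance over the sidecar's HTTP API and then fetches its status. It assumes a sidecar listening on the default HTTP port 3500, the `OrderProcessingWorkflow` used elsewhere on this page, and the stable `v1.0` routes introduced by this change; see the workflow API reference for the exact response shapes.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Start an OrderProcessingWorkflow instance with ID 12345678;
	// the request body becomes the workflow input.
	startURL := "http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678"
	body := bytes.NewBufferString(`{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}`)

	resp, err := http.Post(startURL, "application/json", body)
	if err != nil {
		log.Fatalf("failed to start workflow: %v", err)
	}
	defer resp.Body.Close()
	startResp, _ := io.ReadAll(resp.Body)
	fmt.Printf("start response: %s\n", startResp)

	// Fetch the status of the instance by its ID.
	statusResp, err := http.Get("http://localhost:3500/v1.0/workflows/dapr/12345678")
	if err != nil {
		log.Fatalf("failed to get workflow status: %v", err)
	}
	defer statusResp.Body.Close()
	statusBody, _ := io.ReadAll(statusResp.Body)
	fmt.Printf("status response: %s\n", statusBody)
}
```

The terminate and raise event endpoints follow the same URL shape and can be called the same way.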
diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md index 06269e0ad..186ea3264 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md @@ -6,10 +6,6 @@ weight: 4000 description: "The Dapr Workflow engine architecture" --- -{{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}). -{{% /alert %}} - [Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. This article describes: - The architecture of the Dapr Workflow engine diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md index efdac157c..7ee3b500d 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-features-concepts.md @@ -6,10 +6,6 @@ weight: 2000 description: "Learn more about the Dapr Workflow features and concepts" --- -{{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}). -{{% /alert %}} - Now that you've learned about the [workflow building block]({{< ref workflow-overview.md >}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages. {{% alert title="Note" color="primary" %}} @@ -135,7 +131,7 @@ Because workflow retry policies are configured in code, the exact developer expe | --- | --- | | **Maximum number of attempts** | The maximum number of times to execute the activity or child workflow. | | **First retry interval** | The amount of time to wait before the first retry. | -| **Backoff coefficient** | The amount of time to wait before each subsequent retry. | +| **Backoff coefficient** | The coefficient used to determine the rate of increase of back-off. For example, a coefficient of 2 doubles the wait time of each subsequent retry. | | **Maximum retry interval** | The maximum amount of time to wait before each subsequent retry. | | **Retry timeout** | The overall timeout for retries, regardless of any configured max number of attempts.
| diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md index b4fa5a443..a4447fc65 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md @@ -6,10 +6,6 @@ weight: 1000 description: "Overview of Dapr Workflow" --- -{{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations]({{< ref "#limitations" >}}). -{{% /alert %}} - Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way. Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices. Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings. The durable, resilient Dapr Workflow capability: @@ -94,7 +90,7 @@ Want to put workflows to the test? Walk through the following quickstart and tut | Quickstart/tutorial | Description | | ------------------- | ----------- | | [Workflow quickstart]({{< ref workflow-quickstart.md >}}) | Run a workflow application with four workflow activities to see Dapr Workflow in action | -| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `DaprClient` package. | +| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `dapr-ext-workflow` package. | | [Workflow JavaScript SDK example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | Learn how to create a Dapr Workflow and invoke it using the JavaScript SDK. | | [Workflow .NET SDK example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | Learn how to create a Dapr Workflow and invoke it using ASP.NET Core web APIs. | | [Workflow Java SDK example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | Learn how to create a Dapr Workflow and invoke it using the Java `io.dapr.workflows` package. | @@ -106,8 +102,7 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi ## Limitations -- **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. -- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2. +- **State stores:** Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. 
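Since the overview describes authoring workflows as ordinary code, here is a minimal Go sketch of what that looks like with the Go SDK's `workflow` package; the `OrderWorkflow` and `ProcessOrderActivity` names are illustrative only, not part of the samples above.

```go
package main

import (
	"log"

	"github.com/dapr/go-sdk/workflow"
)

// OrderWorkflow is ordinary code: it reads its input from the context,
// schedules an activity, and returns a result that Dapr persists durably.
func OrderWorkflow(ctx *workflow.WorkflowContext) (any, error) {
	var order string
	if err := ctx.GetInput(&order); err != nil {
		return nil, err
	}
	var result string
	if err := ctx.CallActivity(ProcessOrderActivity, workflow.ActivityInput(order)).Await(&result); err != nil {
		return nil, err
	}
	return result, nil
}

// ProcessOrderActivity does the actual work and can be retried independently.
func ProcessOrderActivity(ctx workflow.ActivityContext) (any, error) {
	var order string
	if err := ctx.GetInput(&order); err != nil {
		return nil, err
	}
	return "processed: " + order, nil
}

func main() {
	w, err := workflow.NewWorker()
	if err != nil {
		log.Fatalf("failed to create workflow worker: %v", err)
	}
	w.RegisterWorkflow(OrderWorkflow)
	w.RegisterActivity(ProcessOrderActivity)
	if err := w.Start(); err != nil {
		log.Fatalf("failed to start workflow worker: %v", err)
	}
	select {} // keep the worker alive
}
```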
## Watch the demo diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md index fe6f69b63..24db5b492 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md @@ -647,7 +647,7 @@ The Dapr workflow HTTP API supports the asynchronous request-reply pattern out-o The following `curl` commands illustrate how the workflow APIs support this pattern. ```bash -curl -X POST http://localhost:3500/v1.0-beta1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}' +curl -X POST http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 -d '{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}' ``` The previous command will result in the following response JSON: @@ -659,7 +659,7 @@ The previous command will result in the following response JSON: The HTTP client can then construct the status query URL using the workflow instance ID and poll it repeatedly until it sees the "COMPLETE", "FAILURE", or "TERMINATED" status in the payload. ```bash -curl http://localhost:3500/v1.0-beta1/workflows/dapr/12345678 +curl http://localhost:3500/v1.0/workflows/dapr/12345678 ``` The following is an example of what an in-progress workflow status might look like. @@ -749,7 +749,7 @@ def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus): ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!") next_sleep_interval = 5 # check more frequently when unhealthy - yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(seconds=next_sleep_interval)) + yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval)) # restart from the beginning with a new JobStatus input ctx.continue_as_new(job) @@ -896,7 +896,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { } if status == "healthy" { job.IsHealthy = true - sleepInterval = time.Second * 60 + sleepInterval = time.Minute * 60 } else { if job.IsHealthy { job.IsHealthy = false @@ -905,7 +905,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { return "", err } } - sleepInterval = time.Second * 5 + sleepInterval = time.Minute * 5 } if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil { return "", err @@ -1365,7 +1365,7 @@ func raiseEvent() { if err != nil { log.Fatalf("failed to initialize the client") } - err = daprClient.RaiseEventWorkflowBeta1(context.Background(), &client.RaiseEventWorkflowRequest{ + err = daprClient.RaiseEventWorkflow(context.Background(), &client.RaiseEventWorkflowRequest{ InstanceID: "instance_id", WorkflowComponent: "dapr", EventName: "approval_received", diff --git a/daprdocs/content/en/developing-applications/debugging/_index.md b/daprdocs/content/en/developing-applications/debugging/_index.md index bb9d76df1..d6d77e77d 100644 --- a/daprdocs/content/en/developing-applications/debugging/_index.md +++ b/daprdocs/content/en/developing-applications/debugging/_index.md @@ -2,6 +2,6 @@ type: docs title: "Debugging Dapr applications and the Dapr control plane" linkTitle: "Debugging" -weight: 50 +weight: 60 description: "Guides on how to debug Dapr applications and the Dapr control plane" --- \ No newline at end of file diff --git
a/daprdocs/content/en/developing-applications/develop-components/_index.md b/daprdocs/content/en/developing-applications/develop-components/_index.md index cb9f7e8a8..970744958 100644 --- a/daprdocs/content/en/developing-applications/develop-components/_index.md +++ b/daprdocs/content/en/developing-applications/develop-components/_index.md @@ -2,6 +2,6 @@ type: docs title: "Components" linkTitle: "Components" -weight: 30 +weight: 40 description: "Learn more about developing Dapr's pluggable and middleware components" --- diff --git a/daprdocs/content/en/developing-applications/error-codes/_index.md b/daprdocs/content/en/developing-applications/error-codes/_index.md new file mode 100644 index 000000000..f693722f5 --- /dev/null +++ b/daprdocs/content/en/developing-applications/error-codes/_index.md @@ -0,0 +1,8 @@ +--- +type: docs +title: "Error codes" +linkTitle: "Error codes" +weight: 20 +description: "Error codes and messages you may encounter while using Dapr" +--- + diff --git a/daprdocs/content/en/developing-applications/error-codes/error-codes-reference.md b/daprdocs/content/en/developing-applications/error-codes/error-codes-reference.md new file mode 100644 index 000000000..314bf67c4 --- /dev/null +++ b/daprdocs/content/en/developing-applications/error-codes/error-codes-reference.md @@ -0,0 +1,152 @@ +--- +type: docs +title: "Error codes reference guide" +linkTitle: "Reference" +description: "List of gRPC and HTTP error codes in Dapr and their descriptions" +weight: 20 +--- + +The following tables list the error codes returned by the Dapr runtime: + +### Actors API + +| Error Code | Description | +| -------------------------------- | ------------------------------------------ | +| ERR_ACTOR_INSTANCE_MISSING | Error when an actor instance is missing. | +| ERR_ACTOR_RUNTIME_NOT_FOUND | Error when the actor runtime is not found. | +| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. | +| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. | +| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. | +| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. | +| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. | +| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. | +| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. | +| ERR_ACTOR_STATE_GET | Error getting the state for an actor. | +| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. | +| ERR_ACTOR_REMINDER_NON_HOSTED | Error setting reminder for an actor. | + +### Workflows API + +| Error Code | Description | +| -------------------------------- | ----------------------------------------------------------- | +| ERR_GET_WORKFLOW | Error getting workflow. | +| ERR_START_WORKFLOW | Error starting the workflow. | +| ERR_PAUSE_WORKFLOW | Error pausing the workflow. | +| ERR_RESUME_WORKFLOW | Error resuming the workflow. | +| ERR_TERMINATE_WORKFLOW | Error terminating the workflow. | +| ERR_PURGE_WORKFLOW | Error purging workflow. | +| ERR_RAISE_EVENT_WORKFLOW | Error raising an event within the workflow. | +| ERR_WORKFLOW_COMPONENT_MISSING | Error when a workflow component is missing a configuration. | +| ERR_WORKFLOW_COMPONENT_NOT_FOUND | Error when a workflow component is not found. | +| ERR_WORKFLOW_EVENT_NAME_MISSING | Error when the event name for a workflow is missing. | +| ERR_WORKFLOW_NAME_MISSING | Error when the workflow name is missing. | +| ERR_INSTANCE_ID_INVALID | Error invalid workflow instance ID provided.
| +| ERR_INSTANCE_ID_NOT_FOUND | Error workflow instance ID not found. | +| ERR_INSTANCE_ID_PROVIDED_MISSING | Error workflow instance ID was provided but missing. | +| ERR_INSTANCE_ID_TOO_LONG | Error workflow instance ID exceeds allowable length. | + +### State Management API + +| Error Code | Description | +| ------------------------------------- | ------------------------------------------------------------------------- | +| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. | +| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. | +| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. | +| ERR_STATE_GET | Error getting a state for state store. | +| ERR_STATE_DELETE | Error deleting a state from state store. | +| ERR_STATE_SAVE | Error saving a state in state store. | +| ERR_STATE_TRANSACTION | Error encountered during state transaction. | +| ERR_STATE_BULK_GET | Error performing bulk retrieval of state entries. | +| ERR_STATE_QUERY | Error querying the state store. | +| ERR_STATE_STORE_NOT_CONFIGURED | Error state store is not configured. | +| ERR_STATE_STORE_NOT_SUPPORTED | Error state store is not supported. | +| ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | Error exceeded maximum allowable transactions. | + +### Configuration API + +| Error Code | Description | +| -------------------------------------- | -------------------------------------------- | +| ERR_CONFIGURATION_GET | Error retrieving configuration. | +| ERR_CONFIGURATION_STORE_NOT_CONFIGURED | Error configuration store is not configured. | +| ERR_CONFIGURATION_STORE_NOT_FOUND | Error configuration store not found. | +| ERR_CONFIGURATION_SUBSCRIBE | Error subscribing to a configuration. | +| ERR_CONFIGURATION_UNSUBSCRIBE | Error unsubscribing from a configuration. | + +### Crypto API + +| Error Code | Description | +| ----------------------------------- | ------------------------------------------ | +| ERR_CRYPTO | General crypto building block error. | +| ERR_CRYPTO_KEY | Error related to a crypto key. | +| ERR_CRYPTO_PROVIDER_NOT_FOUND | Error specified crypto provider not found. | +| ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | Error no crypto providers configured. | + +### Secrets API + +| Error Code | Description | +| -------------------------------- | ---------------------------------------------------- | +| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. | +| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. | +| ERR_SECRET_GET | Error retrieving the specified secret. | +| ERR_PERMISSION_DENIED | Error access denied due to insufficient permissions. | + +### Pub/Sub API + +| Error Code | Description | +| --------------------------- | -------------------------------------------------------- | +| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. | +| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. | +| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. | +| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. | +| ERR_PUBSUB_EMPTY | Error empty Pub/Sub. | +| ERR_PUBSUB_NOT_CONFIGURED | Error Pub/Sub component is not configured. | +| ERR_PUBSUB_REQUEST_METADATA | Error with metadata in Pub/Sub request. | +| ERR_PUBSUB_EVENTS_SER | Error serializing Pub/Sub events. | +| ERR_PUBLISH_OUTBOX | Error publishing message to the outbox. | +| ERR_TOPIC_NAME_EMPTY | Error topic name for Pub/Sub message is empty. 
| + +### Bindings API + +| Error Code | Description | +| ------------------------- | --------------------------------- | +| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. | + +### Service Invocation API + +| Error Code | Description | +| ----------------- | --------------------------- | +| ERR_DIRECT_INVOKE | Error in direct invocation. | + +### Conversation API + +| Error Code | Description | +| ------------------------------- | ----------------------------------------------- | +| ERR_CONVERSATION_INVALID_PARMS | Error invalid parameters for conversation. | +| ERR_CONVERSATION_INVOKE | Error invoking the conversation. | +| ERR_CONVERSATION_MISSING_INPUTS | Error missing required inputs for conversation. | +| ERR_CONVERSATION_NOT_FOUND | Error conversation not found. | + +### Distributed Lock API + +| Error Code | Description | +| ----------------------------- | ----------------------------------- | +| ERR_TRY_LOCK | Error attempting to acquire a lock. | +| ERR_UNLOCK | Error attempting to release a lock. | +| ERR_LOCK_STORE_NOT_CONFIGURED | Error lock store is not configured. | +| ERR_LOCK_STORE_NOT_FOUND | Error lock store not found. | + +### Healthz + +| Error Code | Description | +| ----------------------------- | --------------------------------------------------------------- | +| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. | +| ERR_HEALTH_APPID_NOT_MATCH | Error the app-id does not match expected value in health check. | +| ERR_OUTBOUND_HEALTH_NOT_READY | Error outbound connection health is not ready. | + +### Common + +| Error Code | Description | +| -------------------------- | ------------------------------------------------ | +| ERR_API_UNIMPLEMENTED | Error API is not implemented. | +| ERR_APP_CHANNEL_NIL | Error application channel is nil. | +| ERR_BAD_REQUEST | Error client request is badly formed or invalid. | +| ERR_BODY_READ | Error reading body. | +| ERR_INTERNAL | Internal server error encountered. | +| ERR_MALFORMED_REQUEST | Error with a malformed request. | +| ERR_MALFORMED_REQUEST_DATA | Error request data is malformed. | +| ERR_MALFORMED_RESPONSE | Error response data is malformed. | + +## Next steps + +- [Handling HTTP error codes]({{< ref http-error-codes.md >}}) +- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/error-codes/errors-overview.md b/daprdocs/content/en/developing-applications/error-codes/errors-overview.md new file mode 100644 index 000000000..762413fb7 --- /dev/null +++ b/daprdocs/content/en/developing-applications/error-codes/errors-overview.md @@ -0,0 +1,62 @@ +--- +type: docs +title: "Errors overview" +linkTitle: "Overview" +weight: 10 +description: "Overview of Dapr errors" +--- + +An error code is a numeric or alphanumeric code that indicates the nature of an error and, when possible, why it occurred. + +Dapr error codes are standardized strings for 80+ common errors across HTTP and gRPC requests when using the Dapr APIs. These codes are both: +- Returned in the JSON response body of the request. +- When enabled, logged in debug-level logs in the runtime. + - If you're running in Kubernetes, error codes are logged in the sidecar. + - If you're running in self-hosted mode, you can enable and view debug logs. + +## Error format + +Dapr error codes consist of a prefix, a category, and a shorthand of the error itself.
For example: + +| Prefix | Category | Error shorthand | +| ------ | -------- | --------------- | +| ERR_ | PUBSUB_ | NOT_FOUND | + +Some of the most common errors returned include: + +- ERR_ACTOR_TIMER_CREATE +- ERR_PURGE_WORKFLOW +- ERR_STATE_STORE_NOT_FOUND +- ERR_HEALTH_NOT_READY + +> **Note:** [See a full list of error codes in Dapr.]({{< ref error-codes-reference.md >}}) + +An error returned for a state store not found might look like the following: + +```json +{ + "error": "Bad Request", + "error_msg": "{\"errorCode\":\"ERR_STATE_STORE_NOT_FOUND\",\"message\":\"state store is not found\",\"details\":[{\"@type\":\"type.googleapis.com/google.rpc.ErrorInfo\",\"domain\":\"dapr.io\",\"metadata\":{\"appID\":\"nodeapp\"},\"reason\":\"DAPR_STATE_NOT_FOUND\"}]}", + "status": 400 +} +``` + +The returned error includes: +- The error code: `ERR_STATE_STORE_NOT_FOUND` +- The error message describing the issue: `state store is not found` +- The app ID in which the error is occurring: `nodeapp` +- The reason for the error: `DAPR_STATE_NOT_FOUND` + +## Dapr error code metrics + +Metrics help you see exactly when errors are occurring within the runtime. Error code metrics are collected using the `error_code_total` endpoint. This endpoint is disabled by default. You can [enable it using the `recordErrorCodes` field in your configuration file]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}). + +## Demo + +Watch a demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how to enable error code metrics and handle error codes returned in the runtime. + + + +## Next step + +{{< button text="See a list of all Dapr error codes" page="error-codes-reference" >}} \ No newline at end of file diff --git a/daprdocs/content/en/reference/errors/_index.md b/daprdocs/content/en/developing-applications/error-codes/grpc-error-codes.md similarity index 93% rename from daprdocs/content/en/reference/errors/_index.md rename to daprdocs/content/en/developing-applications/error-codes/grpc-error-codes.md index 35f685f74..1d343cce5 100644 --- a/daprdocs/content/en/reference/errors/_index.md +++ b/daprdocs/content/en/developing-applications/error-codes/grpc-error-codes.md @@ -1,20 +1,18 @@ --- type: docs -title: Dapr errors -linkTitle: "Dapr errors" -weight: 700 -description: "Information on Dapr errors and how to handle them" +title: Handling gRPC error codes +linkTitle: "gRPC" +weight: 40 +description: "Information on Dapr gRPC errors and how to handle them" --- -## Error handling: Understanding errors model and reporting - Initially, errors followed the [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model). However, to provide more detailed and informative error messages, an enhanced error model has been defined which aligns with the gRPC [Richer error model](https://grpc.io/docs/guides/error/#richer-error-model). {{% alert title="Note" color="primary" %}} Not all Dapr errors have been converted to the richer gRPC error model. {{% /alert %}} -### Standard gRPC Error Model +## Standard gRPC Error Model The [Standard gRPC error model](https://grpc.io/docs/guides/error/#standard-error-model) is an approach to error reporting in gRPC. Each error response includes an error code and an error message. The error codes are standardized and reflect common error conditions.
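As an illustration of handling these errors from a client, the following is a minimal Go sketch, assuming the Dapr Go SDK client and a state store component named `statestore`; it uses the standard `google.golang.org/grpc/status` and `errdetails` packages to unpack both the standard fields and any richer details.

```go
package main

import (
	"context"
	"fmt"

	dapr "github.com/dapr/go-sdk/client"
	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/status"
)

func main() {
	client, err := dapr.NewClient()
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Keys containing '||' are rejected by the state API, which
	// produces an error like the one shown below.
	err = client.SaveState(context.Background(), "statestore", "bad||keyname", []byte("value"), nil)
	if err != nil {
		// status.FromError recovers the gRPC status carried by the error.
		st, ok := status.FromError(err)
		if !ok {
			fmt.Printf("not a gRPC status error: %v\n", err)
			return
		}
		fmt.Printf("code: %s, message: %s\n", st.Code(), st.Message())

		// With the richer model, details carry typed payloads such as
		// ErrorInfo, ResourceInfo, and BadRequest.
		for _, detail := range st.Details() {
			switch d := detail.(type) {
			case *errdetails.ErrorInfo:
				fmt.Printf("reason: %s, domain: %s, metadata: %v\n", d.Reason, d.Domain, d.Metadata)
			default:
				fmt.Printf("detail: %+v\n", d)
			}
		}
	}
}
```

Errors that haven't been converted to the richer model simply return no details, so the loop body is skipped.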
@@ -25,7 +23,7 @@ ERROR: Message: input key/keyPrefix 'bad||keyname' can't contain '||' ``` -### Richer gRPC Error Model +## Richer gRPC Error Model The [Richer gRPC error model](https://grpc.io/docs/guides/error/#richer-error-model) extends the standard error model by providing additional context and details about the error. This model includes the standard error `code` and `message`, along with a `details` section that can contain various types of information, such as `ErrorInfo`, `ResourceInfo`, and `BadRequest` details. diff --git a/daprdocs/content/en/developing-applications/error-codes/http-error-codes.md b/daprdocs/content/en/developing-applications/error-codes/http-error-codes.md new file mode 100644 index 000000000..1b069ebaf --- /dev/null +++ b/daprdocs/content/en/developing-applications/error-codes/http-error-codes.md @@ -0,0 +1,21 @@ +--- +type: docs +title: "Handling HTTP error codes" +linkTitle: "HTTP" +description: "Detailed reference of the Dapr HTTP error codes and how to handle them" +weight: 30 +--- + +For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the response body. The JSON contains an error code and a descriptive error message. + +```json +{ + "errorCode": "ERR_STATE_GET", + "message": "Requested state key does not exist in state store." +} +``` + +## Related + +- [Error code reference list]({{< ref error-codes-reference.md >}}) +- [Handling gRPC error codes]({{< ref grpc-error-codes.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md b/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md index f11565ceb..94757e86b 100644 --- a/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md +++ b/daprdocs/content/en/developing-applications/integrations/AWS/authenticating-aws.md @@ -8,24 +8,70 @@ aliases: - /developing-applications/integrations/authenticating/authenticating-aws/ --- -All Dapr components using various AWS services (DynamoDB, SQS, S3, etc) use a standardized set of attributes for configuration via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials). +Dapr components leveraging AWS services (for example, DynamoDB, SQS, S3) utilize standardized configuration attributes via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials). -Since you can configure the AWS SDK using the default provider chain, all of the following attributes are optional. Test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly. +You can configure authentication using the AWS SDK’s default provider chain or one of the predefined AWS authentication profiles outlined below. Verify your component configuration by testing and inspecting Dapr runtime logs to confirm proper initialization. -| Attribute | Description | -| --------- | ----------- | -| `region` | Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example), this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec.
| -| `endpoint` | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). | -| `accessKey` | AWS Access key id. | -| `secretKey` | AWS Secret access key. Use together with `accessKey` to explicitly specify credentials. | -| `sessionToken` | AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required. | +### Terminology +- **ARN (Amazon Resource Name):** A unique identifier used to specify AWS resources. Format: `arn:partition:service:region:account-id:resource`. Example: `arn:aws:iam::123456789012:role/example-role`. +- **IAM (Identity and Access Management):** AWS's service for managing access to AWS resources securely. + +### Authentication Profiles + +#### Access Key ID and Secret Access Key +Use static Access Key and Secret Key credentials, either through component metadata fields or via [default AWS configuration](https://docs.aws.amazon.com/sdkref/latest/guide/creds-config-files.html). {{% alert title="Important" color="warning" %}} -You **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using: -- When running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes) -- If using a node/pod that has already been attached to an IAM policy defining access to AWS resources +Prefer loading credentials via the default AWS configuration in scenarios such as: +- Running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes). +- Using nodes or pods attached to IAM policies that define AWS resource access. {{% /alert %}} +| Attribute | Required | Description | Example | +| --------- | ----------- | ----------- | ----------- | +| `region` | Y | AWS region to connect to. | "us-east-1" | +| `accessKey` | N | AWS Access key id. Will be required in Dapr v1.17. | "AKIAIOSFODNN7EXAMPLE" | +| `secretKey` | N | AWS Secret access key, used alongside `accessKey`. Will be required in Dapr v1.17. | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" | +| `sessionToken` | N | AWS Session token, used with `accessKey` and `secretKey`. Often unnecessary for IAM user keys. | | + +#### Assume IAM Role +This profile allows Dapr to assume a specific IAM Role. Typically used when the Dapr sidecar runs on EKS or nodes/pods linked to IAM policies. Currently supported by Kafka and PostgreSQL components. + +| Attribute | Required | Description | Example | +| --------- | ----------- | ----------- | ----------- | +| `region` | Y | AWS region to connect to. | "us-east-1" | +| `assumeRoleArn` | N | ARN of the IAM role with AWS resource access. Will be required in Dapr v1.17. | "arn:aws:iam::123456789:role/mskRole" | +| `sessionName` | N | Session name for role assumption. Default is `"DaprDefaultSession"`. | "MyAppSession" | + +#### Credentials from Environment Variables +Authenticate using [environment variables](https://docs.aws.amazon.com/sdkref/latest/guide/environment-variables.html). This is especially useful for Dapr in self-hosted mode where sidecar injectors don’t configure environment variables. + +There are no metadata fields required for this authentication profile. + +#### IAM Roles Anywhere +[IAM Roles Anywhere](https://aws.amazon.com/iam/roles-anywhere/) extends IAM role-based authentication to external workloads. 
It eliminates the need for long-term credentials by using cryptographically signed certificates, anchored in a trust relationship using Dapr PKI. Dapr SPIFFE identity X.509 certificates are used to authenticate to AWS services, and Dapr handles credential rotation at half the session lifespan. + +To configure this authentication profile: +1. Create a Trust Anchor in the trusting AWS account using the Dapr certificate bundle as an `External certificate bundle`. +2. Create an IAM role with the necessary resource permissions policy, as well as a trust entity for the Roles Anywhere AWS service. Here, you specify the allowed SPIFFE identities. +3. Create an IAM Profile under the Roles Anywhere service, linking the IAM Role. + +| Attribute | Required | Description | Example | +| --------- | ----------- | ----------- | ----------- | +| `trustAnchorArn` | Y | ARN of the Trust Anchor in the AWS account granting trust to the Dapr Certificate Authority. | arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901 | +| `trustProfileArn` | Y | ARN of the AWS IAM Profile in the trusting AWS account. | arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901 | +| `assumeRoleArn` | Y | ARN of the AWS IAM role to assume in the trusting AWS account. | arn:aws:iam::012345678910:role/exampleIAMRoleName | + +### Additional Fields + +Some AWS components include additional optional fields: + +| Attribute | Required | Description | Example | +| --------- | ----------- | ----------- | ----------- | +| `endpoint` | N | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). | | + +Furthermore, non-native AWS components such as Kafka and PostgreSQL that support AWS authentication profiles have metadata fields to trigger the AWS authentication logic. Be sure to check specific component documentation. + ## Alternatives to explicitly specifying credentials in component manifest files In production scenarios, it is recommended to use a solution such as: diff --git a/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md b/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md index 554ca118a..c7504b56c 100644 --- a/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md +++ b/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md @@ -26,6 +26,4 @@ By studying past resource behavior, recommend application resource optimization The application graph facilitates collaboration between dev and ops by providing a dynamic overview of your services and infrastructure components. -Try out [Conductor Free](https://www.diagrid.io/pricing), ideal for individual developers building and testing Dapr applications on Kubernetes.
- {{< button text="Learn more about Diagrid Conductor" link="https://www.diagrid.io/conductor" >}} diff --git a/daprdocs/content/en/developing-applications/integrations/_index.md b/daprdocs/content/en/developing-applications/integrations/_index.md index a884aeb5c..b988581b7 100644 --- a/daprdocs/content/en/developing-applications/integrations/_index.md +++ b/daprdocs/content/en/developing-applications/integrations/_index.md @@ -2,6 +2,6 @@ type: docs title: "Integrations" linkTitle: "Integrations" -weight: 60 +weight: 70 description: "Dapr integrations with other technologies" --- \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/local-development/_index.md b/daprdocs/content/en/developing-applications/local-development/_index.md index b06587df5..8ffc396d4 100644 --- a/daprdocs/content/en/developing-applications/local-development/_index.md +++ b/daprdocs/content/en/developing-applications/local-development/_index.md @@ -2,6 +2,6 @@ type: docs title: "Local development" linkTitle: "Local development" -weight: 40 +weight: 50 description: "Capabilities for developing Dapr applications locally" --- \ No newline at end of file diff --git a/daprdocs/content/en/developing-applications/sdks/_index.md b/daprdocs/content/en/developing-applications/sdks/_index.md index 4f56c0513..5434d497b 100644 --- a/daprdocs/content/en/developing-applications/sdks/_index.md +++ b/daprdocs/content/en/developing-applications/sdks/_index.md @@ -2,7 +2,7 @@ type: docs title: "Dapr Software Development Kits (SDKs)" linkTitle: "SDKs" -weight: 20 +weight: 30 description: "Use your favorite languages with Dapr" no_list: true --- diff --git a/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md index a412cc014..3b7ad2068 100644 --- a/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/actors-quickstart.md @@ -18,7 +18,7 @@ Currently, you can experience this actors quickstart using the .NET SDK. As a quick overview of the .NET actors quickstart: 1. Using a `SmartDevice.Service` microservice, you host: - - Two `SmartDectectorActor` smoke alarm objects + - Two `SmokeDetectorActor` smoke alarm objects - A `ControllerActor` object that commands and controls the smart devices 1. Using a `SmartDevice.Client` console app, the client app interacts with each actor, or the controller, to perform actions in aggregate. 1. The `SmartDevice.Interfaces` contains the shared interfaces and data types used by both the service and client apps. @@ -119,7 +119,7 @@ If you have Zipkin configured for Dapr locally on your machine, you can view the When you ran the client app, a few things happened: -1. Two `SmartDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with: +1. Two `SmokeDetectorActor` actors were [created in the client application](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/client/Program.cs) and initialized with object state with: - `ActorProxy.Create(actorId, actorType)` - `proxySmartDevice.SetDataAsync(data)` @@ -177,7 +177,7 @@ When you ran the client app, a few things happened: Console.WriteLine($"Device 2 state: {storedDeviceData2}"); ``` -1. 
The [`DetectSmokeAsync` method of `SmartDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70). +1. The [`DetectSmokeAsync` method of `SmokeDetectorActor 1` is called](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L70). ```csharp public async Task DetectSmokeAsync() @@ -216,7 +216,7 @@ When you ran the client app, a few things happened: await proxySmartDevice1.DetectSmokeAsync(); ``` -1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmartDetectorActor 1` and `2` are called. +1. The [`SoundAlarm` methods](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs#L78) of `SmokeDetectorActor 1` and `2` are called. ```csharp storedDeviceData1 = await proxySmartDevice1.GetDataAsync(); @@ -234,9 +234,9 @@ When you ran the client app, a few things happened: For full context of the sample, take a look at the following code: -- [`SmartDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors +- [`SmokeDetectorActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/SmokeDetectorActor.cs): Implements the smart device actors - [`ControllerActor.cs`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/service/ControllerActor.cs): Implements the controller actor that manages all devices -- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmartDetectorActor` +- [`ISmartDevice`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/ISmartDevice.cs): The method definitions and shared data types for each `SmokeDetectorActor` - [`IController`](https://github.com/dapr/quickstarts/blob/master/actors/csharp/sdk/interfaces/IController.cs): The method definitions and shared data types for the `ControllerActor` {{% /codetab %}} diff --git a/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md index 8b52eedb1..9435df194 100644 --- a/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md @@ -273,23 +273,20 @@ func deleteJob(ctx context.Context, in *common.InvocationEvent) (out *common.Con // Handler that handles job events func handleJob(ctx context.Context, job *common.JobEvent) error { - var jobData common.Job - if err := json.Unmarshal(job.Data, &jobData); err != nil { - return fmt.Errorf("failed to unmarshal job: %v", err) - } - decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value) - if err != nil { - return fmt.Errorf("failed to decode job payload: %v", err) - } - var jobPayload JobData - if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil { - return fmt.Errorf("failed to unmarshal payload: %v", err) - } + var jobData common.Job + if err := json.Unmarshal(job.Data, &jobData); err != nil { + return fmt.Errorf("failed to unmarshal job: %v", err) + } - fmt.Println("Starting droid:", jobPayload.Droid) - fmt.Println("Executing maintenance job:", jobPayload.Task) + var jobPayload JobData + if err := json.Unmarshal(job.Data, &jobPayload); err != nil { + return fmt.Errorf("failed to unmarshal payload: %v", 
err) + } - return nil + fmt.Println("Starting droid:", jobPayload.Droid) + fmt.Println("Executing maintenance job:", jobPayload.Task) + + return nil } ``` diff --git a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md index 5a025aed8..3cf04fac9 100644 --- a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md @@ -7,7 +7,7 @@ description: Get started with the Dapr Workflow building block --- {{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}). +Redis is currently used as the state store component for Workflows in the Quickstarts. However, Redis does not support transaction rollbacks and should not be used in production as an actor state store. {{% /alert %}} Let's take a look at the Dapr [Workflow building block]({{< ref workflow-overview.md >}}). In this Quickstart, you'll create a simple console application to demonstrate Dapr's workflow programming model and the workflow management APIs. @@ -66,12 +66,18 @@ Install the Dapr Python SDK package: pip3 install -r requirements.txt ``` -### Step 3: Run the order processor app - -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +Return to the `python/sdk` directory: + +```bash +cd .. +``` + + +### Step 3: Run the order processor app + +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `python/sdk` directory, run the following command: ```bash -cd workflows/python/sdk dapr run -f . ``` @@ -308,12 +314,11 @@ Install the dependencies: cd ./javascript/sdk npm install npm run build -cd .. ``` ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `javascript/sdk` directory, run the following command: ```bash dapr run -f . @@ -515,15 +520,28 @@ Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quic git clone https://github.com/dapr/quickstarts.git ``` -In a new terminal window, navigate to the `sdk` directory: +In a new terminal window, navigate to the `order-processor` directory: ```bash -cd workflows/csharp/sdk +cd workflows/csharp/sdk/order-processor +``` + +Install the dependencies: + +```bash +dotnet restore +dotnet build +``` + +Return to the `csharp/sdk` directory: + +```bash +cd .. ``` ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `csharp/sdk` directory, run the following command: ```bash dapr run -f . 
@@ -628,25 +646,24 @@ OrderPayload orderInfo = new OrderPayload(itemToPurchase, 15000, ammountToPurcha // Start the workflow Console.WriteLine("Starting workflow {0} purchasing {1} {2}", orderId, ammountToPurchase, itemToPurchase); -await daprClient.StartWorkflowAsync( - workflowComponent: DaprWorkflowComponent, - workflowName: nameof(OrderProcessingWorkflow), +await daprWorkflowClient.ScheduleNewWorkflowAsync( + name: nameof(OrderProcessingWorkflow), input: orderInfo, instanceId: orderId); // Wait for the workflow to start and confirm the input -GetWorkflowResponse state = await daprClient.WaitForWorkflowStartAsync( - instanceId: orderId, - workflowComponent: DaprWorkflowComponent); +WorkflowState state = await daprWorkflowClient.WaitForWorkflowStartAsync( + instanceId: orderId); -Console.WriteLine("Your workflow has started. Here is the status of the workflow: {0}", state.RuntimeStatus); +Console.WriteLine($"{nameof(OrderProcessingWorkflow)} (ID = {orderId}) started successfully with {state.ReadInputAs<OrderPayload>()}"); // Wait for the workflow to complete +using var ctx = new CancellationTokenSource(TimeSpan.FromSeconds(5)); -state = await daprClient.WaitForWorkflowCompletionAsync( +state = await daprWorkflowClient.WaitForWorkflowCompletionAsync( instanceId: orderId, - workflowComponent: DaprWorkflowComponent); + cancellation: ctx.Token); -Console.WriteLine("Workflow Status: {0}", state.RuntimeStatus); +Console.WriteLine("Workflow Status: {0}", state.ReadCustomStatusAs<string>()); ``` #### `order-processor/Workflows/OrderProcessingWorkflow.cs` @@ -697,7 +714,7 @@ class OrderProcessingWorkflow : Workflow nameof(UpdateInventoryActivity), new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost)); } - catch (TaskFailedException) + catch (WorkflowTaskFailedException) { // Let them know their payment was processed await context.CallActivityAsync( @@ -779,9 +796,15 @@ Install the dependencies: mvn clean install ``` +Return to the `java/sdk` directory: + +```bash +cd .. +``` + ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `java/sdk` directory, run the following command: ```bash cd workflows/java/sdk @@ -1114,7 +1137,7 @@ cd workflows/go/sdk ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `go/sdk` directory, run the following command: ```bash dapr run -f .
@@ -1333,4 +1356,4 @@ Join the discussion in our [discord channel](https://discord.com/channels/778680 - Walk through a more in-depth [.NET SDK example workflow](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) - Learn more about [Workflow as a Dapr building block]({{< ref workflow-overview >}}) -{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}} \ No newline at end of file +{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}} diff --git a/daprdocs/content/en/operations/configuration/api-allowlist.md b/daprdocs/content/en/operations/configuration/api-allowlist.md index 75930dba8..dffb8db39 100644 --- a/daprdocs/content/en/operations/configuration/api-allowlist.md +++ b/daprdocs/content/en/operations/configuration/api-allowlist.md @@ -6,17 +6,17 @@ weight: 4500 description: "Choose which Dapr sidecar APIs are available to the app" --- -In certain scenarios, such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application. +In scenarios such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application. -Dapr allows developers to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{}}). +Dapr allows you to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{< ref "configuration-schema.md" >}}). ### Default behavior If no API allowlist or denylist is specified, the default behavior is to allow access to all Dapr APIs. -- If only a denylist is defined, all Dapr APIs are allowed except those defined in the denylist -- If only an allowlist is defined, only the Dapr APIs listed in the allowlist are allowed -- If both an allowlist and a denylist are defined, the allowed APIs are those defined in the allowlist, unless they are also included in the denylist. In other words, the denylist overrides the allowlist for APIs that are defined in both. +- If you've only defined a denylist, all Dapr APIs are allowed except those defined in the denylist +- If you've only defined an allowlist, only the Dapr APIs listed in the allowlist are allowed +- If you've defined both an allowlist and a denylist, the denylist overrides the allowlist for APIs that are defined in both. - If neither is defined, all APIs are allowed. 
For example, the following configuration enables all APIs for both HTTP and gRPC: @@ -119,14 +119,18 @@ See this list of values corresponding to the different Dapr APIs: | [Service Invocation]({{< ref service_invocation_api.md >}}) | `invoke` (`v1.0`) | `invoke` (`v1`) | | [State]({{< ref state_api.md>}})| `state` (`v1.0` and `v1.0-alpha1`) | `state` (`v1` and `v1alpha1`) | | [Pub/Sub]({{< ref pubsub.md >}}) | `publish` (`v1.0` and `v1.0-alpha1`) | `publish` (`v1` and `v1alpha1`) | +| [Output Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) | | Subscribe | n/a | `subscribe` (`v1alpha1`) | -| [(Output) Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) | | [Secrets]({{< ref secrets_api.md >}})| `secrets` (`v1.0`) | `secrets` (`v1`) | | [Actors]({{< ref actors_api.md >}}) | `actors` (`v1.0`) |`actors` (`v1`) | | [Metadata]({{< ref metadata_api.md >}}) | `metadata` (`v1.0`) |`metadata` (`v1`) | | [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) | | [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)
`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)
`unlock` (`v1alpha1`) | -| Cryptography | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) | -| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0-alpha1`) |`workflows` (`v1alpha1`) | +| [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) | +| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0`) |`workflows` (`v1`) | | [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a | | Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) | + +## Next steps + +{{< button text="Configure Dapr to use gRPC" page="grpc" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md index 0cba2414c..5a528a224 100644 --- a/daprdocs/content/en/operations/configuration/configuration-overview.md +++ b/daprdocs/content/en/operations/configuration/configuration-overview.md @@ -1,30 +1,44 @@ --- type: docs -title: "Overview of Dapr configuration options" +title: "Dapr configuration" linkTitle: "Overview" weight: 100 -description: "Information on Dapr configuration and how to set options for your application" +description: "Overview of Dapr configuration" --- -## Sidecar configuration +Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications and the global behavior of the Dapr control plane system services. -### Setup sidecar configuration +[For more information, read the configuration concept.]({{< ref configuration-concept.md >}}) -#### Self-hosted sidecar +## Application configuration -In self hosted mode the Dapr configuration is a configuration file, for example `config.yaml`. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration eg: `$HOME/.dapr/config.yaml` in Linux/MacOS and `%USERPROFILE%\.dapr\config.yaml` in Windows. +### Set up application configuration -A Dapr sidecar can also apply a configuration by using a `--config` flag to the file path with `dapr run` CLI command. +You can set up application configuration either in self-hosted or Kubernetes mode. -#### Kubernetes sidecar +{{< tabs "Self-hosted" Kubernetes >}} -In Kubernetes mode the Dapr configuration is a Configuration resource, that is applied to the cluster. For example: + +{{% codetab %}} + +In self-hosted mode, the Dapr configuration is a [configuration file]({{< ref configuration-schema.md >}}) - for example, `config.yaml`. By default, the Dapr sidecar looks in the default Dapr folder for the runtime configuration: +- Linux/MacOS: `$HOME/.dapr/config.yaml` +- Windows: `%USERPROFILE%\.dapr\config.yaml` + +An application can also apply a configuration by using a `--config` flag to the file path with `dapr run` CLI command. + +{{% /codetab %}} + + +{{% codetab %}} + +In Kubernetes mode, the Dapr configuration is a Configuration resource that is applied to the cluster. For example: ```bash kubectl apply -f myappconfig.yaml ``` -You can use the Dapr CLI to list the Configuration resources +You can use the Dapr CLI to list the Configuration resources for applications. ```bash dapr configurations -k @@ -40,11 +54,15 @@ A Dapr sidecar can apply a specific configuration by using a `dapr.io/config` an dapr.io/config: "myappconfig" ``` -Note: There are more [Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service.
+> **Note:** [See all Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service. -### Sidecar configuration settings +{{% /codetab %}} -The following configuration settings can be applied to Dapr application sidecars: +{{< /tabs >}} + +### Application configuration settings + +The following menu includes all of the configuration settings you can set on the sidecar. - [Tracing](#tracing) - [Metrics](#metrics) @@ -68,7 +86,7 @@ The `tracing` section under the `Configuration` spec contains the following prop tracing: samplingRate: "1" otel: - endpointAddress: "https://..." + endpointAddress: "otelcollector.observability.svc.cluster.local:4317" zipkin: endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans" ``` The following table lists the properties for tracing: |--------------|--------|-------------| | `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. | `stdout` | bool | True write more verbose information to the traces -| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to +| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to. This may or may not require the `https://` or `http://` prefix, depending on your OTEL provider. | `otel.isSecure` | bool | Is the connection to the endpoint address encrypted | `otel.protocol` | string | Set to `http` or `grpc` protocol -| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to +| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to. This should include the protocol (http:// or https://) on the endpoint. -`samplingRate` is used to enable or disable the tracing. To disable the sampling rate , -set `samplingRate : "0"` in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span should be sampled or not based on value. `samplingRate : "1"` samples all traces. By default, the sampling rate is (0.0001) or 1 in 10,000 traces. +##### `samplingRate` -The OpenTelemetry (otel) endpoint can also be configured via an environment variables. The presence of the OTEL_EXPORTER_OTLP_ENDPOINT environment variable +`samplingRate` is used to enable or disable tracing. The valid range of `samplingRate` is between `0` and `1` inclusive. The sampling rate determines whether a trace span should be sampled or not based on its value. + +`samplingRate : "1"` samples all traces. By default, the sampling rate is `0.0001`, or 1 in 10,000 traces. + +To disable tracing, set `samplingRate : "0"` in the configuration. + +##### `otel` + +The OpenTelemetry (`otel`) endpoint can also be configured via an environment variable. The presence of the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable turns on tracing for the sidecar. | Environment Variable | Description | @@ -100,9 +125,9 @@ See [Observability distributed tracing]({{< ref "tracing-overview.md" >}}) for m #### Metrics -The metrics section can be used to enable or disable metrics for an application. +The `metrics` section under the `Configuration` spec can be used to enable or disable metrics for an application. 
-The `metrics` section under the `Configuration` spec contains the following properties: +The `metrics` section contains the following properties: ```yml metrics: @@ -120,9 +145,12 @@ metrics: - /payments/{paymentID}/refund - /payments/{paymentID}/details excludeVerbs: false + recordErrorCodes: true ``` -In the examples above this path filter `/orders/{orderID}/items/{itemID}` would return a single metric count matching all the orderIDs and all the itemIDs rather than multiple metrics for each itemID. For more information see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}) +In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return _a single metric count_ matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}). + +The above example also enables [recording error code metrics]({{< ref "metrics-overview.md#configuring-metrics-for-error-codes" >}}), which is disabled by default. The following table lists the properties for metrics: @@ -135,7 +163,7 @@ The following table lists the properties for metrics: | `http.pathMatching` | array | Array of paths for path matching, allowing users to define matching paths to manage cardinality. | | `http.excludeVerbs` | boolean | When set to true (default is false), the Dapr HTTP server ignores each request HTTP verb when building the method metric label. | -To further help managing cardinality, path matching allows specified paths matched according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption. +To further help manage cardinality, path matching allows you to match specified paths according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption. Using rules, you can set regular expressions for every metric exposed by the Dapr sidecar. For example: @@ -154,9 +182,9 @@ See [metrics documentation]({{< ref "metrics-overview.md" >}}) for more informat #### Logging -The logging section can be used to configure how logging works in the Dapr Runtime. +The `logging` section under the `Configuration` spec is used to configure how logging works in the Dapr Runtime. -The `logging` section under the `Configuration` spec contains the following properties: +The `logging` section contains the following properties: ```yml logging: @@ -178,8 +206,7 @@ See [logging documentation]({{< ref "logs.md" >}}) for more information. #### Middleware -Middleware configuration set named HTTP pipeline middleware handlers -The `httpPipeline` and the `appHttpPipeline` section under the `Configuration` spec contains the following properties: +Middleware configuration sets named HTTP pipeline middleware handlers. 
The `httpPipeline` and the `appHttpPipeline` section under the `Configuration` spec contain the following properties: ```yml httpPipeline: # for incoming http calls @@ -203,13 +230,13 @@ The following table lists the properties for HTTP handlers: | `name` | string | Name of the middleware component | `type` | string | Type of middleware component -See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information +See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information. #### Name resolution component -You can set name resolution component to use within the configuration YAML. For example, to set the `spec.nameResolution.component` property to `"sqlite"`, pass configuration options in the `spec.nameResolution.configuration` dictionary as shown below. +You can set name resolution components to use within the configuration file. For example, to set the `spec.nameResolution.component` property to `"sqlite"`, pass configuration options in the `spec.nameResolution.configuration` dictionary as shown below. -This is the basic example of a configuration resource: +This is a basic example of a configuration resource: ```yaml apiVersion: dapr.io/v1alpha1 @@ -226,7 +253,7 @@ spec: For more information, see: - [The name resolution component documentation]({{< ref supported-name-resolution >}}) for more examples. -- - [The Configuration YAML documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component. +- [The Configuration file documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component. #### Scope secret store access @@ -234,11 +261,11 @@ See the [Scoping secrets]({{< ref "secret-scope.md" >}}) guide for information a #### Access Control allow lists for building block APIs -See the [selectively enable Dapr APIs on the Dapr sidecar]({{< ref "api-allowlist.md" >}}) guide for information and examples on how to set ACLs on the building block APIs lists. +See the guide for [selectively enabling Dapr APIs on the Dapr sidecar]({{< ref "api-allowlist.md" >}}) for information and examples on how to set access control allow lists (ACLs) on the building block APIs lists. #### Access Control allow lists for service invocation API -See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists with ACLs which using service invocation API. +See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists with ACLs which use the service invocation API. #### Disallow usage of certain component types @@ -258,13 +285,23 @@ spec: - secretstores.local.file ``` -You can optionally specify a version to disallow by adding it at the end of the component name. For example, `state.in-memory/v1` disables initializing components of type `state.in-memory` and version `v1`, but does not disable a (hypothetical) `v2` version of the component. +Optionally, you can specify a version to disallow by adding it at the end of the component name. For example, `state.in-memory/v1` disables initializing components of type `state.in-memory` and version `v1`, but does not disable a (hypothetical) `v2` version of the component. -> Note: One special note applies to the component type `secretstores.kubernetes`. When you add that component to the denylist, Dapr forbids the creation of _additional_ components of type `secretstores.kubernetes`. 
However, it does not disable the built-in Kubernetes secret store, which is created by Dapr automatically and is used to store secrets specified in Components specs. If you want to disable the built-in Kubernetes secret store, you need to use the `dapr.io/disable-builtin-k8s-secret-store` [annotation]({{< ref arguments-annotations-overview.md >}}). +{{% alert title="Note" color="primary" %}} + When you add the component type `secretstores.kubernetes` to the denylist, Dapr forbids the creation of _additional_ components of type `secretstores.kubernetes`. + + However, it does not disable the built-in Kubernetes secret store, which is: + - Created by Dapr automatically + - Used to store secrets specified in Components specs + + If you want to disable the built-in Kubernetes secret store, you need to use the `dapr.io/disable-builtin-k8s-secret-store` [annotation]({{< ref arguments-annotations-overview.md >}}). +{{% /alert %}} #### Turning on preview features -See the [preview features]({{< ref "preview-features.md" >}}) guide for information and examples on how to opt-in to preview features for a release. Preview feature enable new capabilities to be added that still need more time until they become generally available (GA) in the runtime. +See the [preview features]({{< ref "preview-features.md" >}}) guide for information and examples on how to opt in to preview features for a release. + +Enabling preview features unlocks new capabilities for dev/test, since these features still need more time before becoming generally available (GA) in the runtime. ### Example sidecar configuration @@ -316,7 +353,9 @@ spec: ## Control plane configuration -There is a single configuration file called `daprsystem` installed with the Dapr control plane system services that applies global settings. This is only set up when Dapr is deployed to Kubernetes. +A single configuration file called `daprsystem` is installed with the Dapr control plane system services that applies global settings. + +> **This is only set up when Dapr is deployed to Kubernetes.** ### Control plane configuration settings @@ -353,3 +392,7 @@ spec: allowedClockSkew: 15m workloadCertTTL: 24h ``` + +## Next steps + +{{< button text="Learn about concurrency and rate limits" page="control-concurrency" >}} diff --git a/daprdocs/content/en/operations/configuration/control-concurrency.md b/daprdocs/content/en/operations/configuration/control-concurrency.md index 85b240c19..8bfdc044c 100644 --- a/daprdocs/content/en/operations/configuration/control-concurrency.md +++ b/daprdocs/content/en/operations/configuration/control-concurrency.md @@ -3,30 +3,57 @@ type: docs title: "How-To: Control concurrency and rate limit applications" linkTitle: "Concurrency & rate limits" weight: 2000 -description: "Control how many requests and events will invoke your application simultaneously" +description: "Learn how to control how many requests and events can invoke your application simultaneously" --- -A common scenario in distributed computing is to only allow for a given number of requests to execute concurrently. -Using Dapr, you can control how many requests and events will invoke your application simultaneously. +Typically, in distributed computing, you may only want to allow for a given number of requests to execute concurrently. Using Dapr's `app-max-concurrency`, you can control how many requests and events can invoke your application simultaneously. 
-*Note that this rate limiting is guaranteed for every event that's coming from Dapr, meaning Pub/Sub events, direct invocation from other services, bindings events etc. Dapr can't enforce the concurrency policy on requests that are coming to your app externally.* +By default, `app-max-concurrency` is set to `-1`, meaning no concurrency limit is enforced. -*Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** middleware. However, there is an important difference between the two approaches. The rate limit middleware is time bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point of time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}). * +## Different approaches -Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting ". +While this guide focuses on `app-max-concurrency`, you can also limit request rate per second using the **`middleware.http.ratelimit`** middleware. However, it's important to understand the difference between the two approaches: + +- `middleware.http.ratelimit`: Time bound and limits the number of requests per second +- `app-max-concurrency`: Specifies the max number of concurrent requests (and events) at any point of time. + +See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}) for more information about that approach. + +## Demo + +Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting.
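+
+For comparison between the two approaches, a rate limit middleware is defined as its own `Component` resource and referenced from an HTTP pipeline. The following is a minimal sketch only; the component name `ratelimit` and the limit of `10` requests per second are illustrative values, not defaults:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: ratelimit
+spec:
+  type: middleware.http.ratelimit
+  version: v1
+  metadata:
+  # requests per second allowed before further requests are rejected
+  - name: maxRequestsPerSecond
+    value: 10
+```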
-## Setting app-max-concurrency +## Configure `app-max-concurrency` -Without using Dapr, a developer would need to create some sort of a semaphore in the application and take care of acquiring and releasing it. -Using Dapr, there are no code changes needed to an app. +Without using Dapr, you would need to create some sort of a semaphore in the application and take care of acquiring and releasing it. -### Setting app-max-concurrency in Kubernetes +Using Dapr, you don't need to make any code changes to your application. -To set app-max-concurrency in Kubernetes, add the following annotation to your pod: +Select how you'd like to configure `app-max-concurrency`. + +{{< tabs "CLI" Kubernetes >}} + + +{{% codetab %}} + +To set concurrency limits with the Dapr CLI for running on your local dev machine, add the `app-max-concurrency` flag: + +```bash +dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py +``` + +The above example effectively turns your app into a sequential processing service. + +{{% /codetab %}} + + +{{% codetab %}} + +To configure concurrency limits in Kubernetes, add the following annotation to your pod: ```yaml apiVersion: apps/v1 kind: Deployment @@ -50,15 +77,22 @@ spec: dapr.io/app-id: "nodesubscriber" dapr.io/app-port: "3000" dapr.io/app-max-concurrency: "1" -... +#... ``` -### Setting app-max-concurrency using the Dapr CLI +{{% /codetab %}} -To set app-max-concurrency with the Dapr CLI for running on your local dev machine, add the `app-max-concurrency` flag: +{{< /tabs >}} -```bash -dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py -``` +## Limitations -The above examples will effectively turn your app into a single concurrent service. +### Controlling concurrency on external requests +Rate limiting is guaranteed for every event coming _from_ Dapr, including pub/sub events, direct invocation from other services, bindings events, etc. However, Dapr can't enforce the concurrency policy on requests that are coming _to_ your app externally. + +## Related links + +[Arguments and annotations]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Limit secret store access" page="secret-scope" >}} diff --git a/daprdocs/content/en/operations/configuration/environment-variables-secrets.md b/daprdocs/content/en/operations/configuration/environment-variables-secrets.md new file mode 100644 index 000000000..960ee4771 --- /dev/null +++ b/daprdocs/content/en/operations/configuration/environment-variables-secrets.md @@ -0,0 +1,122 @@ +--- +type: docs +title: "How-To: Configure Environment Variables from Secrets for Dapr sidecar" +linkTitle: "Environment Variables from Secrets" +weight: 7500 +description: "Inject Environment Variables from Kubernetes Secrets into Dapr sidecar" +--- +In special cases, the Dapr sidecar needs an environment variable injected into it. This use case may be required by a component, a third-party library, or a module that uses environment variables to configure that component or customize its behavior. This can be useful for both production and non-production environments. + +## Overview +In Dapr 1.15, the new `dapr.io/env-from-secret` annotation was introduced, [similar to `dapr.io/env`]({{< ref arguments-annotations-overview >}}). +With this annotation, you can inject an environment variable into the Dapr sidecar, with a value from a secret. 
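+
+For example, the annotation below would surface the value of a hypothetical Kubernetes secret named `auth-headers-secret` to the sidecar as an `AUTH_TOKEN` environment variable; the exact value format is described next:
+
+```yaml
+annotations:
+  # format: <ENV_VAR_NAME>=<SECRET_NAME>
+  dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret"
+```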
+ +### Annotation format +The values of this annotation are formatted like so: + +- Single key secret: `<ENV_VAR_NAME>=<SECRET_NAME>` +- Multi key/value secret: `<ENV_VAR_NAME>=<SECRET_NAME>:<SECRET_KEY>` + +`<ENV_VAR_NAME>` is required to follow the `C_IDENTIFIER` format and is captured by the `[A-Za-z_][A-Za-z0-9_]*` regex: +- Must start with a letter or underscore +- The rest of the identifier contains letters, digits, or underscores + +The `name` field is required due to the restriction of the `secretKeyRef`, so both `name` and `key` must be set. [Learn more from the "env.valueFrom.secretKeyRef.name" section in this Kubernetes documentation.](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables) +In this case, Dapr sets both to the same value. + +## Configuring single key secret environment variable +In the following example, the `dapr.io/env-from-secret` annotation is added to the Deployment. +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nodeapp +spec: + template: + metadata: + annotations: + dapr.io/enabled: "true" + dapr.io/app-id: "nodeapp" + dapr.io/app-port: "3000" + dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret" + spec: + containers: + - name: node + image: dapriosamples/hello-k8s-node:latest + ports: + - containerPort: 3000 + imagePullPolicy: Always +``` + +The `dapr.io/env-from-secret` annotation with a value of `"AUTH_TOKEN=auth-headers-secret"` is injected as: + +```yaml +env: +- name: AUTH_TOKEN + valueFrom: + secretKeyRef: + name: auth-headers-secret + key: auth-headers-secret +``` +This requires the secret to have both `name` and `key` fields with the same value, "auth-headers-secret". + +**Example secret** + +> **Note:** The following example is for demo purposes only. It's not recommended to store secrets in plain text. + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: auth-headers-secret +type: Opaque +stringData: + auth-headers-secret: "AUTH=mykey" +``` + +## Configuring multi-key secret environment variable + +In the following example, the `dapr.io/env-from-secret` annotation is added to the Deployment. +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nodeapp +spec: + template: + metadata: + annotations: + dapr.io/enabled: "true" + dapr.io/app-id: "nodeapp" + dapr.io/app-port: "3000" + dapr.io/env-from-secret: "AUTH_TOKEN=auth-headers-secret:auth-header-value" + spec: + containers: + - name: node + image: dapriosamples/hello-k8s-node:latest + ports: + - containerPort: 3000 + imagePullPolicy: Always +``` +The `dapr.io/env-from-secret` annotation with a value of `"AUTH_TOKEN=auth-headers-secret:auth-header-value"` is injected as: +```yaml +env: +- name: AUTH_TOKEN + valueFrom: + secretKeyRef: + name: auth-headers-secret + key: auth-header-value +``` + +**Example secret** + + > **Note:** The following example is for demo purposes only. It's not recommended to store secrets in plain text. 
+```yaml +apiVersion: v1 +kind: Secret +metadata: + name: auth-headers-secret +type: Opaque +stringData: + auth-header-value: "AUTH=mykey" +``` \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/grpc.md b/daprdocs/content/en/operations/configuration/grpc.md index 59c51a4ce..5ab2df15f 100644 --- a/daprdocs/content/en/operations/configuration/grpc.md +++ b/daprdocs/content/en/operations/configuration/grpc.md @@ -3,20 +3,21 @@ type: docs title: "How-To: Configure Dapr to use gRPC" linkTitle: "Use gRPC interface" weight: 5000 -description: "How to configure Dapr to use gRPC for low-latency, high performance scenarios" +description: "Configure Dapr to use gRPC for low-latency, high performance scenarios" --- -Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. - -You can find a list of auto-generated clients [here]({{< ref sdks >}}). +Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. [You can see the full list of auto-generated clients (Dapr SDKs)]({{< ref sdks >}}). The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC. -In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implements the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto) +Not only can you call Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implement the [Dapr `appcallback` service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto) ## Configuring Dapr to communicate with an app via gRPC -### Self hosted +{{< tabs "Self-hosted" Kubernetes >}} + + +{{% codetab %}} When running in self hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app: @@ -25,8 +26,10 @@ dapr run --app-protocol grpc --app-port 5005 node app.js ``` This tells Dapr to communicate with your app via gRPC over port `5005`. +{{% /codetab %}} -### Kubernetes + +{{% codetab %}} On Kubernetes, set the following annotations in your deployment YAML: @@ -52,5 +55,13 @@ spec: dapr.io/app-id: "myapp" dapr.io/app-protocol: "grpc" dapr.io/app-port: "5005" -... -``` \ No newline at end of file +#... +``` + +{{% /codetab %}} + +{{< /tabs >}} + +## Next steps + +{{< button text="Handle large HTTP header sizes" page="increase-read-buffer-size" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md b/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md index a8528e09b..9fcb80c4f 100644 --- a/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md +++ b/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md @@ -1,20 +1,23 @@ --- type: docs -title: "How-To: Handle large http header size" +title: "How-To: Handle large HTTP header size" linkTitle: "HTTP header size" weight: 6000 -description: "Configure a larger http read buffer size" +description: "Configure a larger HTTP read buffer size" --- -Dapr has a default limit of 4KB for the http header read buffer size. 
When sending http headers that are bigger than the default 4KB, you can increase this value. Otherwise, you may encounter a `Too big request header` service invocation error. You can change the http header size by using the `dapr.io/http-read-buffer-size` annotation or `--dapr-http-read-buffer-size` flag when using the CLI. - +Dapr has a default limit of 4KB for the HTTP header read buffer size. If you're sending HTTP headers larger than the default 4KB, you may encounter a `Too big request header` service invocation error. +You can increase the HTTP header size by using: +- The `dapr.io/http-read-buffer-size` annotation, or +- The `--dapr-http-read-buffer-size` flag when using the CLI. {{< tabs Self-hosted Kubernetes >}} + {{% codetab %}} -When running in self hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use non-default http header size: +When running in self-hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use non-default http header size: ```bash dapr run --dapr-http-read-buffer-size 16 node app.js @@ -23,10 +26,11 @@ This tells Dapr to set maximum read buffer size to `16` KB. {{% /codetab %}} - + {{% codetab %}} On Kubernetes, set the following annotations in your deployment YAML: + ```yaml apiVersion: apps/v1 kind: Deployment @@ -49,7 +53,7 @@ spec: dapr.io/app-id: "myapp" dapr.io/app-port: "8000" dapr.io/http-read-buffer-size: "16" -... +#... ``` {{% /codetab %}} @@ -57,4 +61,8 @@ spec: {{< /tabs >}} ## Related links -- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) +[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Handle large HTTP body requests" page="increase-request-size" >}} diff --git a/daprdocs/content/en/operations/configuration/increase-request-size.md b/daprdocs/content/en/operations/configuration/increase-request-size.md index 2faadecf0..25461e3e8 100644 --- a/daprdocs/content/en/operations/configuration/increase-request-size.md +++ b/daprdocs/content/en/operations/configuration/increase-request-size.md @@ -6,15 +6,16 @@ weight: 6000 description: "Configure http requests that are bigger than 4 MB" --- -By default Dapr has a limit for the request body size which is set to 4 MB, however you can change this by defining `dapr.io/http-max-request-size` annotation or `--dapr-http-max-request-size` flag. - - +By default, Dapr has a limit for the request body size, set to 4MB. You can change this by defining: +- The `dapr.io/http-max-request-size` annotation, or +- The `--dapr-http-max-request-size` flag. {{< tabs Self-hosted Kubernetes >}} + {{% codetab %}} -When running in self hosted mode, use the `--dapr-http-max-request-size` flag to configure Dapr to use non-default request body size: +When running in self-hosted mode, use the `--dapr-http-max-request-size` flag to configure Dapr to use non-default request body size: ```bash dapr run --dapr-http-max-request-size 16 node app.js @@ -23,10 +24,11 @@ This tells Dapr to set maximum request body size to `16` MB. {{% /codetab %}} - + {{% codetab %}} On Kubernetes, set the following annotations in your deployment YAML: + ```yaml apiVersion: apps/v1 kind: Deployment @@ -49,7 +51,7 @@ spec: dapr.io/app-id: "myapp" dapr.io/app-port: "8000" dapr.io/http-max-request-size: "16" -... +#... 
``` {{% /codetab %}} @@ -57,4 +59,9 @@ spec: {{< /tabs >}} ## Related links -- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Install sidecar certificates" page="install-certificates" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/install-certificates.md b/daprdocs/content/en/operations/configuration/install-certificates.md index 071753ef9..7c2b79f8c 100644 --- a/daprdocs/content/en/operations/configuration/install-certificates.md +++ b/daprdocs/content/en/operations/configuration/install-certificates.md @@ -6,20 +6,26 @@ weight: 6500 description: "Configure the Dapr sidecar container to trust certificates" --- -The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted. For example, using an HTTP binding or configuring an outbound proxy for the sidecar. Both certificate authority (CA) certificates and leaf certificates are supported. +The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted, such as: +- Using an HTTP binding +- Configuring an outbound proxy for the sidecar + +Both certificate authority (CA) certificates and leaf certificates are supported. {{< tabs Self-hosted Kubernetes >}} + {{% codetab %}} -When the sidecar is not running inside a container, certificates must be directly installed on the host operating system. +You can make the following configurations when the sidecar is running as a container. -When the sidecar is running as a container: -1. Certificates must be available to the sidecar container. This can be configured using volume mounts. -1. The environment variable `SSL_CERT_DIR` must be set in the sidecar container, pointing to the directory containing the certificates. -1. For Windows containers, the container needs to run with administrator privileges to be able to install the certificates. +1. Configure certificates to be available to the sidecar container using volume mounts. +1. Point the environment variable `SSL_CERT_DIR` in the sidecar container to the directory containing the certificates. + +> **Note:** For Windows containers, make sure the container is running with administrator privileges so it can install the certificates. + +The following example uses Docker Compose to install certificates (present locally in the `./certificates` directory) in the sidecar container: -Below is an example that uses Docker Compose to install certificates (present locally in the `./certificates` directory) in the sidecar container: ```yaml version: '3' services: @@ -39,16 +45,22 @@ services: # user: ContainerAdministrator ``` +> **Note:** When the sidecar is not running inside a container, certificates must be directly installed on the host operating system. + {{% /codetab %}} - + {{% codetab %}} On Kubernetes: -1. Certificates must be available to the sidecar container using a volume mount. -1. The environment variable `SSL_CERT_DIR` must be set in the sidecar container, pointing to the directory containing the certificates. -The YAML below is an example of a deployment that attaches a pod volume to the sidecar, and sets `SSL_CERT_DIR` to install the certificates. +1. 
Configure certificates to be available to the sidecar container using a volume mount. +1. Point the environment variable `SSL_CERT_DIR` in the sidecar container to the directory containing the certificates. + +The following example YAML shows a deployment that: +- Attaches a pod volume to the sidecar +- Sets `SSL_CERT_DIR` to install the certificates + ```yaml apiVersion: apps/v1 kind: Deployment @@ -77,23 +89,21 @@ spec: - name: certificates-vol hostPath: path: /certificates -... +#... ``` -**Note**: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers. +> **Note**: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers. {{% /codetab %}} {{< /tabs >}} -
+After following these steps, all the certificates in the directory pointed to by `SSL_CERT_DIR` are installed. -All the certificates in the directory pointed by `SSL_CERT_DIR` are installed. +- **On Linux containers:** All the certificate extensions supported by OpenSSL are supported. [Learn more.](https://www.openssl.org/docs/man1.1.1/man1/openssl-rehash.html) +- **On Windows container:** All the certificate extensions supported by `certoc.exe` are supported. [See certoc.exe present in Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore). -1. On Linux containers, all the certificate extensions supported by OpenSSL are supported. For more information, see https://www.openssl.org/docs/man1.1.1/man1/openssl-rehash.html -1. On Windows container, all the certificate extensions supported by certoc.exe are supported. For more information, see certoc.exe present in [Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore) -## Example +## Demo Watch the demo on using installing SSL certificates and securely using the HTTP binding in community call 64: @@ -106,3 +116,7 @@ Watch the demo on using installing SSL certificates and securely using the HTT - [HTTP binding spec]({{< ref http.md >}}) - [(Kubernetes) How-to: Mount Pod volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}}) - [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Enable preview features" page="preview-features" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/invoke-allowlist.md b/daprdocs/content/en/operations/configuration/invoke-allowlist.md index 725c83084..f9afe0299 100644 --- a/daprdocs/content/en/operations/configuration/invoke-allowlist.md +++ b/daprdocs/content/en/operations/configuration/invoke-allowlist.md @@ -3,71 +3,87 @@ type: docs title: "How-To: Apply access control list configuration for service invocation" linkTitle: "Service Invocation access control" weight: 4000 -description: "Restrict what operations *calling* applications can perform, via service invocation, on the *called* application" +description: "Restrict what operations calling applications can perform" --- -Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application. To limit access to a called applications from specific operations and HTTP verbs from the calling applications, you can define an access control policy specification in configuration. +Using access control, you can configure policies that restrict what operations _calling_ applications can perform, via service invocation, on the _called_ application. You can define an access control policy specification in the Configuration schema to limit access: +- To a called application from specific operations, and +- To HTTP verbs from the calling applications. -An access control policy is specified in configuration and be applied to Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applications to access to the called app. +An access control policy is specified in Configuration and applied to the Dapr sidecar for the _called_ application. 
Access to the called app is based on the matched policy action. -## Concepts +You can provide a default global action for all calling applications. If no access control policy is specified, the default behavior is to allow all calling applications to access the called app. -**TrustDomain** - A "trust domain" is a logical group to manage trust relationships. Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert. +[See examples of access policies.](#example-scenarios) -**App Identity** - Dapr requests the sentry service to generate a [SPIFFE](https://spiffe.io/) id for all applications and this id is attached in the TLS cert. The SPIFFE id is of the format: `**spiffe://\<trustdomain>/ns/\<namespace>/\<appid>**`. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched. +## Terminology + +### `trustDomain` + +A "trust domain" is a logical group that manages trust relationships. Every application is assigned a trust domain, which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert. + +### App Identity + +Dapr requests the sentry service to generate a [SPIFFE](https://spiffe.io/) ID for all applications. This ID is attached in the TLS cert. + +The SPIFFE ID is of the format: `**spiffe://\<trustdomain>/ns/\<namespace>/\<appid>**`. + +For matching policies, the trust domain, namespace, and app ID values of the calling app are extracted from the SPIFFE ID in the TLS cert of the calling app. These values are matched against the trust domain, namespace, and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched. ## Configuration properties -The following tables lists the different properties for access control, policies and operations: +The following tables list the different properties for access control, policies, and operations: ### Access Control | Property | Type | Description | |---------------|--------|-------------| -| defaultAction | string | Global default action when no other policy is matched -| trustDomain | string | Trust domain assigned to the application. Default is "public". -| policies | string | Policies to determine what operations the calling app can do on the called app +| `defaultAction` | string | Global default action when no other policy is matched +| `trustDomain` | string | Trust domain assigned to the application. Default is "public". +| `policies` | string | Policies to determine what operations the calling app can do on the called app ### Policies | Property | Type | Description | |---------------|--------|-------------| -| app | string | AppId of the calling app to allow/deny service invocation from -| namespace | string | Namespace value that needs to be matched with the namespace of the calling app -| trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app. 
Default is "public" -| defaultAction | string | App level default action in case the app is found but no specific operation is matched -| operations | string | operations that are allowed from the calling app +| `app` | string | AppId of the calling app to allow/deny service invocation from +| `namespace` | string | Namespace value that needs to be matched with the namespace of the calling app +| `trustDomain` | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public" +| `defaultAction` | string | App level default action in case the app is found but no specific operation is matched +| `operations` | string | Operations that are allowed from the calling app ### Operations | Property | Type | Description | | -------- | ------ | ------------------------------------------------------------ | -| name | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used in a path to match. Wildcard "\**" can be used to match under multiple paths. | -| httpVerb | list | List specific http verbs that can be used by the calling app. Wildcard "\*" can be used to match any http verb. Unused for grpc invocation. | -| action | string | Access modifier. Accepted values "allow" (default) or "deny" | +| `name` | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used in a path to match. Wildcard "\**" can be used to match under multiple paths. | +| `httpVerb` | list | List specific http verbs that can be used by the calling app. Wildcard "\*" can be used to match any http verb. Unused for grpc invocation. | +| `action` | string | Access modifier. Accepted values "allow" (default) or "deny" | ## Policy rules -1. If no access policy is specified, the default behavior is to allow all apps to access to all methods on the called app -2. If no global default action is specified and no app specific policies defined, the empty access policy is treated like no access policy specified and the default behavior is to allow all apps to access to all methods on the called app. -3. If no global default action is specified but some app specific policies have been defined, then we resort to a more secure option of assuming the global default action to deny access to all methods on the called app. -4. If an access policy is defined and if the incoming app credentials cannot be verified, then the global default action takes effect. -5. If either the trust domain or namespace of the incoming app do not match the values specified in the app policy, the app policy is ignored and the global default action takes effect. +1. If no access policy is specified, the default behavior is to allow all apps to access all methods on the called app. +1. If no global default action is specified and no app-specific policies are defined, the empty access policy is treated like no access policy is specified. The default behavior is to allow all apps to access all methods on the called app. +1. If no global default action is specified but some app-specific policies have been defined, Dapr resorts to the more secure option of assuming the global default action is to deny access to all methods on the called app (see the sketch after this list). +1. If an access policy is defined and if the incoming app credentials cannot be verified, then the global default action takes effect. +1. If either the trust domain or namespace of the incoming app does not match the values specified in the app policy, the app policy is ignored and the global default action takes effect. 
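+
+As a sketch of rule 3 above, the following Configuration defines an app-specific policy but omits the global `defaultAction`; Dapr then assumes a global default of deny, so only `app1` is granted access. The resource name `appconfig` is illustrative:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Configuration
+metadata:
+  name: appconfig
+spec:
+  accessControl:
+    # no global defaultAction; with app policies defined, Dapr assumes deny
+    trustDomain: "public"
+    policies:
+    - appId: app1
+      defaultAction: allow
+      trustDomain: 'public'
+      namespace: "default"
+```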
## Policy priority The action corresponding to the most specific policy matched takes effect as ordered below: 1. Specific HTTP verbs in the case of HTTP or the operation level action in the case of GRPC. -2. The default action at the app level -3. The default action at the global level +1. The default action at the app level +1. The default action at the global level ## Example scenarios Below are some example scenarios for using access control list for service invocation. See [configuration guidance]({{< ref "configuration-concept.md" >}}) to understand the available configuration settings for an application sidecar. -Scenario 1: Deny access to all apps except where trustDomain = public, namespace = default, appId = app1 +### Scenario 1: -With this configuration, all calling methods with appId = app1 are allowed and all other invocation requests from other applications are denied +Deny access to all apps except where `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1` + +With this configuration, all calling methods with `appId` = `app1` are allowed. All other invocation requests from other applications are denied. ```yaml apiVersion: dapr.io/v1alpha1 @@ -85,9 +101,11 @@ spec: namespace: "default" ``` -Scenario 2: Deny access to all apps except trustDomain = public, namespace = default, appId = app1, operation = op1 +### Scenario 2: -With this configuration, only method op1 from appId = app1 is allowed and all other method requests from all other apps, including other methods on app1, are denied +Deny access to all apps except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `op1` + +With this configuration, only the method `op1` from `appId` = `app1` is allowed. All other method requests from all other apps, including other methods on `app1`, are denied. ```yaml apiVersion: dapr.io/v1alpha1 @@ -109,12 +127,16 @@ spec: action: allow ``` -Scenario 3: Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched +### Scenario 3: -With this configuration, the only scenarios below are allowed access and and all other method requests from all other apps, including other methods on app1 or app2, are denied -* trustDomain = public, namespace = default, appID = app1, operation = op1, http verb = POST/PUT -* trustDomain = "myDomain", namespace = "ns1", appID = app2, operation = op2 and application protocol is GRPC -, only HTTP verbs POST/PUT on method op1 from appId = app1 are allowed and all other method requests from all other apps, including other methods on app1, are denied +Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched + +With this configuration, only the scenarios below are allowed access. All other method requests from all other apps, including other methods on `app1` or `app2`, are denied. + +- `trustDomain` = `public`, `namespace` = `default`, `appID` = `app1`, `operation` = `op1`, `httpVerb` = `POST`/`PUT` +- `trustDomain` = `"myDomain"`, `namespace` = `"ns1"`, `appID` = `app2`, `operation` = `op2` and application protocol is GRPC + +Only the `httpVerb` `POST`/`PUT` on method `op1` from `appId` = `app1` are allowed. All other method requests from all other apps, including other methods on `app1`, are denied. 
```yaml apiVersion: dapr.io/v1alpha1 @@ -143,7 +165,9 @@ spec: action: allow ``` -Scenario 4: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/*, all http verbs +### Scenario 4: + +Allow access to all methods except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `/op1/*`, all `httpVerb` ```yaml apiVersion: dapr.io/v1alpha1 @@ -165,9 +189,11 @@ spec: action: deny ``` -Scenario 5: Allow access to all methods for trustDomain = public, namespace = ns1, appId = app1 and deny access to all methods for trustDomain = public, namespace = ns2, appId = app1 +### Scenario 5: -This scenario shows how applications with the same app ID but belonging to different namespaces can be specified +Allow access to all methods for `trustDomain` = `public`, `namespace` = `ns1`, `appId` = `app1` and deny access to all methods for `trustDomain` = `public`, `namespace` = `ns2`, `appId` = `app1` + +This scenario shows how applications with the same app ID while belonging to different namespaces can be specified. ```yaml apiVersion: dapr.io/v1alpha1 @@ -189,7 +215,9 @@ spec: namespace: "ns2" ``` -Scenario 6: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/**/a, all http verbs +### Scenario 6: + +Allow access to all methods except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `/op1/**/a`, all `httpVerb` ```yaml apiVersion: dapr.io/v1alpha1 @@ -211,14 +239,15 @@ spec: action: deny ``` -## Hello world examples +## "hello world" examples -These examples show how to apply access control to the [hello world](https://github.com/dapr/quickstarts#quickstarts) quickstart samples where a python app invokes a node.js app. -Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE id for authentication, which means the Sentry service either has to be running locally or deployed to your hosting environment such as a Kubernetes cluster. +In these examples, you learn how to apply access control to the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials) tutorials. -The nodeappconfig example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the python app is in the `myDomain` trust domain and `default` namespace. The nodeapp is in the `public` trust domain. +Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE ID for authentication. This means the Sentry service either has to be running locally or deployed to your hosting environment, such as a Kubernetes cluster. -**nodeappconfig.yaml** +The `nodeappconfig` example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the Python app is in the `myDomain` trust domain and `default` namespace. The Node.js app is in the `public` trust domain. + +### nodeappconfig.yaml ```yaml apiVersion: dapr.io/v1alpha1 @@ -242,7 +271,7 @@ spec: action: deny ``` -**pythonappconfig.yaml** +### pythonappconfig.yaml ```yaml apiVersion: dapr.io/v1alpha1 @@ -258,95 +287,119 @@ spec: ``` ### Self-hosted mode -This example uses the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) quickstart. 
-The following steps run the Sentry service locally with mTLS enabled, set up necessary environment variables to access certificates, and then launch both the node app and python app each referencing the Sentry service to apply the ACLs. +When walking through this tutorial, you: +- Run the Sentry service locally with mTLS enabled +- Set up necessary environment variables to access certificates +- Launch both the Node app and Python app each referencing the Sentry service to apply the ACLs - 1. Follow these steps to run the [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled +#### Prerequisites - 2. In a command prompt, set these environment variables: +- Become familiar with running [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled +- Clone the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) tutorial + +#### Run the Node.js app + +1. In a command prompt, set these environment variables: {{< tabs "Linux/MacOS" Windows >}} {{% codetab %}} - ```bash - export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` - export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` - export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` - export NAMESPACE=default - ``` + + ```bash + export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` + export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` + export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` + export NAMESPACE=default + ``` {{% /codetab %}} - {{% codetab %}} - ```powershell - $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt) - $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt) - $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key) - $env:NAMESPACE="default" - ``` + {{% codetab %}} + + ```powershell + $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt) + $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt) + $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key) + $env:NAMESPACE="default" + ``` {{% /codetab %}} {{< /tabs >}} -3. Run daprd to launch a Dapr sidecar for the node.js app with mTLS enabled, referencing the local Sentry service: +1. Run daprd to launch a Dapr sidecar for the Node.js app with mTLS enabled, referencing the local Sentry service: ```bash daprd --app-id nodeapp --dapr-grpc-port 50002 -dapr-http-port 3501 --log-level debug --app-port 3000 --enable-mtls --sentry-address localhost:50001 --config nodeappconfig.yaml ``` -4. Run the node app in a separate command prompt: +1. Run the Node.js app in a separate command prompt: ```bash node app.js ``` -5. In another command prompt, set these environment variables: +#### Run the Python app + +1. 
In another command prompt, set these environment variables: {{< tabs "Linux/MacOS" Windows >}} {{% codetab %}} + + ```bash + export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` + export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` + export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` + export NAMESPACE=default + ``` {{% /codetab %}} {{% codetab %}} + ```powershell $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt) $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt) $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key) $env:NAMESPACE="default" - ``` + ``` + {{% /codetab %}} {{< /tabs >}} -6. Run daprd to launch a Dapr sidecar for the python app with mTLS enabled, referencing the local Sentry service: +1. Run daprd to launch a Dapr sidecar for the Python app with mTLS enabled, referencing the local Sentry service: ```bash daprd --app-id pythonapp --dapr-grpc-port 50003 --metrics-port 9092 --log-level debug --enable-mtls --sentry-address localhost:50001 --config pythonappconfig.yaml ``` - -7. Run the python app in a separate command prompt: +1. Run the Python app in a separate command prompt: ```bash python app.py ``` -8. You should see the calls to the node app fail in the python app command prompt based due to the **deny** operation action in the nodeappconfig file. Change this action to **allow** and re-run the apps and you should then see this call succeed. +You should see the calls to the Node.js app fail in the Python app command prompt, due to the **deny** operation action in the `nodeappconfig` file. Change this action to **allow** and re-run the apps to see this call succeed. ### Kubernetes mode -This example uses the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes/README.md) quickstart. -You can create and apply the above configuration files `nodeappconfig.yaml` and `pythonappconfig.yaml` as described in the [configuration]({{< ref "configuration-concept.md" >}}) to the Kubernetes deployments. +#### Prerequisites -For example, below is how the pythonapp is deployed to Kubernetes in the default namespace with this pythonappconfig configuration file. -Do the same for the nodeapp deployment and then look at the logs for the pythonapp to see the calls fail due to the **deny** operation action set in the nodeappconfig file. Change this action to **allow** and re-deploy the apps and you should then see this call succeed. +- Become familiar with running [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled +- Clone the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes/README.md) tutorial +#### Configure the Node.js and Python apps + +You can create and apply the above [`nodeappconfig.yaml`](#nodeappconfigyaml) and [`pythonappconfig.yaml`](#pythonappconfigyaml) configuration files, as described in the [configuration]({{< ref "configuration-concept.md" >}}). + +For example, the Kubernetes Deployment below is how the Python app is deployed to Kubernetes in the default namespace with this `pythonappconfig` configuration file. 
+ +Do the same for the Node.js deployment and look at the logs for the Python app to see the calls fail due to the **deny** operation action set in the `nodeappconfig` file. + +Change this action to **allow** and re-deploy the apps to see this call succeed. + +##### Deployment YAML example ```yaml apiVersion: apps/v1 @@ -375,9 +428,14 @@ spec: image: dapriosamples/hello-k8s-python:edge ``` -## Community call demo +## Demo + Watch this [video](https://youtu.be/j99RN_nxExA?t=1108) on how to apply access control list for service invocation.
-
\ No newline at end of file + + +## Next steps + +{{< button text="Dapr APIs allow list" page="api-allowlist" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/preview-features.md b/daprdocs/content/en/operations/configuration/preview-features.md index 387ba0fa6..1e442dcc5 100644 --- a/daprdocs/content/en/operations/configuration/preview-features.md +++ b/daprdocs/content/en/operations/configuration/preview-features.md @@ -6,23 +6,21 @@ weight: 7000 description: "How to specify and enable preview features" --- -## Overview -Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration. +[Preview features]({{< ref support-preview-features >}}) in Dapr are considered experimental when they are first released. These preview features require you to explicitly opt-in to use them. You specify this opt-in in Dapr's Configuration file. Preview features are enabled on a per application basis by setting configuration when running an application instance. -### Preview features -The current list of preview features can be found [here]({{}}). - ## Configuration properties + The `features` section under the `Configuration` spec contains the following properties: | Property | Type | Description | |----------------|--------|-------------| -|name|string|The name of the preview feature that is enabled/disabled -|enabled|bool|Boolean specifying if the feature is enabled or disabled +|`name`|string|The name of the preview feature that is enabled/disabled +|`enabled`|bool|Boolean specifying if the feature is enabled or disabled ## Enabling a preview feature + Preview features are specified in the configuration. Here is an example of a full configuration that contains multiple features: ```yaml @@ -42,7 +40,11 @@ spec: enabled: true ``` -### Standalone +{{< tabs Self-hosted Kubernetes >}} + + +{{% codetab %}} + To enable preview features when running Dapr locally, either update the default configuration or specify a separate config file using `dapr run`. The default Dapr config is created when you run `dapr init`, and is located at: @@ -55,8 +57,11 @@ Alternately, you can update preview features on all apps run locally by specifyi dapr run --app-id myApp --config ./previewConfig.yaml ./app ``` +{{% /codetab %}} + + +{{% codetab %}} -### Kubernetes In Kubernetes mode, the configuration must be provided via a configuration component. Using the same configuration as above, apply it via `kubectl`: ```bash @@ -94,3 +99,11 @@ spec: - containerPort: 3000 imagePullPolicy: Always ``` + +{{% /codetab %}} + +{{< /tabs >}} + +## Next steps + +{{< button text="Configuration schema" page="configuration-schema" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/secret-scope.md b/daprdocs/content/en/operations/configuration/secret-scope.md index 397964472..bd718288d 100644 --- a/daprdocs/content/en/operations/configuration/secret-scope.md +++ b/daprdocs/content/en/operations/configuration/secret-scope.md @@ -3,12 +3,19 @@ type: docs title: "How-To: Limit the secrets that can be read from secret stores" linkTitle: "Limit secret store access" weight: 3000 -description: "To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration resource with restrictive permissions." 
+description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions." +description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions." --- -In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components]({{< ref "component-scopes.md">}})), a named secret store component itself can be scoped to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to access only specific secrets. +In addition to [scoping which applications can access a given component]({{< ref "component-scopes.md">}}), you can also scop a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets. +In addition to [scoping which applications can access a given component]({{< ref "component-scopes.md">}}), you can also scop a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets. -Follow [these instructions]({{< ref "configuration-overview.md" >}}) to define a configuration resource. +For more information about configuring a Configuration resource: +- [Configuration overview]({{< ref configuration-overview.md >}}) +- [Configuration schema]({{< ref configuration-schema.md >}}) +For more information about configuring a Configuration resource: +- [Configuration overview]({{< ref configuration-overview.md >}}) +- [Configuration schema]({{< ref configuration-schema.md >}}) ## Configure secrets access @@ -38,38 +45,64 @@ When an `allowedSecrets` list is present with at least one element, only those s ## Permission priority -The `allowedSecrets` and `deniedSecrets` list values take priorty over the `defaultAccess`. +The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess`. 
See how this works in the following example scenarios:

-| Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
-|----- | ------- | -----------| ----------| ------------
-| 1 - Only default access | deny/allow | empty | empty | deny/allow
-| 2 - Default deny with allowed list | deny | ["s1"] | empty | only "s1" can be accessed
-| 3 - Default allow with denied list | allow | empty | ["s1"] | only "s1" cannot be accessed
-| 4 - Default allow with allowed list | allow | ["s1"] | empty | only "s1" can be accessed
-| 5 - Default deny with denied list | deny | empty | ["s1"] | deny
-| 6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed
+| | Scenarios | `defaultAccess` | `allowedSecrets` | `deniedSecrets` | `permission`
+|--| ----- | ------- | -----------| ----------| ------------
+| 1 | Only default access | `deny`/`allow` | empty | empty | `deny`/`allow`
+| 2 | Default deny with allowed list | `deny` | [`"s1"`] | empty | only `"s1"` can be accessed
+| 3 | Default allow with denied list | `allow` | empty | [`"s1"`] | only `"s1"` cannot be accessed
+| 4 | Default allow with allowed list | `allow` | [`"s1"`] | empty | only `"s1"` can be accessed
+| 5 | Default deny with denied list | `deny` | empty | [`"s1"`] | `deny`
+| 6 | Default deny/allow with both lists | `deny`/`allow` | [`"s1"`] | [`"s2"`] | only `"s1"` can be accessed

## Examples

-### Scenario 1 : Deny access to all secrets for a secret store
+### Scenario 1: Deny access to all secrets for a secret store

-In Kubernetes cluster, the native Kubernetes secret store is added to Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
+In a Kubernetes cluster, the native Kubernetes secret store is added to your Dapr application by default. In some scenarios, it may be necessary to deny access to Dapr secrets for a given application. To add this configuration:

-Define the following `appconfig.yaml` and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
+1. Define the following `appconfig.yaml`.

-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Configuration
-metadata:
-  name: appconfig
-spec:
-  secrets:
-    scopes:
-      - storeName: kubernetes
-        defaultAccess: deny
-```
+   ```yaml
+   apiVersion: dapr.io/v1alpha1
+   kind: Configuration
+   metadata:
+     name: appconfig
+   spec:
+     secrets:
+       scopes:
+         - storeName: kubernetes
+           defaultAccess: deny
+   ```

-For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview >}}), and add the following annotation to the application pod.
+1. Apply it to the Kubernetes cluster using the following command:
+
+   ```bash
+   kubectl apply -f appconfig.yaml
+   ```
+
+For applications that you need to deny access to the Kubernetes secret store, follow [the Kubernetes instructions]({{< ref kubernetes-overview >}}), adding the following annotation to the application pod.
```yaml
dapr.io/config: appconfig

@@ -77,7 +110,8 @@ dapr.io/config: appconfig

With this defined, the application no longer has access to Kubernetes secret store.

-### Scenario 2 : Allow access to only certain secrets in a secret store
+### Scenario 2: Allow access to only certain secrets in a secret store

To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`:

@@ -94,7 +128,8 @@ spec:
        allowedSecrets: ["secret1", "secret2"]
```

-This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.
+This example defines configuration for the secret store named `vault`. The default access to the secret store is `deny`. Meanwhile, some secrets are accessible by the application based on the `allowedSecrets` list. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.

### Scenario 3: Deny access to certain sensitive secrets in a secret store

@@ -113,4 +148,13 @@ spec:
        deniedSecrets: ["secret1", "secret2"]
```

-The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.
+This configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault`, while allowing access to all other secrets. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.
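To verify the scoping behavior, you can read secrets through the Dapr secrets API. A minimal sketch, assuming the Scenario 3 configuration above, a sidecar listening on port 3500, and a hypothetical secret named `secret3` in the store:

```bash
# Denied by the deniedSecrets list: expect an access-denied error
curl http://localhost:3500/v1.0/secrets/vault/secret1

# secret3 is not in deniedSecrets, so defaultAccess: allow applies
curl http://localhost:3500/v1.0/secrets/vault/secret3
```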
+
+## Next steps
+
+{{< button text="Service invocation access control" page="invoke-allowlist" >}}
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md
index b6e9b7747..6a87484cc 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/cluster/setup-eks.md
@@ -16,6 +16,7 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
   - [AWS CLI](https://aws.amazon.com/cli/)
   - [eksctl](https://eksctl.io/)
   - [An existing VPC and subnets](https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html)
+  - [Dapr CLI](https://docs.dapr.io/getting-started/install-dapr-cli/)

## Deploy an EKS cluster

@@ -25,20 +26,57 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
    aws configure
    ```

-1. Create an EKS cluster. To use a specific version of Kubernetes, use `--version` (1.13.x or newer version required).
+1. Create a new file called `cluster-config.yaml` and add the content below to it, replacing `[your_cluster_name]`, `[your_cluster_region]`, and `[your_k8s_version]` with the appropriate values:
+
+   ```yaml
+   apiVersion: eksctl.io/v1alpha5
+   kind: ClusterConfig
+
+   metadata:
+     name: [your_cluster_name]
+     region: [your_cluster_region]
+     version: [your_k8s_version]
+     tags:
+       karpenter.sh/discovery: [your_cluster_name]
+
+   iam:
+     withOIDC: true
+
+   managedNodeGroups:
+     - name: mng-od-4vcpu-8gb
+       desiredCapacity: 2
+       minSize: 1
+       maxSize: 5
+       instanceType: c5.xlarge
+       privateNetworking: true
+
+   addons:
+     - name: vpc-cni
+       attachPolicyARNs:
+         - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
+     - name: coredns
+       version: latest
+     - name: kube-proxy
+       version: latest
+     - name: aws-ebs-csi-driver
+       wellKnownPolicies:
+         ebsCSIController: true
+   ```
+
+1. Create the cluster by running the following command:

    ```bash
-   eksctl create cluster --name [your_eks_cluster_name] --region [your_aws_region] --version [kubernetes_version] --vpc-private-subnets [subnet_list_seprated_by_comma] --without-nodegroup
+   eksctl create cluster -f cluster-config.yaml
    ```
-
-   Change the values for `vpc-private-subnets` to meet your requirements. You can also add additional IDs. You must specify at least two subnet IDs. If you'd rather specify public subnets, you can change `--vpc-private-subnets` to `--vpc-public-subnets`.
-
-1. Verify kubectl context:
+
+1. Verify the kubectl context:

    ```bash
    kubectl config current-context
    ```

+## Add Dapr requirements for sidecar access and default storage class
+
1. Update the security group rule to allow the EKS cluster to communicate with the Dapr Sidecar by creating an inbound rule for port 4000.

    ```bash
@@ -49,11 +87,37 @@ This guide walks you through installing an Elastic Kubernetes Service (EKS) clus
       --source-group [your_security_group]
    ```

+2. Add a default storage class if you don't have one:
+
+   ```bash
+   kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+   ```
+
+## Install Dapr
+
+Install Dapr on your cluster by running:
+
+```bash
+dapr init -k
+```
+
+You should see the following response:
+
+```bash
+⌛ Making the jump to hyperspace...
+ℹ️ Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced
+
+ℹ️ Container images will be pulled from Docker Hub
+✅ Deploying the Dapr control plane with latest version to your cluster...
+✅ Deploying the Dapr dashboard with latest version to your cluster...
+✅ Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://docs.dapr.io/getting-started
+```
+
## Troubleshooting

### Access permissions

-If you face any access permissions, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile:
+If you face any access permission issues, make sure you are using the same AWS profile that was used to create the cluster. If needed, update the kubectl configuration with the correct profile. More information [here](https://repost.aws/knowledge-center/eks-api-server-unauthorized-error):

```bash
aws eks --region [your_aws_region] update-kubeconfig --name [your_eks_cluster_name] --profile [your_profile_name]
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-deploy.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-deploy.md
index 658d1475e..41af7c0d8 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-deploy.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-deploy.md
@@ -231,6 +231,19 @@ You can install Dapr on Kubernetes using a Helm v3 chart.
      --wait
    ```

+   To install with **high availability** enabled for select services independently of the global setting:
+
+   ```bash
+   helm upgrade --install dapr dapr/dapr \
+   --version={{% dapr-latest-version short="true" %}} \
+   --namespace dapr-system \
+   --create-namespace \
+   --set global.ha.enabled=false \
+   --set dapr_scheduler.ha=true \
+   --set dapr_placement.ha=true \
+   --wait
+   ```
+
See [Guidelines for production ready deployments on Kubernetes]({{< ref kubernetes-production.md >}}) for more information on installing and upgrading Dapr using Helm.

### (optional) Install the Dapr dashboard as part of the control plane
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md
index 9172a28fe..8c877d73c 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md
@@ -7,10 +7,123 @@ description: "Configure Scheduler to persist its database to make it resilient t
---

The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution.
-By default, the Scheduler service database writes this data to a Persistent Volume Claim of 1Gb of size using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need additional configuration in some deployments or for a production environment.
+By default, the Scheduler service database writes data to a Persistent Volume Claim of size `1Gi`, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need [additional configuration](#storage-class) if a default StorageClass is not available or when running a production environment.
+
+{{% alert title="Warning" color="warning" %}}
+The default storage size for the Scheduler is `1Gi`, which is likely not sufficient for most production deployments.
+Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}) when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled, and the [Jobs API]({{< ref jobs_api.md >}}).
+You may want to consider reinstalling Dapr with a larger Scheduler storage of at least `16Gi` or more.
+For more information, see the [ETCD Storage Disk Size](#etcd-storage-disk-size) section below.
+{{% /alert %}}

## Production Setup

+### ETCD Storage Disk Size
+
+The default storage size for the Scheduler is `1Gi`.
+This size is likely not sufficient for most production deployments.
+When the storage size is exceeded, the Scheduler will log an error similar to the following:
+
+```
+error running scheduler: etcdserver: mvcc: database space exceeded
+```
+
+Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and the data payload size of your application jobs.
+The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) (with the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature enabled) transparently map one-to-one to the usage of your applications.
+Workflows (when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled) create a large number of jobs as Actor Reminders; however, these jobs are short-lived, matching the lifecycle of each workflow execution.
+The data payload of jobs created by Workflows is typically empty or small.
+
+The Scheduler uses Etcd as its storage backend database.
+By design, Etcd persists historical transactions and data in the form of [Write-Ahead Logs (WAL) and snapshots](https://etcd.io/docs/v3.5/learning/persistent-storage-files/).
+This means the actual disk usage of the Scheduler will be higher than the current observable database state, often by a number of multiples.
+
+### Setting the Storage Size on Installation
+
+If you need to increase an **existing** Scheduler storage size, see the [Increase existing Scheduler Storage Size](#increase-existing-scheduler-storage-size) section below.
+To increase the storage size (in this example, `16Gi`) for a **fresh** Dapr installation, you can use the following command:
+
+{{< tabs "Dapr CLI" "Helm" >}}
+
+{{% codetab %}}
+
+```bash
+dapr init -k --set dapr_scheduler.cluster.storageSize=16Gi --set dapr_scheduler.etcdSpaceQuota=16Gi
+```
+
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
+```bash
+helm upgrade --install dapr dapr/dapr \
+--version={{% dapr-latest-version short="true" %}} \
+--namespace dapr-system \
+--create-namespace \
+--set dapr_scheduler.cluster.storageSize=16Gi \
+--set dapr_scheduler.etcdSpaceQuota=16Gi \
+--wait
+```
+
+{{% /codetab %}}
+{{< /tabs >}}
+
+#### Increase existing Scheduler Storage Size
+
+{{% alert title="Warning" color="warning" %}}
+Not all storage providers support dynamic volume expansion.
+Please see your storage provider documentation to determine if this feature is supported, and what to do if it is not.
+{{% /alert %}}
+
+By default, each Scheduler will create a Persistent Volume and Persistent Volume Claim of size `1Gi` against the [default `standard` storage class](#storage-class) for each Scheduler replica.
+These will look similar to the following, where in this example we are running Scheduler in HA mode.
+
+```
+NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 Bound pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO standard 3m25s
+dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-1 Bound pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO standard 3m25s
+dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-2 Bound pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO standard 3m25s
+```
+
+```
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
+pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-0 standard 4m24s
+pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-2 standard 4m24s
+pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-1 standard 4m24s
+```
+
+To expand the storage size of the Scheduler, follow these steps:
+
+1. First, ensure that the storage class supports volume expansion, and that the `allowVolumeExpansion` field is set to `true` if it is not already.
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: standard
+provisioner: my.driver
+allowVolumeExpansion: true
+...
+```
+
+2. Delete the Scheduler StatefulSet whilst preserving the Bound Persistent Volume Claims.
+
+```bash
+kubectl delete sts -n dapr-system dapr-scheduler-server --cascade=orphan
+```
+
+3. Increase the size of the Persistent Volume Claims to the desired size by editing the `spec.resources.requests.storage` field.
+   Again in this case, we are assuming that the Scheduler is running in HA mode with 3 replicas.
+
+```bash
+kubectl edit pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2
+```
+
+4. Recreate the Scheduler StatefulSet by [installing Dapr with the desired storage size](#setting-the-storage-size-on-installation).
+
+### Storage Class
+
In case your Kubernetes deployment does not have a default storage class or you are configuring a production cluster, defining a storage class is required.

A persistent volume is backed by a real disk that is provided by the hosted Cloud Provider or Kubernetes infrastructure platform.
@@ -59,8 +172,8 @@ helm upgrade --install dapr dapr/dapr \

## Ephemeral Storage

-Scheduler can be optionally made to use Ephemeral storage, which is in-memory storage which is **not** resilient to restarts, i.e. all Job data will be lost after a Scheduler restart.
-This is useful for deployments where storage is not available or required, or for testing purposes.
+When running in non-HA mode, the Scheduler can be optionally made to use ephemeral storage, which is in-memory storage that is **not** resilient to restarts. For example, all job data is lost after a Scheduler restart.
+This is useful in non-production deployments or for testing where storage is not available or required.

{{% alert title="Note" color="primary" %}}
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated without the persistent volume.
{{% /alert %}}
diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
index ab42e5515..1151137ef 100644
--- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
+++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-production.md
@@ -95,6 +95,25 @@ For a new Dapr deployment, HA mode can be set with both:

For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}).

+### Individual service HA Helm configuration
+
+You can configure HA mode via Helm across all services by setting the `global.ha.enabled` flag to `true`. When `global.ha.enabled` is set to `true`, it is fully respected and cannot be overridden per service, making it impossible to simultaneously have either the placement or scheduler service run as a single instance.
+
+> **Note:** HA for scheduler and placement services is not the default setting.
+
+To scale scheduler and placement to three instances independently of the `global.ha.enabled` flag, set `global.ha.enabled` to `false` and `dapr_scheduler.ha` and `dapr_placement.ha` to `true`. For example:
+
+   ```bash
+   helm upgrade --install dapr dapr/dapr \
+   --version={{% dapr-latest-version short="true" %}} \
+   --namespace dapr-system \
+   --create-namespace \
+   --set global.ha.enabled=false \
+   --set dapr_scheduler.ha=true \
+   --set dapr_placement.ha=true \
+   --wait
+   ```
+
## Setting cluster critical priority class name for control plane services

In some scenarios, nodes may have memory and/or cpu pressure and the Dapr control plane pods might get selected
@@ -260,6 +279,22 @@ Verify your production-ready deployment includes the following settings:

1. Dapr supports and is enabled to **scope components for certain applications**. This is not a required practice. [Learn more about component scopes]({{< ref "component-scopes.md" >}}).

+## Recommended Placement service configuration
+
+The [Placement service]({{< ref "placement.md" >}}) is a component in Dapr, responsible for disseminating information about actor addresses to all Dapr sidecars via a placement table (more information on this can be found [here]({{< ref "actors-features-concepts.md#actor-placement-service" >}})).
+
+When running in production, it's recommended to configure the Placement service with the following values:
+
+1. **High availability**. Ensure the Placement service is highly available (three replicas) and can survive individual node failures. Helm chart value: `dapr_placement.ha=true`
+2. **In-memory logs**. Use in-memory Raft log store for faster writes. The tradeoff is more placement table disseminations (and thus, network traffic) in an eventual Placement service pod failure. Helm chart value: `dapr_placement.cluster.forceInMemoryLog=true`
+3. **No metadata endpoint**. Disable the unauthenticated `/placement/state` endpoint which exposes placement table information for the Placement service. Helm chart value: `dapr_placement.metadataEnabled=false`
+4.
**Timeouts**. Control the sensitivity of network connectivity between the Placement service and the sidecars using the timeout values below. Default values are set, but you can adjust these based on your network conditions.
+    1. `dapr_placement.keepAliveTime` sets the interval at which the Placement service sends [keep alive](https://grpc.io/docs/guides/keepalive/) pings to Dapr sidecars on the gRPC stream to check if the connection is still alive. Lower values will lead to shorter actor rebalancing time in case of pod loss/restart, but higher network traffic during normal operation. Accepts values between `1s` and `10s`. Default is `2s`.
+    2. `dapr_placement.keepAliveTimeout` sets the timeout period for Dapr sidecars to respond to the Placement service's [keep alive](https://grpc.io/docs/guides/keepalive/) pings before the Placement service closes the connection. Lower values will lead to shorter actor rebalancing time in case of pod loss/restart, but higher network traffic during normal operation. Accepts values between `1s` and `10s`. Default is `3s`.
+    3. `dapr_placement.disseminateTimeout` sets the timeout period for dissemination to be delayed after actor membership change (usually related to pod restarts) to avoid excessive dissemination during multiple pod restarts. Higher values will reduce the frequency of dissemination, but delay the table dissemination. Accepts values between `1s` and `3s`. Default is `2s`.
+
+
## Service account tokens

By default, Kubernetes mounts a volume containing a [Service Account token](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in each container. Applications can use this token, whose permissions vary depending on the configuration of the cluster and namespace, among other things, to perform API calls against the Kubernetes control plane.
diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md
index 3e7c090cb..78f0e2c75 100644
--- a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md
+++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md
@@ -138,6 +138,18 @@ services:
    command: ["./placement", "--port", "50006"]
    ports:
      - "50006:50006"
+
+  scheduler:
+    image: "daprio/dapr"
+    command: ["./scheduler", "--port", "50007"]
+    ports:
+      - "50007:50007"
+    # WARNING - This is a tmpfs volume, your state will not be persisted across restarts
+    volumes:
+    - type: tmpfs
+      target: /data
+      tmpfs:
+        size: "10000"
  networks:
    hello-dapr: null

@@ -147,6 +159,8 @@ services:

To further learn how to run Dapr with Docker Compose, see the [Docker-Compose Sample](https://github.com/dapr/samples/tree/master/hello-docker-compose).

+The above example also includes a scheduler definition that uses a non-persistent data store for testing and development purposes.
+
## Run on Kubernetes

If your deployment target is Kubernetes, please use Dapr's first-class integration.
Refer to the diff --git a/daprdocs/content/en/operations/observability/_index.md b/daprdocs/content/en/operations/observability/_index.md index fbbd1abbe..4fc85c257 100644 --- a/daprdocs/content/en/operations/observability/_index.md +++ b/daprdocs/content/en/operations/observability/_index.md @@ -6,7 +6,7 @@ weight: 60 description: See and measure the message calls to components and between networked services --- -[The following overview video and demo](https://www.youtube.com/live/0y7ne6teHT4?si=3bmNSSyIEIVSF-Ej&t=9931) demonstrates how observability in Dapr works. +[The following overview video and demo](https://www.youtube.com/watch?v=0y7ne6teHT4&t=12652s) demonstrates how observability in Dapr works. diff --git a/daprdocs/content/en/operations/observability/metrics/metrics-overview.md b/daprdocs/content/en/operations/observability/metrics/metrics-overview.md index 5f07bb325..1df663ab7 100644 --- a/daprdocs/content/en/operations/observability/metrics/metrics-overview.md +++ b/daprdocs/content/en/operations/observability/metrics/metrics-overview.md @@ -70,6 +70,38 @@ spec: enabled: false ``` +## Configuring metrics for error codes + +You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. [A new metric called `error_code_total` is recorded]({{< ref errors-overview.md >}}), which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories. + +Example configuration: +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: tracing + namespace: default +spec: + metrics: + enabled: true + recordErrorCodes: true +``` + +Example metric: +```json +{ + "app_id": "publisher-app", + "category": "state", + "dapr_io_enabled": "true", + "error_code": "ERR_STATE_STORE_NOT_CONFIGURED", + "instance": "10.244.1.64:9090", + "job": "kubernetes-service-endpoints", + "namespace": "my-app", + "node": "my-node", + "service": "publisher-app-dapr" +} +``` + ## Optimizing HTTP metrics reporting with path matching When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU. 
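As a sketch of how this can be mitigated, paths can be collapsed into templates so that, for example, all order lookups report as a single metric series. The `metrics.http` settings shown here follow the Configuration metrics spec, and the `/orders/{orderID}` path is illustrative; adjust it to your application's routes:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  metrics:
    enabled: true
    http:
      increasedCardinality: true
      pathMatching:
        - /orders/{orderID}   # /orders/1, /orders/2, ... aggregate into one metric series
```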
diff --git a/daprdocs/content/en/operations/observability/metrics/prometheus.md b/daprdocs/content/en/operations/observability/metrics/prometheus.md index 3c787602f..04e49a42e 100644 --- a/daprdocs/content/en/operations/observability/metrics/prometheus.md +++ b/daprdocs/content/en/operations/observability/metrics/prometheus.md @@ -93,13 +93,108 @@ helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring --set alertmanager.persistence.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false ``` +For automatic discovery of Dapr targets (Service Discovery), use: + +```bash + helm install dapr-prom prometheus-community/prometheus -f values.yaml -n dapr-monitoring --create-namespace +``` + +### `values.yaml` File + +```yaml +alertmanager: + persistence: + enabled: false +pushgateway: + persistentVolume: + enabled: false +server: + persistentVolume: + enabled: false + +# Adds additional scrape configurations to prometheus.yml +# Uses service discovery to find Dapr and Dapr sidecar targets +extraScrapeConfigs: |- + - job_name: dapr-sidecars + kubernetes_sd_configs: + - role: pod + relabel_configs: + - action: keep + regex: "true" + source_labels: + - __meta_kubernetes_pod_annotation_dapr_io_enabled + - action: keep + regex: "true" + source_labels: + - __meta_kubernetes_pod_annotation_dapr_io_enable_metrics + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_namespace + target_label: namespace + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_pod_name + target_label: pod + - action: replace + regex: (.*);daprd + replacement: ${1}-dapr + source_labels: + - __meta_kubernetes_pod_annotation_dapr_io_app_id + - __meta_kubernetes_pod_container_name + target_label: service + - action: replace + replacement: ${1}:9090 + source_labels: + - __meta_kubernetes_pod_ip + target_label: __address__ + + - job_name: dapr + kubernetes_sd_configs: + - role: pod + relabel_configs: + - action: keep + regex: dapr + source_labels: + - __meta_kubernetes_pod_label_app_kubernetes_io_name + - action: keep + regex: dapr + source_labels: + - __meta_kubernetes_pod_label_app_kubernetes_io_part_of + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_pod_label_app + target_label: app + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_namespace + target_label: namespace + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_pod_name + target_label: pod + - action: replace + replacement: ${1}:9090 + source_labels: + - __meta_kubernetes_pod_ip + target_label: __address__ +``` + 3. Validation Ensure Prometheus is running in your cluster. ```bash kubectl get pods -n dapr-monitoring +``` +Expected output: + +```bash NAME READY STATUS RESTARTS AGE dapr-prom-kube-state-metrics-9849d6cc6-t94p8 1/1 Running 0 4m58s dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8 2/2 Running 0 4m58s @@ -110,6 +205,22 @@ dapr-prom-prometheus-pushgateway-688665d597-h4xx2 1/1 Running 0 dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 4m58s ``` +### Access the Prometheus Dashboard + +To view the Prometheus dashboard and check service discovery: + +```bash +kubectl port-forward svc/dapr-prom-prometheus-server 9090:80 -n dapr-monitoring +``` + +Open a browser and visit `http://localhost:9090`. Navigate to **Status** > **Service Discovery** to verify that the Dapr targets are discovered correctly. 
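Once the targets appear, one way to confirm that Dapr metrics are actually being scraped is to query the Prometheus HTTP API through the same port-forward. The metric name below is an assumed example of a standard Dapr sidecar metric; any metric visible under the discovered targets works:

```bash
# Query for a Dapr sidecar metric; a non-empty "result" array means scraping works
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=dapr_http_server_request_count'
```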
+
+*(Screenshot: Prometheus Web UI)*
+
+You can see the `job_name` and its discovered targets.
+
+*(Screenshot: Prometheus Service Discovery)*
+

## Example
diff --git a/daprdocs/content/en/operations/resiliency/policies.md b/daprdocs/content/en/operations/resiliency/policies.md
index db72dd78c..99a71eaef 100644
--- a/daprdocs/content/en/operations/resiliency/policies.md
+++ b/daprdocs/content/en/operations/resiliency/policies.md
@@ -35,7 +35,13 @@ If you don't specify a timeout value, the policy does not enforce a time and def

## Retries

-With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:
+With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy.
+
+{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
+Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
+{{% /alert %}}
+
+The following retry options are configurable:

| Retry option | Description |
| ------------ | ----------- |
@@ -43,6 +49,15 @@ With `retries`, you can define a retry strategy for failed operations, including
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.
Valid values are of the form `200ms`, `15s`, `2m`, etc.
Defaults to `5s`.| | `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow.
Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc. |
| `maxRetries` | The maximum number of retries to attempt.
`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).
Defaults to `-1`. | +| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried.
Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)
Format: `<code>` or range `<start>-<end>`
Example: "429,501-503"
Default: empty string `""` or field is not set. Retries on all HTTP errors. | +| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried.
Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)
Format: `<code>` or range `<start>-<end>`
Example: "1,8-11,13"
Default: empty string `""` or field is not set. Retries on all gRPC errors. |
+
+
+{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
+Field values should follow the format specified in the field description, or as shown in "Example 2" below.
+An incorrectly formatted value produces an error log ("Could not read resiliency policy"), and the `daprd` startup sequence will proceed.
+{{% /alert %}}
+

The exponential back-off window uses the following formula:

@@ -71,7 +86,20 @@ spec:
      maxRetries: -1 # Retry indefinitely
```

+Example 2:
+
+```yaml
+spec:
+  policies:
+    retries:
+      retry5xxOnly:
+        policy: constant
+        duration: 5s
+        maxRetries: 3
+        matching:
+          httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
+          gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
+```

## Circuit Breakers

@@ -82,7 +110,7 @@ Circuit Breaker (CB) policies are used when other applications/services/componen
| `maxRequests` | The maximum number of requests allowed to pass through when the CB is half-open (recovering from failure). Defaults to `1`. |
| `interval` | The cyclical period of time used by the CB to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
| `timeout` | The period of the open state (directly after failure) until the CB switches to half-open. Defaults to `60s`. |
-| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. |
+| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. Other possible values are `requests` and `totalFailures`, where `requests` represents the number of either successful or failed calls before the circuit opens and `totalFailures` represents the total (not necessarily consecutive) number of failed attempts before the circuit opens. Examples: `requests > 5` and `totalFailures > 3`.|

Example:
diff --git a/daprdocs/content/en/operations/support/alpha-beta-apis.md b/daprdocs/content/en/operations/support/alpha-beta-apis.md
index 66d9470ff..7516bfc47 100644
--- a/daprdocs/content/en/operations/support/alpha-beta-apis.md
+++ b/daprdocs/content/en/operations/support/alpha-beta-apis.md
@@ -15,13 +15,13 @@ description: "List of current alpha and beta APIs"

| Bulk Publish | [Bulk publish proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L59) | `v1.0-alpha1/publish/bulk` | The bulk publish API allows you to publish multiple messages to a topic in a single request. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Bulk Subscribe | [Bulk subscribe proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/appcallback.proto#L57) | N/A | The bulk subscribe application callback receives multiple messages from a topic in a single call.
| [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Cryptography | [Crypto proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L118) | `v1.0-alpha1/crypto` | The cryptography API enables you to perform **high level** cryptography operations for encrypting and decrypting messages. | [Cryptography API]({{< ref "cryptography-overview.md" >}}) | v1.11 |
-| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L198-L204) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
+| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L212-L219) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
+| Conversation | [Conversation proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L221-L222) | `v1.0-alpha1/conversation` | Converse between different large language models using the conversation API. | [Conversation API]({{< ref "conversation-overview.md" >}}) | v1.15 |
+
## Beta APIs

-| Building block/API | gRPC | HTTP | Description | Documentation | Version introduced |
-| ------------------ | ---- | ---- | ----------- | ------------- | ------------------ |
-| Workflow | [Workflow proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L151) | `/v1.0-beta1/workflow` | The workflow API enables you to define long running, persistent processes or data flows. | [Workflow API]({{< ref "workflow-overview.md" >}}) | v1.10 |
+No current beta APIs.

## Related links
diff --git a/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md b/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md
index 50dc764c8..995f62287 100644
--- a/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md
+++ b/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md
@@ -68,6 +68,7 @@ After announcing a future breaking change, the change will happen in 2 releases
| Hazelcast PubSub Component | 1.9.0 | 1.11.0 |
| Twitter Binding Component | 1.10.0 | 1.11.0 |
| NATS Streaming PubSub Component | 1.11.0 | 1.13.0 |
+| Workflows API Alpha1 `/v1.0-alpha1/workflows` being deprecated in favor of Workflow Client | 1.15.0 | 1.17.0 |

## Related links
diff --git a/daprdocs/content/en/operations/support/support-preview-features.md b/daprdocs/content/en/operations/support/support-preview-features.md
index 943b35d0e..07ae1b9a6 100644
--- a/daprdocs/content/en/operations/support/support-preview-features.md
+++ b/daprdocs/content/en/operations/support/support-preview-features.md
@@ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded".
A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 |
-| **Job actor reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, job actor reminders (used for scheduling actor reminders) are enabled through a preview feature and needs a feature flag. | `SchedulerReminders`| [Job actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 |
+| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "scheduler.md#actor-reminders" >}}) | v1.14 |
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md
index 76a1a5730..fbba03b5f 100644
--- a/daprdocs/content/en/operations/support/support-release-policy.md
+++ b/daprdocs/content/en/operations/support/support-release-policy.md
@@ -24,7 +24,7 @@ A supported release means:

From the 1.8.0 release onwards three (3) versions of Dapr are supported; the current and previous two (2) versions. Typically these are `MINOR` release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr you may have to do intermediate upgrades to get to a supported version.

-There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading.
+There will be at least 13 weeks (3 months) between major.minor version releases, giving users at least a 9 month rolling window for upgrading from a non-supported version.

For more details on the release process read [release cycle and cadence](https://github.com/dapr/community/blob/master/release-process.md)

Patch support is for supported versions (current and previous).
@@ -45,6 +45,10 @@ The table below shows the versions of Dapr releases that have been tested togeth

| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
+| September 16th 2024 | 1.14.4
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) | +| September 13th 2024 | 1.14.3
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) | +| September 6th 2024 | 1.14.2
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) | +| August 14th 2024 | 1.14.1
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) | | August 14th 2024 | 1.14.0
| 1.14.0 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) | | May 29th 2024 | 1.13.4
| 1.13.0 | Java 1.11.0
Go 1.10.0
PHP 1.2.0
Python 1.13.0
.NET 1.13.0
JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) | | May 21st 2024 | 1.13.3
| 1.13.0 | Java 1.11.0
Go 1.10.0
PHP 1.2.0
Python 1.13.0
.NET 1.13.0
JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
@@ -134,13 +138,12 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| | 1.8.6 | 1.9.6 |
| | 1.9.6 | 1.10.7 |
| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
-| 1.9.0 | N/A | 1.9.6 |
-| 1.10.0 | N/A | 1.10.8 |
-| 1.11.0 | N/A | 1.11.4 |
-| 1.12.0 | N/A | 1.12.4 |
-| 1.12.0 to 1.13.0 | N/A | 1.13.4 |
-| 1.13.0 | N/A | 1.13.4 |
-| 1.13.0 to 1.14.0 | N/A | 1.14.0 |
+| 1.9.0 to 1.9.6 | N/A | 1.10.8 |
+| 1.10.0 to 1.10.8 | N/A | 1.11.4 |
+| 1.11.0 to 1.11.4 | N/A | 1.12.4 |
+| 1.12.0 to 1.12.4 | N/A | 1.13.5 |
+| 1.13.0 to 1.13.5 | N/A | 1.14.0 |
+| 1.14.0 to 1.14.2 | N/A | 1.14.2 |

## Upgrade on Hosting platforms
diff --git a/daprdocs/content/en/operations/support/support-security-issues.md b/daprdocs/content/en/operations/support/support-security-issues.md
index 1ae3fce27..6e7b24a2d 100644
--- a/daprdocs/content/en/operations/support/support-security-issues.md
+++ b/daprdocs/content/en/operations/support/support-security-issues.md
@@ -52,7 +52,7 @@ The people who should have access to read your security report are listed in [`m
   code which allows the issue to be reproduced. Explain why you believe this to be a security issue in Dapr.
2. Put that information into an email. Use a descriptive title.
-3. Send the email to [Dapr Maintainers (dapr@dapr.io)](mailto:dapr@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE)
+3. Send an email to [Security (security@dapr.io)](mailto:security@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE)

## Response
diff --git a/daprdocs/content/en/reference/api/conversation_api.md b/daprdocs/content/en/reference/api/conversation_api.md
new file mode 100644
index 000000000..366625006
--- /dev/null
+++ b/daprdocs/content/en/reference/api/conversation_api.md
@@ -0,0 +1,74 @@
+---
+type: docs
+title: "Conversation API reference"
+linkTitle: "Conversation API"
+description: "Detailed documentation on the conversation API"
+weight: 1400
+---
+
+{{% alert title="Alpha" color="primary" %}}
+The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
+{{% /alert %}}
+
+Dapr provides an API to interact with Large Language Models (LLMs) and enables critical performance and security functionality with features like prompt caching and PII data obfuscation.
+
+## Converse
+
+This endpoint lets you converse with LLMs.
+
+```
+POST /v1.0-alpha1/conversation/<llm-name>/converse
+```
+
+### URL parameters
+
+| Parameter | Description |
+| --------- | ----------- |
+| `llm-name` | The name of the LLM component.
[See a list of all available conversation components.]({{< ref supported-conversation >}})
+
+### Request body
+
+| Field | Description |
+| --------- | ----------- |
+| `conversationContext` | Optional identifier of an existing conversation, used to carry context across multiple requests. |
+| `inputs` | An array of input strings (prompts) to send to the LLM. |
+| `parameters` | Optional key/value parameters to pass through to the LLM component. |
+
+
+### Request content
+
+```json
+REQUEST = {
+  "inputs": ["what is Dapr", "Why use Dapr"],
+  "parameters": {}
+}
+```
+
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`202` | Accepted
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+```json
+RESPONSE = {
+  "outputs": [
+    {
+      "result": "Dapr is a distributed application runtime ...",
+      "parameters": {}
+    },
+    {
+      "result": "Dapr can help developers ...",
+      "parameters": {}
+    }
+  ]
+}
+```
+
+## Next steps
+
+[Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/api/cryptography_api.md b/daprdocs/content/en/reference/api/cryptography_api.md
index 336088f23..c0c482427 100644
--- a/daprdocs/content/en/reference/api/cryptography_api.md
+++ b/daprdocs/content/en/reference/api/cryptography_api.md
@@ -20,7 +20,7 @@ This endpoint lets you encrypt a value provided as a byte array using a specifie

### HTTP Request

```
-PUT http://localhost:<daprPort>/v1.0/crypto/<name>/encrypt
+PUT http://localhost:<daprPort>/v1.0-alpha1/crypto/<name>/encrypt
```

#### URL Parameters

@@ -59,7 +59,7 @@ returns an array of bytes with the encrypted payload.

### Examples

```shell
-curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
+curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/encrypt \
  -X PUT \
  -H "dapr-key-name: myCryptoKey" \
  -H "dapr-key-wrap-algorithm: aes-gcm" \
@@ -81,7 +81,7 @@ This endpoint lets you decrypt a value provided as a byte array using a specifie

#### HTTP Request

```
-PUT curl http://localhost:3500/v1.0/crypto/<name>/decrypt
+PUT http://localhost:3500/v1.0-alpha1/crypto/<name>/decrypt
```

#### URL Parameters

@@ -116,7 +116,7 @@ returns an array of bytes representing the decrypted payload.

### Examples

```bash
-curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
+curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/decrypt \
  -X PUT
  -H "dapr-key-name: myCryptoKey"\
  -H "Content-Type: application/octet-stream" \
diff --git a/daprdocs/content/en/reference/api/error_codes.md b/daprdocs/content/en/reference/api/error_codes.md
deleted file mode 100644
index 19d3b8cc3..000000000
--- a/daprdocs/content/en/reference/api/error_codes.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-type: docs
-title: "Error codes returned by APIs"
-linkTitle: "Error codes"
-description: "Detailed reference of the Dapr API error codes"
-weight: 1400
----
-
-For http calls made to Dapr runtime, when an error is encountered, an error json is returned in http response body. The json contains an error code and an descriptive error message, e.g.
-```
-{
-    "errorCode": "ERR_STATE_GET",
-    "message": "Requested state key does not exist in state store."
-}
-```
-
-Following table lists the error codes returned by Dapr runtime:
-
-| Error Code | Description |
-|-----------------------------------|-------------|
-| ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means that actor is now hosted in some other service replica.
-| ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance.
-| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor.
-| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor.
-| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. -| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. -| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. -| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. -| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. -| ERR_ACTOR_STATE_GET | Error getting the state for an actor. -| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. -| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. -| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. -| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. -| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. -| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. -| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. -| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. -| ERR_STATE_GET | Error getting a state for state store. -| ERR_STATE_DELETE | Error deleting a state from state store. -| ERR_STATE_SAVE | Error saving a state in state store. -| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. -| ERR_MALFORMED_REQUEST | Error with a malformed request. -| ERR_DIRECT_INVOKE | Error in direct invocation. -| ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body. -| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. -| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. -| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. -| ERR_METADATA_GET | Error parsing the Metadata information. diff --git a/daprdocs/content/en/reference/api/jobs_api.md b/daprdocs/content/en/reference/api/jobs_api.md index 3a04ed1a9..454598676 100644 --- a/daprdocs/content/en/reference/api/jobs_api.md +++ b/daprdocs/content/en/reference/api/jobs_api.md @@ -32,7 +32,7 @@ At least one of `schedule` or `dueTime` must be provided, but they can also be p Parameter | Description --------- | ----------- `name` | Name of the job you're scheduling -`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data. +`data` | A JSON serialized value or object. `schedule` | An optional schedule at which the job is to be run. Details of the format are below. `dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601. `repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration. 
@@ -43,9 +43,13 @@ Parameter | Description Systemd timer style cron accepts 6 fields: seconds | minutes | hours | day of month | month | day of week -0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat +--- | --- | --- | --- | --- | --- +0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat +##### Example 1 "0 30 * * * *" - every hour on the half hour + +##### Example 2 "0 15 3 * * *" - every day at 03:15 Period string expressions: @@ -63,13 +67,8 @@ Entry | Description | Equivalent ```json { - "job": { - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "\"someData\"" - }, - "dueTime": "30s" - } + "data": "some data", + "dueTime": "30s" } ``` @@ -88,20 +87,14 @@ The following example curl command creates a job, naming the job `jobforjabba` a ```bash $ curl -X POST \ http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \ - -H "Content-Type: application/json" + -H "Content-Type: application/json" \ -d '{ - "job": { - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "Running spice" - }, - "schedule": "@every 1m", - "repeats": 5 - } + "data": "{\"value\":\"Running spice\"}", + "schedule": "@every 1m", + "repeats": 5 }' ``` - ## Get job data Get a job from its name. @@ -137,10 +130,7 @@ $ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Typ "name": "jobforjabba", "schedule": "@every 1m", "repeats": 5, - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "Running spice" - } + "data": 123 } ``` ## Delete a job diff --git a/daprdocs/content/en/reference/api/pubsub_api.md b/daprdocs/content/en/reference/api/pubsub_api.md index f53677108..8fbf0f615 100644 --- a/daprdocs/content/en/reference/api/pubsub_api.md +++ b/daprdocs/content/en/reference/api/pubsub_api.md @@ -302,7 +302,7 @@ other | warning is logged and all messages to be retried ## Message envelope -Dapr pub/sub adheres to version 1.0 of CloudEvents. +Dapr pub/sub adheres to [version 1.0 of CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/spec.md). ## Related links diff --git a/daprdocs/content/en/reference/api/workflow_api.md b/daprdocs/content/en/reference/api/workflow_api.md index 91a19d864..c9dddaa61 100644 --- a/daprdocs/content/en/reference/api/workflow_api.md +++ b/daprdocs/content/en/reference/api/workflow_api.md @@ -6,10 +6,6 @@ description: "Detailed documentation on the workflow API" weight: 300 --- -{{% alert title="Note" color="primary" %}} -Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}). -{{% /alert %}} - Dapr provides users with the ability to interact with workflows and comes with a built-in `dapr` component. ## Start workflow request @@ -17,7 +13,7 @@ Dapr provides users with the ability to interact with workflows and comes with a Start a workflow instance with the given name and optionally, an instance ID. ``` -POST http://localhost:3500/v1.0-beta1/workflows///start[?instanceID=] +POST http://localhost:3500/v1.0/workflows///start[?instanceID=] ``` Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes. @@ -57,7 +53,7 @@ The API call will provide a response similar to this: Terminate a running workflow instance with the given name and instance ID. 
``` -POST http://localhost:3500/v1.0-beta1/workflows///terminate +POST http://localhost:3500/v1.0/workflows///terminate ``` {{% alert title="Note" color="primary" %}} @@ -91,7 +87,7 @@ This API does not return any content. For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance. ``` -POST http://localhost:3500/v1.0-beta1/workflows///raiseEvent/ +POST http://localhost:3500/v1.0/workflows///raiseEvent/ ``` {{% alert title="Note" color="primary" %}} @@ -124,7 +120,7 @@ None. Pause a running workflow instance. ``` -POST http://localhost:3500/v1.0-beta1/workflows///pause +POST http://localhost:3500/v1.0/workflows///pause ``` ### URL parameters @@ -151,7 +147,7 @@ None. Resume a paused workflow instance. ``` -POST http://localhost:3500/v1.0-beta1/workflows///resume +POST http://localhost:3500/v1.0/workflows///resume ``` ### URL parameters @@ -178,7 +174,7 @@ None. Purge the workflow state from your state store with the workflow's instance ID. ``` -POST http://localhost:3500/v1.0-beta1/workflows///purge +POST http://localhost:3500/v1.0/workflows///purge ``` {{% alert title="Note" color="primary" %}} @@ -209,7 +205,7 @@ None. Get information about a given workflow instance. ``` -GET http://localhost:3500/v1.0-beta1/workflows// +GET http://localhost:3500/v1.0/workflows// ``` ### URL parameters diff --git a/daprdocs/content/en/reference/arguments-annotations-overview.md b/daprdocs/content/en/reference/arguments-annotations-overview.md index 6a0c2f60b..72d50b01c 100644 --- a/daprdocs/content/en/reference/arguments-annotations-overview.md +++ b/daprdocs/content/en/reference/arguments-annotations-overview.md @@ -16,15 +16,17 @@ This table is meant to help users understand the equivalent options for running | `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID | | `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on | | `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` | -| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. | +| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded | | `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration resource to use | | `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane | -| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") | -| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API | -| `--dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB | -| `--dapr-http-read-buffer-size` | --dapr-http-read-buffer-size | | `dapr.io/http-read-buffer-size` | Increasing max size of http header read buffer in KB to handle when sending multi-KB headers. The default 4 KB. 
When sending bigger than default 4KB http headers, you should set this to a larger value, for example 16 (for 16KB) |
+| `--dapr-grpc-port` | `--dapr-grpc-port` | | `dapr.io/grpc-port` | Sets the Dapr API gRPC port (default `50001`); all cluster services must use the same port for communication |
+| `--dapr-http-port` | `--dapr-http-port` | | not supported | HTTP port for the Dapr API to listen on (default `3500`) |
+| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | **Deprecated** in favor of `--max-body-size`. Increases the request max body size to handle large file uploads using http and grpc protocols. Default is `4` MB |
+| `--max-body-size` | not supported | | `dapr.io/max-body-size` | Increases the request max body size to handle large file uploads using http and grpc protocols. Set the value using size units (e.g., `16Mi` for 16MB). The default is `4Mi` |
+| `--dapr-http-read-buffer-size` | `--dapr-http-read-buffer-size` | | `dapr.io/http-read-buffer-size` | **Deprecated** in favor of `--read-buffer-size`. Increases the max size of the http header read buffer in KB to support larger header values, for example `16` to support headers up to 16KB. Default is `16` for 16KB |
+| `--read-buffer-size` | not supported | | `dapr.io/read-buffer-size` | Increases the max size of the http header read buffer in KB to support larger header values. Set the value using size units; for example, `32Ki` supports headers up to 32KB. Default is `4` for 4KB |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is daprio/daprd:latest. The Dapr sidecar uses this image instead of the latest default image. Use this when building your own custom image of Dapr and/or [using an alternative stable Dapr image]({{< ref "support-release-policy.md#build-variations" >}}) |
-| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
+| `--internal-grpc-port` | not supported | | `dapr.io/internal-grpc-port` | Sets the internal Dapr gRPC port (default `50002`); all cluster services must use the same port for communication |
| `--enable-metrics` | not supported | | configuration spec | Enable [prometheus metric]({{< ref prometheus >}}) (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | [Enable profiling]({{< ref profiling-debugging >}}) |
@@ -32,11 +34,11 @@ This table is meant to help users understand the equivalent options for running
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs [logs in JSON format]({{< ref logs >}}). Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the [log level]({{< ref logs-troubleshooting >}}) for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | [Enables API logging]({{< ref "api-logs-troubleshooting.md#configuring-api-logging-in-kubernetes" >}}) for the Dapr sidecar |
-| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}).
A valid value is any number larger than `0`|
+| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`. Default is `-1`, meaning no concurrency limit. |
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime hosting option mode for Dapr, either `"standalone"` or `"kubernetes"` (default `"standalone"`). [Learn more.]({{< ref hosting >}}) |
-| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` |
-| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Scheduler server. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` |
+| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers.

When no annotation is set, the default value is set by the Sidecar Injector.

When the annotation is set and the value is a single space (`' '`) or empty, the sidecar does not connect to the Placement server. This can be used when there are no actors running in the sidecar.

When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` | +| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers.

When no annotation is set, the default value is set by the Sidecar Injector.

When the annotation is set and the value is a single space (`' '`) or empty, the sidecar does not connect to the Scheduler server.

When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` | | `--actors-service` | not supported | | not supported | Configuration for the service that offers actor placement information. The format is `:
`. For example, setting this value to `placement:127.0.0.1:50057,127.0.0.1:50058` is an alternative to using the `--placement-host-address` flag. | | `--reminders-service` | not supported | | not supported | Configuration for the service that enables actor reminders. The format is `[:
]`. Currently, the only supported value is `"default"` (which is also the default value), which uses the built-in reminders subsystem in the Dapr sidecar. |
| `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default `7777`) |
@@ -67,6 +69,7 @@ This table is meant to help users understand the equivalent options for running
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`|
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`|
| not supported | not supported | | `dapr.io/env` | List of environment variables to be injected into the sidecar. Strings consisting of key=value pairs separated by a comma.|
+| not supported | not supported | | `dapr.io/env-from-secret` | List of environment variables to be injected into the sidecar from secrets. Strings consisting of `"key=secret-name:secret-key"` pairs, separated by a comma. |
| not supported | not supported | | `dapr.io/volume-mounts` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-only mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
| not supported | not supported | | `dapr.io/volume-mounts-rw` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-write mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
| `--disable-builtin-k8s-secret-store` | not supported | | `dapr.io/disable-builtin-k8s-secret-store` | Disables BuiltIn Kubernetes secret store. Default value is false. See [Kubernetes secret store component]({{< ref "kubernetes-secret-store.md" >}}) for details. |
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md b/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md
index 4baea225c..008d10a7e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md
@@ -63,6 +63,10 @@ This component supports **output binding** with the following operations:
- `delete` : [Delete blob](#delete-blob)
- `list`: [List blobs](#list-blobs)
+The Blob storage component's **input binding** triggers and pushes events using [Azure Event Grid]({{< ref eventgrid.md >}}).
+
+Refer to the [Reacting to Blob storage events](https://learn.microsoft.com/azure/storage/blobs/storage-blob-event-overview) guide for setup instructions and more information.
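+
+As an illustrative sketch only (the component name, endpoint, and Azure resource IDs below are hypothetical, and the required Microsoft Entra ID credential fields are omitted; see the [Event Grid binding]({{< ref eventgrid.md >}}) spec for the full list of fields), an Event Grid binding scoped to the storage account that raises the blob events could look like:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: blob-events   # hypothetical component name
+spec:
+  type: bindings.azure.eventgrid
+  version: v1
+  metadata:
+    # Public HTTPS endpoint where Event Grid delivers the blob events
+    - name: subscriberEndpoint
+      value: "https://mydapr.example.com/api/events"
+    # Port used for the Event Grid subscription handshake
+    - name: handshakePort
+      value: "9000"
+    # Scope the subscription to the storage account emitting the events (IDs are placeholders)
+    - name: scope
+      value: "/subscriptions/{subscriptionId}/resourceGroups/{group}/providers/Microsoft.Storage/storageAccounts/{account}"
+```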
+ ### Create blob To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body: diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md b/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md index 9e66107b5..f970e8cce 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md @@ -90,6 +90,21 @@ This component supports **output binding** with the following operations: - `create`: publishes a message on the Event Grid topic +## Receiving events + +You can use the Event Grid binding to receive events from a variety of sources and actions. [Learn more about all of the available event sources and handlers that work with Event Grid.](https://learn.microsoft.com/azure/event-grid/overview) + +In the following table, you can find the list of Dapr components that can raise events. + +| Event sources | Dapr components | +| ------------- | --------------- | +| [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/) | [Azure Blob Storage binding]({{< ref blobstorage.md >}})
[Azure Blob Storage state store]({{< ref setup-azure-blobstorage.md >}}) | +| [Azure Cache for Redis](https://learn.microsoft.com/azure/azure-cache-for-redis/cache-overview) | [Redis binding]({{< ref redis.md >}})
[Redis pub/sub]({{< ref setup-redis-pubsub.md >}}) | +| [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/event-hubs-about) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}})
[Azure Event Hubs binding]({{< ref eventhubs.md >}}) | +| [Azure IoT Hub](https://learn.microsoft.com/azure/iot-hub/iot-concepts-and-iot-hub) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}})
[Azure Event Hubs binding]({{< ref eventhubs.md >}}) | +| [Azure Service Bus](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) | [Azure Service Bus binding]({{< ref servicebusqueues.md >}})
[Azure Service Bus pub/sub topics]({{< ref setup-azure-servicebus-topics.md >}}) and [queues]({{< ref setup-azure-servicebus-queues.md >}}) |
+| [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) | [SignalR binding]({{< ref signalr.md >}}) |
+
## Microsoft Entra ID credentials

The Azure Event Grid binding requires a Microsoft Entra ID application and service principal for two reasons:
@@ -142,7 +157,7 @@ Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
-### Testing locally
+## Testing locally

- Install [ngrok](https://ngrok.com/download)
- Run locally using a custom port, for example `9000`, for handshakes

```bash
ngrok http --host-header=localhost 9000

dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```

-### Testing on Kubernetes
+## Testing on Kubernetes

Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md b/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
index ee005b4dd..be8536f72 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
@@ -36,6 +36,8 @@ spec:
    value: "namespace"
  - name: enableEntityManagement
    value: "false"
+ - name: enableInOrderMessageDelivery
+   value: "false"
  # The following four properties are needed only if enableEntityManagement is set to true
  - name: resourceGroupName
    value: "test-rg"
@@ -71,7 +73,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `eventHub` | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Microsoft Entra ID authentication or if the connection string doesn't contain an `EntityPath` value | `mytopic` |
| `connectionString` | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace.
* Mutually exclusive with the `eventHubNamespace` field.
* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"` | `eventHubNamespace` | Y* | Input/Output | The Event Hub Namespace name.
* Mutually exclusive with the `connectionString` field.
* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"` -| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"` +| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"` +| `enableInOrderMessageDelivery` | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"` | `resourceGroupName` | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"` | `subscriptionID` | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"` | `partitionCount` | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"` diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md index addfba98a..413e1893f 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md @@ -63,6 +63,8 @@ spec: value: true - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. value: 5m + - name: escapeHeaders # Optional. + value: false ``` ## Spec metadata fields @@ -99,6 +101,7 @@ spec: | `consumerFetchDefault` | N | Input/Output | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` | | `heartbeatInterval` | N | Input | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to `"3s"`. | `"5s"` | | `sessionTimeout` | N | Input | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to `"10s"`. | `"20s"` | +| `escapeHeaders` | N | Input | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` | #### Note The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka. 
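+
+As a minimal sketch of that Event Hubs setup (the namespace, topic, and connection string below are hypothetical placeholders; in practice, pull the connection string from a secret store):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: eventhubs-kafka-binding   # hypothetical name
+spec:
+  type: bindings.kafka
+  version: v1
+  metadata:
+    # Kafka-compatible endpoint of the Event Hubs namespace
+    - name: brokers
+      value: "mynamespace.servicebus.windows.net:9093"
+    - name: topics
+      value: "mytopic"
+    - name: consumerGroup
+      value: "group1"
+    - name: authType
+      value: "password"
+    # Event Hubs' Kafka endpoint expects the literal username "$ConnectionString"
+    - name: saslUsername
+      value: "$ConnectionString"
+    - name: saslPassword
+      value: "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName={policy};SharedAccessKey={key}"
+    # Required when using Azure Event Hubs with Kafka, per the note above
+    - name: version
+      value: "1.0.0"
+```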
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md b/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md
index 7d8f4104b..a77814b8e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md
@@ -56,23 +56,27 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
### Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
-The user specified in the connection string must be an AWS IAM enabled user granted the `rds_iam` database role.
+The user specified in the connection string must already exist in the database and must be an AWS IAM enabled user granted the `rds_iam` database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before its expiration time with AWS.

| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
-| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"`
-| `accessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"`
-| `secretKey` | Y | The secret key associated with the access key. | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"`
+| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
+| `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
+| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
+| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
+| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
+| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |

### Other metadata options

-| Field | Required | Binding support |Details | Example |
+| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-----|---|---------|
-| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"`
-| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
-| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferrable to use `exec` or `simple_protocol`. | `"simple_protocol"`
+| `timeout` | N | Output | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
+| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` |
+| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` |
+| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`.
| `"simple_protocol"` | ### URL format diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md b/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md index 3a9093666..4fc8dbb1b 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md @@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | `redisUsername` | N | Output | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `"username"` | | `useEntraID` | N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The `redisHost` name must be specified in the form of `"server:port"`
  • TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#create-a-redis-instance" >}}) | `"true"`, `"false"` |
| `enableTLS` | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
+| `clientCert` | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` |
+| `clientKey` | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` |
| `failover` | N | Output | Property to enable failover configuration. Needs `sentinelMasterName` to be set. Defaults to `"false"` | `"true"`, `"false"`
| `sentinelMasterName` | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| `redeliverInterval` | N | Output | The interval between checking for pending messages for redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
index 2de9b95a7..d91818a1d 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
@@ -44,6 +44,8 @@ spec:
    value: ""
  - name: insecureSSL
    value: ""
+ - name: storageClass
+   value: ""
```

{{% alert title="Warning" color="warning" %}}
@@ -65,6 +67,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `encodeBase64` | N | Output | Configuration to encode base64 file content before returning the content. (In case of opening a file with binary content). `"true"` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `"false"` | `"true"`, `"false"` |
| `disableSSL` | N | Output | Allows connecting to non-`https://` endpoints. Defaults to `"false"` | `"true"`, `"false"` |
| `insecureSSL` | N | Output | When connecting to `https://` endpoints, accepts invalid or self-signed certificates. Defaults to `"false"` | `"true"`, `"false"` |
+| `storageClass` | N | Output | The desired storage class for objects during the create operation. [Valid AWS storage class types can be found here](https://aws.amazon.com/s3/storage-classes/) | `STANDARD_IA` |

{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
@@ -165,10 +168,20 @@ To perform a create operation, invoke the AWS S3 binding with a `POST` method an
```json
{
  "operation": "create",
-  "data": "YOUR_CONTENT"
+  "data": "YOUR_CONTENT",
+  "metadata": {
+    "storageClass": "STANDARD_IA"
+  }
}
```
+For example, you can provide a storage class while using the `create` operation with a Linux curl command:
+
+```bash
+curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA" } }' \
+http://localhost:/v1.0/bindings/
+```
+
#### Share object with a presigned URL

To presign an object with a specified time-to-live, use the `presignTTL` metadata key on a `create` request.
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md b/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
new file mode 100644
index 000000000..a0e356e54
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
@@ -0,0 +1,231 @@
+---
+type: docs
+title: "SFTP binding spec"
+linkTitle: "SFTP"
+description: "Detailed documentation on the Secure File Transfer Protocol (SFTP) binding component"
+aliases:
+  - "/operations/components/setup-bindings/supported-bindings/sftp/"
+---
+
+## Component format
+
+To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{< ref bindings-overview.md >}}) on how to create and apply a binding configuration.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name:
+spec:
+  type: bindings.sftp
+  version: v1
+  metadata:
+  - name: rootPath
+    value: ""
+  - name: address
+    value: ""
+  - name: username
+    value: ""
+  - name: password
+    value: "*****************"
+  - name: privateKey
+    value: "*****************"
+  - name: privateKeyPassphrase
+    value: "*****************"
+  - name: hostPublicKey
+    value: "*****************"
+  - name: knownHostsFile
+    value: ""
+  - name: insecureIgnoreHostKey
+    value: ""
+```
+
+## Spec metadata fields
+
+| Field | Required | Binding support | Details | Example |
+|--------------------|:--------:|------------|-----|---------|
+| `rootPath` | Y | Output | Root path for default working directory | `"/path"` |
+| `address` | Y | Output | Address of SFTP server | `"localhost:22"` |
+| `username` | Y | Output | Username for authentication | `"username"` |
+| `password` | N | Output | Password for username/password authentication | `"password"` |
+| `privateKey` | N | Output | Private key for public key authentication |
"\|-
-----BEGIN OPENSSH PRIVATE KEY-----
*****************
-----END OPENSSH PRIVATE KEY-----"
|
+| `privateKeyPassphrase` | N | Output | Private key passphrase for public key authentication | `"passphrase"` |
+| `hostPublicKey` | N | Output | Host public key for host validation | `"ecdsa-sha2-nistp256 *** root@openssh-server"` |
+| `knownHostsFile` | N | Output | Known hosts file for host validation | `"/path/file"` |
+| `insecureIgnoreHostKey` | N | Output | Allows skipping host validation. Defaults to `"false"` | `"true"`, `"false"` |
+
+## Binding support
+
+This component supports **output binding** with the following operations:
+
+- `create` : [Create file](#create-file)
+- `get` : [Get file](#get-file)
+- `list` : [List files](#list-files)
+- `delete` : [Delete file](#delete-file)
+
+### Create file
+
+To perform a create file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "create",
+  "data": "",
+  "metadata": {
+    "fileName": ""
+  }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
+        http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+The response body contains the following JSON:
+
+```json
+{
+  "fileName": ""
+}
+```
+
+### Get file
+
+To perform a get file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "get",
+  "metadata": {
+    "fileName": ""
+  }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d "{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" } }" http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
+        http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+The response body contains the value stored in the file.
+
+### List files
+
+To perform a list files operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "list"
+}
+```
+
+If you only want to list the files beneath a particular directory below the `rootPath`, specify the relative directory name as the `fileName` in the metadata.
+
+```json
+{
+  "operation": "list",
+  "metadata": {
+    "fileName": "my/cool/directory"
+  }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d "{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" } }" http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
+        http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+The response is a JSON array of file names.
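+
+As a concrete sketch (the binding name, Dapr port, and returned file names below are hypothetical):
+
+```bash
+# List the files beneath my/cool/directory on the SFTP server
+curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
+      http://localhost:3500/v1.0/bindings/my-sftp
+
+# The body of a successful response is a JSON array of names, for example:
+# ["file1.txt","file2.txt"]
+```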
+
+### Delete file
+
+To perform a delete file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "delete",
+  "metadata": {
+    "fileName": "myfile"
+  }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d "{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" } }" http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+  {{% codetab %}}
+  ```bash
+  curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
+        http://localhost:/v1.0/bindings/
+  ```
+  {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+An HTTP 204 (No Content) and empty body are returned if successful.
+
+## Related links
+
+- [Basic schema for a Dapr component]({{< ref component-schema >}})
+- [Bindings building block]({{< ref bindings >}})
+- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
+- [Bindings API reference]({{< ref bindings_api.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md
index a846b6a23..ea4868fe3 100644
--- a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md
+++ b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md
@@ -79,11 +79,28 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` |
| `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` |
+### Authenticate using AWS IAM
+
+Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
+The user specified in the connection string must already exist in the database and must be an AWS IAM enabled user granted the `rds_iam` database role.
+Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
+The AWS authentication token will be dynamically rotated before its expiration time with AWS.
+
+| Field | Required | Details | Example |
+|--------|:--------:|---------|---------|
+| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
+| `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
+| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
+| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
+| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
+| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
+
### Other metadata options

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `table` | Y | Table name for configuration information, must be lowercased. | `configtable`
+| `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
| `maxConns` | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"`
| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
| `queryExecMode` | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md
index caf9d8a44..28965cb0e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md
+++ b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md
@@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisPassword | N | Output | The Redis password | `"password"` |
| redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and that you have created the ACL rule correctly. | `"username"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS.
Defaults to `"false"` | `"true"`, `"false"` | +| clientCert | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| clientKey | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | failover | N | Output | Property to enabled failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"` | sentinelMasterName | N | Output | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"` | redisType | N | Output | The type of Redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"` diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/_index.md b/daprdocs/content/en/reference/components-reference/supported-conversation/_index.md new file mode 100644 index 000000000..179162b3b --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/_index.md @@ -0,0 +1,12 @@ +--- +type: docs +title: "Conversation component specs" +linkTitle: "Conversation" +weight: 9000 +description: The supported conversation components that interface with Dapr +no_list: true +--- + +{{< partial "components/description.html" >}} + +{{< partial "components/conversation.html" >}} \ No newline at end of file diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/anthropic.md b/daprdocs/content/en/reference/components-reference/supported-conversation/anthropic.md new file mode 100644 index 000000000..334b7cb99 --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/anthropic.md @@ -0,0 +1,42 @@ +--- +type: docs +title: "Anthropic" +linkTitle: "Anthropic" +description: Detailed information on the Anthropic conversation component +--- + +## Component format + +A Dapr `conversation.yaml` component file has the following structure: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: anthropic +spec: + type: conversation.anthropic + metadata: + - name: key + value: "mykey" + - name: model + value: claude-3-5-sonnet-20240620 + - name: cacheTTL + value: 10m +``` + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Spec metadata fields + +| Field | Required | Details | Example | +|--------------------|:--------:|---------|---------| +| `key` | Y | API key for Anthropic. | `"mykey"` | +| `model` | N | The Anthropic LLM to use. Defaults to `claude-3-5-sonnet-20240620` | `claude-3-5-sonnet-20240620` | +| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. 
| `10m` | + +## Related links + +- [Conversation API overview]({{< ref conversation-overview.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md b/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md new file mode 100644 index 000000000..759e37013 --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md @@ -0,0 +1,42 @@ +--- +type: docs +title: "AWS Bedrock" +linkTitle: "AWS Bedrock" +description: Detailed information on the AWS Bedrock conversation component +--- + +## Component format + +A Dapr `conversation.yaml` component file has the following structure: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: awsbedrock +spec: + type: conversation.aws.bedrock + metadata: + - name: endpoint + value: "http://localhost:4566" + - name: model + value: amazon.titan-text-express-v1 + - name: cacheTTL + value: 10m +``` + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Spec metadata fields + +| Field | Required | Details | Example | +|--------------------|:--------:|---------|---------| +| `endpoint` | N | AWS endpoint for the component to use and connect to emulators. Not recommended for production AWS use. | `http://localhost:4566` | +| `model` | N | The LLM to use. Defaults to Bedrock's default provider model from Amazon. | `amazon.titan-text-express-v1` | +| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` | + +## Related links + +- [Conversation API overview]({{< ref conversation-overview.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/hugging-face.md b/daprdocs/content/en/reference/components-reference/supported-conversation/hugging-face.md new file mode 100644 index 000000000..6429c84e8 --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/hugging-face.md @@ -0,0 +1,42 @@ +--- +type: docs +title: "Huggingface" +linkTitle: "Huggingface" +description: Detailed information on the Huggingface conversation component +--- + +## Component format + +A Dapr `conversation.yaml` component file has the following structure: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: huggingface +spec: + type: conversation.huggingface + metadata: + - name: key + value: mykey + - name: model + value: meta-llama/Meta-Llama-3-8B + - name: cacheTTL + value: 10m +``` + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Spec metadata fields + +| Field | Required | Details | Example | +|--------------------|:--------:|---------|---------| +| `key` | Y | API key for Huggingface. | `mykey` | +| `model` | N | The Huggingface LLM to use. Defaults to `meta-llama/Meta-Llama-3-8B`. | `meta-llama/Meta-Llama-3-8B` | +| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. 
| `10m` | + +## Related links + +- [Conversation API overview]({{< ref conversation-overview.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/mistral.md b/daprdocs/content/en/reference/components-reference/supported-conversation/mistral.md new file mode 100644 index 000000000..57504e56b --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/mistral.md @@ -0,0 +1,42 @@ +--- +type: docs +title: "Mistral" +linkTitle: "Mistral" +description: Detailed information on the Mistral conversation component +--- + +## Component format + +A Dapr `conversation.yaml` component file has the following structure: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: mistral +spec: + type: conversation.mistral + metadata: + - name: key + value: mykey + - name: model + value: open-mistral-7b + - name: cacheTTL + value: 10m +``` + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Spec metadata fields + +| Field | Required | Details | Example | +|--------------------|:--------:|---------|---------| +| `key` | Y | API key for Mistral. | `mykey` | +| `model` | N | The Mistral LLM to use. Defaults to `open-mistral-7b`. | `open-mistral-7b` | +| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` | + +## Related links + +- [Conversation API overview]({{< ref conversation-overview.md >}}) \ No newline at end of file diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md b/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md new file mode 100644 index 000000000..7148685b1 --- /dev/null +++ b/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md @@ -0,0 +1,42 @@ +--- +type: docs +title: "OpenAI" +linkTitle: "OpenAI" +description: Detailed information on the OpenAI conversation component +--- + +## Component format + +A Dapr `conversation.yaml` component file has the following structure: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: openai +spec: + type: conversation.openai + metadata: + - name: key + value: mykey + - name: model + value: gpt-4-turbo + - name: cacheTTL + value: 10m +``` + +{{% alert title="Warning" color="warning" %}} +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}). +{{% /alert %}} + +## Spec metadata fields + +| Field | Required | Details | Example | +|--------------------|:--------:|---------|---------| +| `key` | Y | API key for OpenAI. | `mykey` | +| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` | +| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. 
| `10m` |
+
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
index ff00c0137..2e2962d68 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
@@ -11,6 +11,11 @@ no_list: true
The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. [Learn how to set up different brokers for Dapr publish and subscribe.]({{< ref setup-pubsub.md >}})
+{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
+Each pub/sub component has its own built-in retry behaviors. Before explicitly applying a [Dapr resiliency policy]({{< ref "policies.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
+{{% /alert %}}
+
+
{{< partial "components/description.html" >}}

{{< partial "components/pubsub.html" >}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
index cafcee537..503500ca8 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
@@ -53,6 +53,12 @@ spec:
    value: 2.0.0
  - name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
    value: "true"
+ - name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
+   value: 1
+ - name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
+   value: 2097152
+ - name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
+   value: 512
  - name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
    value: http://localhost:8081
  - name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
@@ -63,6 +69,8 @@ spec:
    value: true
  - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
    value: 5m
+ - name: escapeHeaders # Optional.
+   value: false
```

@@ -96,12 +104,12 @@ spec:
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`.
Defaults to `"openid"` | `"openid,kafka-prod"` | | oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` | -| awsRegion | N | The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` | -| awsAccessKey | N | AWS access key associated with an IAM account. | `"accessKey"` -| awsSecretKey | N | The secret key associated with the access key. | `"secretKey"` -| awsSessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"` -| awsIamRoleArn | N | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"` -| awsStsSessionName | N | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"` +| awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` | +| awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account. | `"accessKey"` +| awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key. | `"secretKey"` +| awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"` +| awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"` +| awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionName' instead. Represents the session name for assuming a role. | `"DaprDefaultSession"` | schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` | | schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` | | schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` | @@ -109,9 +117,12 @@ spec: | schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | `5m` | | clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to `9m`. 
| `"4m"` | | clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | `"4m"` | +| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is `1`, as `0` causes the consumer to spin when no messages are available. Equivalent to the JVM's `fetch.min.bytes`. | `"2"` | | consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` | +| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to `256`. | `"512"` | | heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` | | sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` | +| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` | The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the tls information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component. @@ -321,7 +332,7 @@ spec: Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses AWS SDK to generate auth tokens to authenticate. {{% alert title="Note" color="primary" %}} -The only required metadata field is `awsRegion`. If no `awsAccessKey` and `awsSecretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster. +The only required metadata field is `region`. If no `acessKey` and `secretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster. {{% /alert %}} ```yaml @@ -341,18 +352,18 @@ spec: value: "my-dapr-app-id" - name: authType # Required. value: "awsiam" - - name: awsRegion # Required. + - name: region # Required. value: "us-west-1" - - name: awsAccessKey # Optional. + - name: accessKey # Optional. value: - - name: awsSecretKey # Optional. + - name: secretKey # Optional. value: - - name: awsSessionToken # Optional. + - name: sessionToken # Optional. value: - - name: awsIamRoleArn # Optional. + - name: assumeRoleArn # Optional. value: "arn:aws:iam::123456789:role/mskRole" - - name: awsStsSessionName # Optional. - value: "MSKSASLDefaultSession" + - name: sessionName # Optional. + value: "DaprDefaultSession" ``` ### Communication using TLS @@ -457,7 +468,7 @@ Apache Kafka supports the following bulk metadata options: When invoking the Kafka pub/sub, its possible to provide an optional partition key by using the `metadata` query param in the request url. -The param name is `partitionKey`. 
+The param name can either be `partitionKey` or `__key`. Example: @@ -473,7 +484,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti ### Message headers -All other metadata key/value pairs (that are not `partitionKey`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message. +All other metadata key/value pairs (that are not `partitionKey` or `__key`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message. ```shell curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1 \ @@ -484,6 +495,85 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla } }' ``` +### Kafka pub/sub special message headers received on the consumer side + +When consuming messages, special message metadata is automatically passed as headers. These are: +- `__key`: the message key, if available +- `__topic`: the topic for the message +- `__partition`: the partition number for the message +- `__offset`: the offset of the message in the partition +- `__timestamp`: the timestamp for the message + +You can access them within the consumer endpoint as follows: +{{< tabs "Python (FastAPI)" >}} + +{{% codetab %}} +

```python
from typing import Annotated

from fastapi import APIRouter, Body, FastAPI, Header, Response, status

app = FastAPI()

router = APIRouter()


@router.get('/dapr/subscribe')
def subscribe():
    subscriptions = [{'pubsubname': 'pubsub',
                      'topic': 'my-topic',
                      'route': 'my_topic_subscriber',
                      }]
    return subscriptions

@router.post('/my_topic_subscriber')
def my_topic_subscriber(
        # The special Kafka metadata arrives as HTTP headers; map them by alias.
        key: Annotated[str, Header(alias="__key")],
        offset: Annotated[int, Header(alias="__offset")],
        event_data=Body()):
    print(f"key={key} - offset={offset} - data={event_data}", flush=True)
    return Response(status_code=status.HTTP_200_OK)

app.include_router(router)
```

+{{% /codetab %}} +{{< /tabs >}} + +## Receiving message headers with special characters + +The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors. +HTTP header values must follow specifications, making some characters not allowed. [Learn more about the protocols](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2). +In this case, you can enable the `escapeHeaders` configuration setting, which uses URL escaping to encode header values on the consumer side. + +{{% alert title="Note" color="primary" %}} +When using this setting, the received message headers are URL escaped, and you need to URL "un-escape" them to get the original values. +{{% /alert %}} + +Set `escapeHeaders` to `true` to enable URL escaping.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub-escape-headers
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
+ value: "none" + - name: escapeHeaders + value: "true" +``` ## Avro Schema Registry serialization/deserialization You can configure pub/sub to publish or consume data encoded using [Avro binary serialization](https://avro.apache.org/docs/), leveraging an [Apache Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/) (for example, [Confluent Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/), [Apicurio](https://www.apicur.io/registry/)). @@ -597,6 +687,7 @@ To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](ht {{< /tabs >}} + ## Related links - [Basic schema for a Dapr component]({{< ref component-schema >}}) - Read [this guide]({{< ref "howto-publish-subscribe.md##step-1-setup-the-pubsub-component" >}}) for instructions on configuring pub/sub components diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md index 360bd6ef3..86865de5b 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md @@ -68,7 +68,8 @@ spec: # value: 5 # - name: concurrencyMode # Optional # value: "single" - + # - name: concurrencyLimit # Optional + # value: "0" ``` @@ -98,6 +99,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec | disableDeleteOnRetryLimit | N | When set to true, after retrying and failing of `messageRetryLimit` times processing a message, reset the message visibility timeout so that other consumers can try processing, instead of deleting the message from SQS (the default behvior). Default: `"false"` | `"true"`, `"false"` | assetsManagementTimeoutSeconds | N | Amount of time in seconds, for an AWS asset management operation, before it times out and cancelled. Asset management operations are any operations performed on STS, SNS and SQS, except message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: `5` | `0.5`, `10` | concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially (“single” message at a time), or concurrently (in “parallel”). Default: `"parallel"` | `"single"`, `"parallel"` +| concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when concurrencyMode is set to `"single"`. To avoid limiting the number of concurrent workers, set this to `0`. Default: `0` | `100` ### Additional info diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md index 713bdb1cb..73db174a0 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md @@ -33,6 +33,8 @@ spec: value: "channel1" - name: enableEntityManagement value: "false" + - name: enableInOrderMessageDelivery + value: "false" # The following four properties are needed only if enableEntityManagement is set to true - name: resourceGroupName value: "test-rg" @@ -65,11 +67,12 @@ The above example uses secrets as plain strings. 
It is recommended to use a secr | `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.
* Mutually exclusive with `eventHubNamespace` field.
* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"` | `eventHubNamespace` | Y* | The Event Hub Namespace name.
* Mutually exclusive with `connectionString` field.
* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"` | `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}}) +| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"` +| `enableInOrderMessageDelivery` | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"` | `storageAccountName` | Y | Storage account name to use for the checkpoint store. |`"myeventhubstorage"` | `storageAccountKey` | Y* | Storage account key for the checkpoint store account.
* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"` | `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey="` | `storageContainerName` | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"` -| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"` | `resourceGroupName` | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"` | `subscriptionID` | N | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"` | `partitionCount` | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"` diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md index 831f6aa72..cc357b5bc 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md @@ -83,8 +83,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10` | `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"` | `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10` -| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600` -| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Must be 300s or greater. Default set by server. | `10` +| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600` +| `maxDeliveryCount` | N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | `10` | `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30` | `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5` | `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. 
Default: `300` (5 minutes) | `600` diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md index 5686211ff..3e94c2fc7 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md @@ -38,6 +38,8 @@ spec: value: "true" - name: disableBatching value: "false" + - name: receiverQueueSize + value: "1000" - name: .jsonschema # sets a json schema validation for the configured topic value: | { @@ -78,6 +80,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec | namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: `"default"` | `"default"` | persistent | N | Pulsar supports two kinds of topics: [persistent](https://pulsar.apache.org/docs/en/concepts-architecture-overview#persistent-storage) and [non-persistent](https://pulsar.apache.org/docs/en/concepts-messaging/#non-persistent-topics). With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks. | disableBatching | N | Disable batching. When batching is enabled, the default batch delay is set to 10 ms and the default batch size is 1000 messages. Setting `disableBatching: true` makes the producer send messages individually. Default: `"false"` | `"true"`, `"false"`| +| receiverQueueSize | N | Sets the size of the consumer receiver queue. Controls how many messages can be accumulated by the consumer before it is explicitly called to read messages by Dapr. Default: `"1000"` | `"1000"` | | batchingMaxPublishDelay | N | batchingMaxPublishDelay sets the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses, or until batchingMaxMessages (see below) or batchingMaxSize (see below) is reached. There are two valid formats: a fraction with a unit suffix, or a plain number that is processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default: `"10ms"` | `"10ms"`, `"10"`| | batchingMaxMessages | N | batchingMaxMessages sets the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached, or batchingMaxSize (see below) has been reached, or the batch interval has elapsed. Default: `"1000"` | `"1000"`| | batchingMaxSize | N | batchingMaxSize sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached, or batchingMaxMessages (see above) has been reached, or the batch interval has elapsed. Default: `"128KB"` | `"131072"`| diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md index 1da2cb8b3..387920e7a 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md @@ -45,7 +45,9 @@ The above example uses secrets as plain strings. 
It is recommended to use a secr | redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and that you have created the ACL rule correctly. | `""`, `"default"` | consumerID | N | The consumer group ID. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}}) | useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The `redisHost` name must be specified in the form of `"server:port"`
  • TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` | -| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` +| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` | +| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | redeliverInterval | N | The interval between checking for pending messages to redeliver. Can either be a Go duration string (for example "ms", "s", "m") or a milliseconds number. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"` | processingTimeout | N | The amount of time that a message must be pending before attempting to redeliver it. Can either be a Go duration string (for example "ms", "s", "m") or a milliseconds number. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"` | queueDepth | N | The size of the message queue for processing. Defaults to `"100"`. | `"1000"` diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md index 7026dcc92..53e4c0e75 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md @@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` | | `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` | +### Authenticate using AWS IAM + +Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. +The user specified in the connection string must already exist in the DB and must be an AWS IAM-enabled user granted the `rds_iam` database role. +Authentication is based on the AWS authentication configuration file or on the provided AccessKey/SecretKey. +The AWS authentication token is dynamically rotated with AWS before its expiration time. + +| Field | Required | Details | Example | |--------|:--------:|---------|---------| | `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` | | `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`| +| `awsRegion` | N | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` | +| `awsAccessKey` | N | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` | +| `awsSecretKey` | N | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | +| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | +
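Putting the AWS IAM fields above together, a state store manifest might look like the following sketch (the host, user, and database names are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.postgresql
  version: v1
  metadata:
  - name: useAWSIAM
    value: "true"
  - name: connectionString
    # No password here: the component fetches and rotates the IAM auth token.
    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
  - name: awsRegion
    value: "us-east-1"
```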
### Other metadata options | Field | Required | Details | Example | diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md index bcda2558b..d4e21f17b 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md @@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` | | `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` | +### Authenticate using AWS IAM + +Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. +The user specified in the connection string must already exist in the DB and must be an AWS IAM-enabled user granted the `rds_iam` database role. +Authentication is based on the AWS authentication configuration file or on the provided AccessKey/SecretKey. +The AWS authentication token is dynamically rotated with AWS before its expiration time. + +| Field | Required | Details | Example | |--------|:--------:|---------|---------| | `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` | | `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`| +| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` | +| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` | +| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | +| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | + ### Other metadata options | Field | Required | Details | Example | diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md index ed6d4118e..9b672c6a6 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md @@ -32,6 +32,10 @@ spec: value: # Optional. Allowed: true, false. - name: enableTLS value: # Optional. Allowed: true, false. + - name: clientCert + value: # Optional + - name: clientKey + value: # Optional - name: maxRetries value: # Optional - name: maxRetryBackoff @@ -102,6 +106,8 @@ If you wish to use Redis as an actor store, append the following to the yaml. | redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and that you have created the ACL rule correctly. | `""`, `"default"` | useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The `redisHost` name must be specified in the form of `"server:port"`
  • TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` | | enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` +| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10` | maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000` | failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"` diff --git a/daprdocs/content/en/reference/resource-specs/component-schema.md b/daprdocs/content/en/reference/resource-specs/component-schema.md index 349ff4923..875744c28 100644 --- a/daprdocs/content/en/reference/resource-specs/component-schema.md +++ b/daprdocs/content/en/reference/resource-specs/component-schema.md @@ -8,27 +8,33 @@ description: "The basic spec for a Dapr component" Dapr defines and registers components using a [resource specification](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes. +Typically, components are restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace through context when applying to Kubernetes. + +{{% alert title="Note" color="primary" %}} +The exception to this rule is in self-hosted mode, where daprd ingests component resources when the namespace field is omitted. However, the security profile is moot, as daprd has access to the manifest anyway, unlike in Kubernetes. +{{% /alert %}}
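For example, a hypothetical component manifest pinned to the `production` namespace and scoped to two applications looks like this sketch (the component and app IDs are illustrative; the Format section below gives the full skeleton):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production  # restricts the component to this namespace
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
scopes:
- checkout          # only these app IDs can load the component (illustrative)
- order-processor
```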
+ +## Format

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
auth:
- secretstore: [SECRET-STORE-NAME]
+ secretstore: <SECRET-STORE-NAME>
metadata:
- name: [COMPONENT-NAME]
- namespace: [COMPONENT-NAMESPACE]
+ name: <COMPONENT-NAME>
+ namespace: <COMPONENT-NAMESPACE>
spec:
- type: [COMPONENT-TYPE]
+ type: <COMPONENT-TYPE>
  version: v1
- initTimeout: [TIMEOUT-DURATION]
- ignoreErrors: [BOOLEAN]
+ initTimeout: <TIMEOUT-DURATION>
+ ignoreErrors: <BOOLEAN>
  metadata:
- - name: [METADATA-NAME]
-   value: [METADATA-VALUE]
+ - name: <METADATA-NAME>
+   value: <METADATA-VALUE>
scopes:
- - [APPID]
- - [APPID]
+ - <APPID>
+ - <APPID>
```

## Spec fields diff --git a/daprdocs/content/en/reference/resource-specs/configuration-schema.md b/daprdocs/content/en/reference/resource-specs/configuration-schema.md index b52228c16..e5caac792 100644 --- a/daprdocs/content/en/reference/resource-specs/configuration-schema.md +++ b/daprdocs/content/en/reference/resource-specs/configuration-schema.md @@ -36,6 +36,7 @@ spec: labels: - name: regex: {} + recordErrorCodes: latencyDistributionBuckets: - - diff --git a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md index a85a25315..5e2b8f45d 100644 --- a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md +++ b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md @@ -10,6 +10,10 @@ aliases: The `HTTPEndpoint` is a Dapr resource that is used to enable the invocation of non-Dapr endpoints from a Dapr application. +{{% alert title="Note" color="primary" %}} +Any HTTPEndpoint resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. +{{% /alert %}} + ## Format ```yaml diff --git a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md index 32888adc7..d307b70b4 100644 --- a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md +++ b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md @@ -8,6 +8,10 @@ description: "The basic spec for a Dapr resiliency resource" The `Resiliency` Dapr resource allows you to define and apply fault tolerance resiliency policies. Resiliency specs are applied when the Dapr sidecar starts. +{{% alert title="Note" color="primary" %}} +Any resiliency resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. +{{% /alert %}} + ## Format ```yml @@ -28,6 +32,9 @@ spec: duration: maxInterval: maxRetries: + matching: + httpStatusCodes: + gRPCStatusCodes: circuitBreakers: circuitBreakerName: # Replace with any unique name maxRequests:
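The new `matching` fields under `retries` above let a retry policy apply only to selected status codes. A minimal sketch of such a policy — the policy name and code lists are illustrative, and the exact matching semantics are described in the resiliency policies docs:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    retries:
      retryOnServerErrors:
        policy: constant
        duration: 5s
        maxRetries: 3
        matching:
          # Retry only when the response carries one of these codes (illustrative values)
          httpStatusCodes: "429,500-599"
          gRPCStatusCodes: "2,8,14"
```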
diff --git a/daprdocs/content/en/reference/resource-specs/subscription-schema.md b/daprdocs/content/en/reference/resource-specs/subscription-schema.md index bd5fc8263..c047fd40f 100644 --- a/daprdocs/content/en/reference/resource-specs/subscription-schema.md +++ b/daprdocs/content/en/reference/resource-specs/subscription-schema.md @@ -6,7 +6,13 @@ weight: 2000 description: "The basic spec for a Dapr subscription" --- -The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. This guide demonstrates two subscription API versions: +The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. + +{{% alert title="Note" color="primary" %}} +Any subscription can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), with access restricted through scopes to a particular set of applications. +{{% /alert %}} + +This guide demonstrates two subscription API versions: + - `v2alpha` (default spec) - `v1alpha1` (deprecated) @@ -23,15 +29,15 @@ metadata: spec: topic: # Required routes: # Required - - rules: - - match: - path: + rules: + - match: + path: pubsubname: # Required deadLetterTopic: # Optional bulkSubscribe: # Optional - - enabled: - - maxMessagesCount: - - maxAwaitDurationMs: + enabled: + maxMessagesCount: + maxAwaitDurationMs: scopes: - ```
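Filling in the v2alpha1 spec above with concrete values, a subscription that routes `orders` events to an `/orders` endpoint and is scoped to one application might look like this sketch (all names are illustrative):

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: orders
  routes:
    default: /orders   # events are delivered to this endpoint on the subscribing app
  pubsubname: pubsub
  deadLetterTopic: poisonMessages
scopes:
- orderprocessing
```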

diff --git a/daprdocs/data/components/bindings/generic.yaml b/daprdocs/data/components/bindings/generic.yaml index 4f63295bd..250eb4d88 100644 --- a/daprdocs/data/components/bindings/generic.yaml +++ b/daprdocs/data/components/bindings/generic.yaml @@ -134,6 +134,14 @@ features: input: true output: false +- component: SFTP + link: sftp + state: Alpha + version: v1 + since: "1.15" + features: + input: false + output: true - component: SMTP link: smtp state: Alpha diff --git a/daprdocs/data/components/conversation/aws.yaml b/daprdocs/data/components/conversation/aws.yaml new file mode 100644 index 000000000..6f5b33d20 --- /dev/null +++ b/daprdocs/data/components/conversation/aws.yaml @@ -0,0 +1,5 @@ +- component: AWS Bedrock + link: aws-bedrock + state: Alpha + version: v1 + since: "1.15" \ No newline at end of file diff --git a/daprdocs/data/components/conversation/generic.yaml b/daprdocs/data/components/conversation/generic.yaml new file mode 100644 index 000000000..26cf8431c --- /dev/null +++ b/daprdocs/data/components/conversation/generic.yaml @@ -0,0 +1,20 @@ +- component: Anthropic + link: anthropic + state: Alpha + version: v1 + since: "1.15" +- component: Huggingface + link: hugging-face + state: Alpha + version: v1 + since: "1.15" +- component: Mistral + link: mistral + state: Alpha + version: v1 + since: "1.15" +- component: OpenAI + link: openai + state: Alpha + version: v1 + since: "1.15" diff --git a/daprdocs/data/components/secret_stores/aws.yaml b/daprdocs/data/components/secret_stores/aws.yaml index f1e6d77ec..522b7f64e 100644 --- a/daprdocs/data/components/secret_stores/aws.yaml +++ b/daprdocs/data/components/secret_stores/aws.yaml @@ -1,8 +1,8 @@ - component: AWS Secrets Manager link: aws-secret-manager - state: Alpha + state: Beta version: v1 - since: "1.0" + since: "1.15" - component: AWS SSM Parameter Store link: aws-parameter-store state: Alpha diff --git a/daprdocs/layouts/partials/components/conversation.html b/daprdocs/layouts/partials/components/conversation.html new file mode 100644 index 000000000..266217073 --- /dev/null +++ b/daprdocs/layouts/partials/components/conversation.html @@ -0,0 +1,28 @@ +{{- $groups := dict
+  "Generic" $.Site.Data.components.conversation.generic
+  "Amazon Web Services (AWS)" $.Site.Data.components.conversation.aws
+}}
+
+{{ range $group, $components := $groups }}
+<h3>{{ $group }}</h3>
+<table width="100%">
+  <tr>
+    <th>Component</th>
+    <th>Status</th>
+    <th>Component version</th>
+    <th>Since runtime version</th>
+  </tr>
+  {{ range sort $components "component" }}
+  <tr>
+    <td><a href="/reference/components-reference/supported-conversation/{{ .link }}/">{{ .component }}</a></td>
+    <td>{{ .state }}</td>
+    <td>{{ .version }}</td>
+    <td>{{ .since }}</td>
+  </tr>
+  {{ end }}
+</table>
+ {{ end }} + + {{ partial "components/componenttoc.html" . }} \ No newline at end of file diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html index c64a87827..79be56261 100644 --- a/daprdocs/layouts/shortcodes/dapr-latest-version.html +++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html @@ -1 +1 @@ -{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.0{{ else if .Get "cli" }}1.14.0{{ else }}1.14.0{{ end -}} +{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.4{{ else if .Get "cli" }}1.14.1{{ else }}1.14.1{{ end -}} diff --git a/daprdocs/static/images/actors-quickstart/actors-quickstart.png b/daprdocs/static/images/actors-quickstart/actors-quickstart.png index 1ed195714..3769e1171 100644 Binary files a/daprdocs/static/images/actors-quickstart/actors-quickstart.png and b/daprdocs/static/images/actors-quickstart/actors-quickstart.png differ diff --git a/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png b/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png index c10bbd38e..afc3e21cc 100644 Binary files a/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png and b/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png differ diff --git a/daprdocs/static/images/building_blocks.png b/daprdocs/static/images/building_blocks.png index ec8f7bbff..6e5c51b69 100644 Binary files a/daprdocs/static/images/building_blocks.png and b/daprdocs/static/images/building_blocks.png differ diff --git a/daprdocs/static/images/buildingblocks-overview.png b/daprdocs/static/images/buildingblocks-overview.png index 9b05f41be..9570df612 100644 Binary files a/daprdocs/static/images/buildingblocks-overview.png and b/daprdocs/static/images/buildingblocks-overview.png differ diff --git a/daprdocs/static/images/concepts-components.png b/daprdocs/static/images/concepts-components.png index fd80064a6..c22c50f23 100644 Binary files a/daprdocs/static/images/concepts-components.png and b/daprdocs/static/images/concepts-components.png differ diff --git a/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png b/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png index 29dc6f44c..94310e1fe 100644 Binary files a/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png and b/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png differ diff --git a/daprdocs/static/images/crypto-quickstart.png b/daprdocs/static/images/crypto-quickstart.png index e6f7fe70f..0d315a7d2 100644 Binary files a/daprdocs/static/images/crypto-quickstart.png and b/daprdocs/static/images/crypto-quickstart.png differ diff --git a/daprdocs/static/images/observability-sidecar.png b/daprdocs/static/images/observability-sidecar.png index 3df2734b1..585aae22b 100644 Binary files a/daprdocs/static/images/observability-sidecar.png and b/daprdocs/static/images/observability-sidecar.png differ diff --git a/daprdocs/static/images/observability-tracing.png b/daprdocs/static/images/observability-tracing.png index b195845b3..1497dfe71 100644 Binary files a/daprdocs/static/images/observability-tracing.png and b/daprdocs/static/images/observability-tracing.png differ diff --git a/daprdocs/static/images/overview-kubernetes.png b/daprdocs/static/images/overview-kubernetes.png index ba307aa70..f2aedc745 100644 Binary files a/daprdocs/static/images/overview-kubernetes.png and b/daprdocs/static/images/overview-kubernetes.png differ diff 
--git a/daprdocs/static/images/overview-sidecar-apis.png b/daprdocs/static/images/overview-sidecar-apis.png index 417d381b2..2dc2f6830 100644 Binary files a/daprdocs/static/images/overview-sidecar-apis.png and b/daprdocs/static/images/overview-sidecar-apis.png differ diff --git a/daprdocs/static/images/overview-sidecar-model.png b/daprdocs/static/images/overview-sidecar-model.png index 40d7425d9..46185c356 100644 Binary files a/daprdocs/static/images/overview-sidecar-model.png and b/daprdocs/static/images/overview-sidecar-model.png differ diff --git a/daprdocs/static/images/overview-standalone.png b/daprdocs/static/images/overview-standalone.png index 179ee7460..e3070c6a7 100644 Binary files a/daprdocs/static/images/overview-standalone.png and b/daprdocs/static/images/overview-standalone.png differ diff --git a/daprdocs/static/images/overview-vms-hosting.png b/daprdocs/static/images/overview-vms-hosting.png index 05a6a0fee..cc05fe9e8 100644 Binary files a/daprdocs/static/images/overview-vms-hosting.png and b/daprdocs/static/images/overview-vms-hosting.png differ diff --git a/daprdocs/static/images/overview.png b/daprdocs/static/images/overview.png index 58124f0a1..f62c82085 100644 Binary files a/daprdocs/static/images/overview.png and b/daprdocs/static/images/overview.png differ diff --git a/daprdocs/static/images/prometheus-service-discovery.png b/daprdocs/static/images/prometheus-service-discovery.png new file mode 100644 index 000000000..34acfcadb Binary files /dev/null and b/daprdocs/static/images/prometheus-service-discovery.png differ diff --git a/daprdocs/static/images/prometheus-web-ui.png b/daprdocs/static/images/prometheus-web-ui.png new file mode 100644 index 000000000..f6b82e903 Binary files /dev/null and b/daprdocs/static/images/prometheus-web-ui.png differ diff --git a/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png b/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png index 21256c817..dca9a907a 100644 Binary files a/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png and b/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png differ diff --git a/daprdocs/static/images/scheduler/scheduler-architecture.png b/daprdocs/static/images/scheduler/scheduler-architecture.png index 5cf309bf4..1b87d1ffd 100644 Binary files a/daprdocs/static/images/scheduler/scheduler-architecture.png and b/daprdocs/static/images/scheduler/scheduler-architecture.png differ diff --git a/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png b/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png index 643af8ea2..f24c09b17 100644 Binary files a/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png and b/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png differ diff --git a/daprdocs/static/images/security-dapr-API-scoping.png b/daprdocs/static/images/security-dapr-API-scoping.png index 03528e38e..d74364cf1 100644 Binary files a/daprdocs/static/images/security-dapr-API-scoping.png and b/daprdocs/static/images/security-dapr-API-scoping.png differ diff --git a/daprdocs/static/images/security-end-to-end-communication.png b/daprdocs/static/images/security-end-to-end-communication.png index 6012f22c2..41e8f6469 100644 Binary files a/daprdocs/static/images/security-end-to-end-communication.png and b/daprdocs/static/images/security-end-to-end-communication.png differ diff --git a/daprdocs/static/images/security-mTLS-dapr-system-services.png 
b/daprdocs/static/images/security-mTLS-dapr-system-services.png index ae898d8e9..3762fff6f 100644 Binary files a/daprdocs/static/images/security-mTLS-dapr-system-services.png and b/daprdocs/static/images/security-mTLS-dapr-system-services.png differ diff --git a/daprdocs/static/images/security-mTLS-sentry-kubernetes.png b/daprdocs/static/images/security-mTLS-sentry-kubernetes.png index a7437c1d3..1a3163e9a 100644 Binary files a/daprdocs/static/images/security-mTLS-sentry-kubernetes.png and b/daprdocs/static/images/security-mTLS-sentry-kubernetes.png differ diff --git a/daprdocs/static/images/security-mTLS-sentry-selfhosted.png b/daprdocs/static/images/security-mTLS-sentry-selfhosted.png index 366ce86b9..55bf0c315 100644 Binary files a/daprdocs/static/images/security-mTLS-sentry-selfhosted.png and b/daprdocs/static/images/security-mTLS-sentry-selfhosted.png differ diff --git a/daprdocs/static/images/security-overview-capabilities-example.png b/daprdocs/static/images/security-overview-capabilities-example.png index 0dbde1fc3..386a1590b 100644 Binary files a/daprdocs/static/images/security-overview-capabilities-example.png and b/daprdocs/static/images/security-overview-capabilities-example.png differ diff --git a/daprdocs/static/images/service-invocation-overview.png b/daprdocs/static/images/service-invocation-overview.png index c5b2fe554..eadef1e61 100644 Binary files a/daprdocs/static/images/service-invocation-overview.png and b/daprdocs/static/images/service-invocation-overview.png differ diff --git a/daprdocs/static/images/state-management-quickstart.png b/daprdocs/static/images/state-management-quickstart.png index e6606ae97..8c07c8a52 100644 Binary files a/daprdocs/static/images/state-management-quickstart.png and b/daprdocs/static/images/state-management-quickstart.png differ diff --git a/daprdocs/static/images/workflow-quickstart-overview.png b/daprdocs/static/images/workflow-quickstart-overview.png index d616f2106..7a8ea3e22 100644 Binary files a/daprdocs/static/images/workflow-quickstart-overview.png and b/daprdocs/static/images/workflow-quickstart-overview.png differ diff --git a/daprdocs/static/presentations/Dapr-Diagrams-template.pptx.zip b/daprdocs/static/presentations/Dapr-Diagrams-template.pptx.zip new file mode 100644 index 000000000..3a871010f Binary files /dev/null and b/daprdocs/static/presentations/Dapr-Diagrams-template.pptx.zip differ diff --git a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip b/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip deleted file mode 100644 index 206d01600..000000000 Binary files a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip and /dev/null differ diff --git a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip index 1ccec7c23..985bf939f 100644 Binary files a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip and b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip differ diff --git a/sdkdocs/dotnet b/sdkdocs/dotnet index b8e276728..01b483347 160000 --- a/sdkdocs/dotnet +++ b/sdkdocs/dotnet @@ -1 +1 @@ -Subproject commit b8e276728935c66b0a335b5aa2ca4102c560dd3d +Subproject commit 01b4833474f869865cba916196376fb49a97911c diff --git a/sdkdocs/go b/sdkdocs/go index 7c03c7ce5..2ab3420ad 160000 --- a/sdkdocs/go +++ b/sdkdocs/go @@ -1 +1 @@ -Subproject commit 7c03c7ce58d100a559ac1881bc0c80d6dedc5ab9 +Subproject commit 2ab3420adc75049bfcf27cb2eeebdc08f2156474 diff --git a/sdkdocs/java b/sdkdocs/java index a98327e7d..380cda68f 160000 --- a/sdkdocs/java +++ b/sdkdocs/java 
@@ -1 +1 @@ -Subproject commit a98327e7d9a81611b0d7e91e59ea23ad48271948 +Subproject commit 380cda68f82456ecc52cd876e9567a7aaaf4e05f diff --git a/sdkdocs/js b/sdkdocs/js index 7350742b6..9adc54ded 160000 --- a/sdkdocs/js +++ b/sdkdocs/js @@ -1 +1 @@ -Subproject commit 7350742b6869cc166633d1f4d17d76fbdbb12921 +Subproject commit 9adc54dedd87846d513943a5ed9ebe0c1627a192 diff --git a/sdkdocs/python b/sdkdocs/python index 64a4f2f66..6e90e84b1 160000 --- a/sdkdocs/python +++ b/sdkdocs/python @@ -1 +1 @@ -Subproject commit 64a4f2f6658e9023e8ea080eefdb019645cae802 +Subproject commit 6e90e84b166ac7ea603b78894e9e1b92dc456014 diff --git a/sdkdocs/rust b/sdkdocs/rust index 4abf5aa65..4e2d31603 160000 --- a/sdkdocs/rust +++ b/sdkdocs/rust @@ -1 +1 @@ -Subproject commit 4abf5aa6504f7c0b0018d20f8dc038a486a67e3a +Subproject commit 4e2d3160324f9c5968415acf206c039837df9a63