Merge branch 'dapr:v1.13' into v1.13

Bilgin Ibryam 2024-07-17 17:30:50 +01:00 committed by GitHub
commit 2ddafc2080
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
104 changed files with 398 additions and 189 deletions

View File

@ -1,6 +1,7 @@
name: Azure Static Web App v1.13
on:
  workflow_dispatch:
  push:
    branches:
      - v1.13

View File

@ -1,5 +1,7 @@
# Dapr documentation
[![GitHub License](https://img.shields.io/github/license/dapr/docs?style=flat&label=License&logo=github)](https://github.com/dapr/docs/blob/v1.13/LICENSE) [![GitHub issue custom search in repo](https://img.shields.io/github/issues-search/dapr/docs?query=type%3Aissue%20is%3Aopen%20label%3A%22good%20first%20issue%22&label=Good%20first%20issues&style=flat&logo=github)](https://github.com/dapr/docs/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) [![Discord](https://img.shields.io/discord/778680217417809931?label=Discord&style=flat&logo=discord)](http://bit.ly/dapr-discord) [![YouTube Channel Views](https://img.shields.io/youtube/channel/views/UCtpSQ9BLB_3EXdWAUQYwnRA?style=flat&label=YouTube%20views&logo=youtube)](https://youtube.com/@daprdev) [![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/daprdev?logo=x&style=flat)](https://twitter.com/daprdev)
If you are looking to explore the Dapr documentation, please go to the documentation website:
[**https://docs.dapr.io**](https://docs.dapr.io)

View File

@ -21,12 +21,12 @@ Dapr provides the following building blocks:
| Building Block | Endpoint | Description |
|----------------|----------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of HTTP or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-beta1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | `/v1.0/secrets` | Dapr provides a secrets building block API and integrates with secret stores such as public cloud stores, local stores and Kubernetes to store the secrets. Services can call the secrets API to retrieve secrets, for example to get a connection string to a database.
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | `/v1.0/configuration` | The Configuration API enables you to retrieve and subscribe to application configuration items for supported configuration stores. This enables an application to retrieve specific configuration information, for example, at start up or when configuration changes are made in the store.
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and provide consistency guarantees.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-beta1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | `/v1.0-alpha1/crypto` | The Cryptography API enables you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your application.

View File

@ -64,13 +64,6 @@ The component is unavailable for a short period of time during reload and reinit
The following are the component types provided by Dapr:
### State stores
State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.
- [List of state stores]({{< ref supported-state-stores >}})
- [State store implementations](https://github.com/dapr/components-contrib/tree/master/state)
### Name resolution
Name resolution components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block to integrate with the hosting environment and provide service-to-service discovery. For example, the Kubernetes name resolution component integrates with the Kubernetes DNS service, self-hosted uses mDNS and clusters of VMs can use the Consul name resolution component.
@ -85,6 +78,20 @@ Pub/sub broker components are message brokers that can pass messages to/from ser
- [List of pub/sub brokers]({{< ref supported-pubsub >}})
- [Pub/sub broker implementations](https://github.com/dapr/components-contrib/tree/master/pubsub)
### Workflows
A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.
<!--- [List of supported workflows]()
- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->
### State stores
State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.
- [List of state stores]({{< ref supported-state-stores >}})
- [State store implementations](https://github.com/dapr/components-contrib/tree/master/state)
### Bindings
External resources can connect to Dapr in order to trigger a method on an application or be called from an application as part of the [bindings]({{< ref bindings-overview.md >}}) building block.
@ -113,13 +120,6 @@ Lock components are used as a distributed lock to provide mutually exclusive acc
- [List of supported locks]({{< ref supported-locks >}})
- [Lock implementations](https://github.com/dapr/components-contrib/tree/master/lock)
### Workflows
A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.
<!--- [List of supported workflows]()
- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->
### Cryptography
[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform cryptographic operations, including encrypting and decrypting messages, without exposing keys to your application.

View File

@ -45,14 +45,14 @@ Each of these building block APIs is independent, meaning that you can use any n
| Building Block | Description |
|----------------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
| [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure Cosmos DB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee, message TTL, consumer groups, and other advanced features.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
| [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure Cosmos DB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
| [**Resource bindings**]({{< ref "bindings-overview.md" >}}) | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| [**Actors**]({{< ref "actors-overview.md" >}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | The secrets management API integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | The configuration API enables you to retrieve and subscribe to application configuration items from configuration stores.
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | The distributed lock API enables your application to acquire a lock for any resource that gives it exclusive access until either the lock is released by the application, or a lease timeout occurs.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | The cryptography API provides an abstraction layer on top of security infrastructure such as key vaults. It contains APIs that allow you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your applications.
### Cross-cutting APIs
@ -138,7 +138,7 @@ Dapr can be used from any developer framework. Here are some that have been inte
| [.NET]({{< ref dotnet >}}) | [ASP.NET Core](https://github.com/dapr/dotnet-sdk/tree/master/examples/AspNetCore) | Brings stateful routing controllers that respond to pub/sub events from other services. Can also take advantage of [ASP.NET Core gRPC Services](https://docs.microsoft.com/aspnet/core/grpc/).
| [Java]({{< ref java >}}) | [Spring Boot](https://spring.io/) | Build Spring Boot applications with Dapr APIs
| [Python]({{< ref python >}}) | [Flask]({{< ref python-flask.md >}}) | Build Flask applications with Dapr APIs
| [JavaScript](https://github.com/dapr/js-sdk) | [Express](http://expressjs.com/) | Build Express applications with Dapr APIs
| [PHP]({{< ref php >}}) | | You can serve with Apache, Nginx, or Caddyserver.
#### Integrations and extensions

View File

@ -7,7 +7,7 @@ description: >
How Dapr compares to and works with service meshes
---
Dapr uses a sidecar architecture, running as a separate process alongside the application and includes features such as service invocation, network security, and [distributed tracing](https://middleware.io/blog/what-is-distributed-tracing/). This often raises the question: how does Dapr compare to service mesh solutions such as [Linkerd](https://linkerd.io/), [Istio](https://istio.io/) and [Open Service Mesh](https://openservicemesh.io/) among others?
## How Dapr and service meshes compare
While Dapr and service meshes do offer some overlapping capabilities, **Dapr is not a service mesh**, where a service mesh is defined as a *networking* service mesh. Unlike a service mesh which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build applications as microservices. Dapr is developer-centric, versus service meshes which are infrastructure-centric.

View File

@ -13,7 +13,7 @@ We welcome community members giving presentations on Dapr and spreading the word
{{% alert color="primary" %}}
If you're using the PowerPoint template with macOS, please install the Space Grotesk font to ensure the text is rendered properly:
```sh
brew install --cask font-space-grotesk
```
{{% /alert %}}

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Actors" title: "Actors"
linkTitle: "Actors" linkTitle: "Actors"
weight: 50 weight: 60
description: Encapsulate code and data in reusable actor objects as a common microservices design pattern description: Encapsulate code and data in reusable actor objects as a common microservices design pattern
--- ---

View File

@ -52,6 +52,12 @@ You would use Dapr Workflow when you need to define and orchestrate complex work
[Learn more about Dapr Workflow and how to use workflows in your application.]({{< ref workflow-overview.md >}})
## Actor types and actor IDs
Actors are uniquely defined as an instance of an actor type, similar to how an object is an instance of a class. For example, you might have an actor type that implements the functionality of a calculator. There could be many actors of that type distributed across various nodes in a cluster.
Each actor is uniquely identified by an actor ID. An actor ID can be _any_ string value you choose. If you do not provide an actor ID, Dapr generates a random string for you as an ID.
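To make the type/ID pair concrete, the sketch below calls a method on one actor instance through the sidecar's actors HTTP API. It is an illustration only: the `Calculator` actor type, the `42` actor ID, and the `add` method are placeholders, and port `3500` assumes the default Dapr HTTP port. The same actor ID always routes to the same actor instance, wherever Dapr has placed it in the cluster.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ActorInvokeExample {
    public static void main(String[] args) throws Exception {
        // POST /v1.0/actors/<actorType>/<actorId>/method/<method>
        // "Calculator", "42", and "add" are hypothetical names for this sketch.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/actors/Calculator/42/method/add"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("[1, 2]"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode() + ", body: " + response.body());
    }
}
```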
## Features
### Actor lifetime

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Bindings" title: "Bindings"
linkTitle: "Bindings" linkTitle: "Bindings"
weight: 40 weight: 50
description: Interface with or be triggered from external systems description: Interface with or be triggered from external systems
--- ---

View File

@ -106,7 +106,7 @@ spec:
The code examples below leverage Dapr SDKs to invoke the output bindings endpoint on a running Dapr instance.
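If you prefer to see the shape of the underlying call before picking an SDK, here is a minimal raw-HTTP sketch of invoking an output binding. The binding name `myqueue` is a placeholder for whatever you declared in your component file, and port `3500` assumes the default sidecar HTTP port.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OutputBindingExample {
    public static void main(String[] args) throws Exception {
        // POST /v1.0/bindings/<binding-name> with an operation and a data payload.
        String body = "{\"operation\": \"create\", \"data\": {\"orderId\": \"100\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/bindings/myqueue")) // "myqueue" is a placeholder
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Binding invoked, status: " + response.statusCode());
    }
}
```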
{{< tabs Dotnet Java Python Go JavaScript>}} {{< tabs ".NET" Java Python Go JavaScript>}}
{{% codetab %}} {{% codetab %}}

View File

@ -115,7 +115,7 @@ Configure your application to receive incoming events. If you're using HTTP, you
Below are code examples that leverage Dapr SDKs to demonstrate an output binding.
{{< tabs Dotnet Java Python Go JavaScript>}} {{< tabs ".NET" Java Python Go JavaScript>}}
{{% codetab %}} {{% codetab %}}

View File

@ -71,7 +71,7 @@ spec:
The following example shows how to get a saved configuration item using the Dapr Configuration API.
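As a reference point alongside the SDK tabs, the sketch below reads a single item over the raw configuration HTTP API. It assumes the `configstore` component and `orderId1` key used elsewhere on this page, and a sidecar HTTP port of `3601` (adjust to your `--dapr-http-port`).
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetConfigurationExample {
    public static void main(String[] args) throws Exception {
        // GET /v1.0/configuration/<store-name>?key=<key>
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3601/v1.0/configuration/configstore?key=orderId1"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON map of key -> {value, version, metadata}
    }
}
```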
{{< tabs ".NET" Java Python Go Javascript "HTTP API (BASH)" "HTTP API (Powershell)">}} {{< tabs ".NET" Java Python Go JavaScript "HTTP API (BASH)" "HTTP API (Powershell)">}}
{{% codetab %}} {{% codetab %}}
@ -252,7 +252,7 @@ Invoke-RestMethod -Uri 'http://localhost:3601/v1.0/configuration/configstore?key
Below are code examples that leverage SDKs to subscribe to keys `[orderId1, orderId2]` using the `configstore` store component.
{{< tabs ".NET" "ASP.NET Core" Java Python Go Javascript>}} {{< tabs ".NET" "ASP.NET Core" Java Python Go JavaScript>}}
{{% codetab %}} {{% codetab %}}
@ -521,7 +521,7 @@ After you've subscribed to watch configuration items, you will receive updates f
Following are code examples showing how you can unsubscribe from configuration updates using the unsubscribe API.
{{< tabs ".NET" Java Python Go Javascript "HTTP API (BASH)" "HTTP API (Powershell)">}} {{< tabs ".NET" Java Python Go JavaScript "HTTP API (BASH)" "HTTP API (Powershell)">}}
{{% codetab %}} {{% codetab %}}
```csharp ```csharp

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Cryptography" title: "Cryptography"
linkTitle: "Cryptography" linkTitle: "Cryptography"
weight: 110 weight: 100
description: "Perform cryptographic operations without exposing keys to your application" description: "Perform cryptographic operations without exposing keys to your application"
--- ---

View File

@ -41,7 +41,7 @@ spec:
### Acquire lock
{{< tabs HTTP Dotnet Go >}} {{< tabs HTTP ".NET" Go >}}
{{% codetab %}} {{% codetab %}}
@ -122,7 +122,7 @@ func main() {
### Unlock existing lock
{{< tabs HTTP Dotnet Go >}} {{< tabs HTTP ".NET" Go >}}
{{% codetab %}} {{% codetab %}}

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Publish & subscribe messaging" title: "Publish & subscribe messaging"
linkTitle: "Publish & subscribe" linkTitle: "Publish & subscribe"
weight: 30 weight: 20
description: Secure, scalable messaging between services description: Secure, scalable messaging between services
--- ---

View File

@ -65,7 +65,7 @@ scopes:
You can override this file with another [pubsub component]({{< ref setup-pubsub >}}) by creating a components directory (in this example, `myComponents`) containing the file and using the flag `--resources-path` with the `dapr run` CLI command.
{{< tabs Dotnet Java Python Go Javascript >}} {{< tabs ".NET" Java Python Go JavaScript >}}
{{% codetab %}} {{% codetab %}}
@ -186,7 +186,7 @@ Place `subscription.yaml` in the same directory as your `pubsub.yaml` component.
Below are code examples that leverage Dapr SDKs to subscribe to the topic you defined in `subscription.yaml`.
{{< tabs Dotnet Java Python Go JavaScript>}} {{< tabs ".NET" Java Python Go JavaScript>}}
{{% codetab %}} {{% codetab %}}
@ -422,7 +422,7 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"orderId"
Below are code examples that leverage Dapr SDKs to publish to a topic.
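For comparison with the SDK samples, publishing is a single POST to the sidecar. In this sketch the pub/sub component name `order-pub-sub`, the topic `orders`, and port `3500` are placeholders; substitute the names from your own component file.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PublishExample {
    public static void main(String[] args) throws Exception {
        // POST /v1.0/publish/<pubsub-name>/<topic>
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/publish/order-pub-sub/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"orderId\": \"100\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Publish status: " + response.statusCode()); // 204 on success
    }
}
```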
{{< tabs Dotnet Java Python Go Javascript>}} {{< tabs ".NET" Java Python Go JavaScript>}}
{{% codetab %}} {{% codetab %}}

View File

@ -40,7 +40,7 @@ scopes:
In the programmatic approach, the `routes` structure is returned instead of `route`. The JSON structure matches the declarative YAML:
{{< tabs Python Node "C#" Go PHP>}} {{< tabs Python JavaScript ".NET" Go PHP>}}
{{% codetab %}} {{% codetab %}}
```python

View File

@ -22,7 +22,7 @@ The bulk publish operation also does not guarantee any ordering of messages.
### Example
{{< tabs Java Javascript Dotnet Python Go "HTTP API (Bash)" "HTTP API (PowerShell)" >}} {{< tabs Java JavaScript ".NET" Python Go "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
{{% codetab %}} {{% codetab %}}

View File

@ -69,17 +69,17 @@ The table below shows which applications are allowed to publish into the topics:
| | topic1 | topic2 | topic3 |
|------|--------|--------|--------|
| app1 | X | | | | app1 | | | |
| app2 | | X | X | | app2 | | ✅ | ✅ |
| app3 | | | | | app3 | | | |
The table below shows which applications are allowed to subscribe to the topics:
| | topic1 | topic2 | topic3 |
|------|--------|--------|--------|
| app1 | X | X | X | | app1 | ✅ | ✅ | ✅ |
| app2 | | | | | app2 | | | |
| app3 | X | | | | app3 | | | |
> Note: If an application is not listed (e.g. app1 in subscriptionScopes) it is allowed to subscribe to all topics. Because `allowedTopics` is not used and app1 does not have any subscription scopes, it can also use additional topics not listed above.
@ -143,17 +143,17 @@ The table below shows which application is allowed to publish into the topics:
| | A | B | C |
|------|---|---|---|
| app1 | X | | | | app1 | | | |
| app2 | X | X | | | app2 | ✅ | ✅ | |
| app3 | X | X | | | app3 | ✅ | ✅ | |
The table below shows which application is allowed to subscribe to the topics:
| | A | B | C |
|------|---|---|---|
| app1 | | | |
| app2 | X | | | | app2 | | | |
| app3 | X | X | | | app3 | ✅ | ✅ | |
## Example 4: Mark topics as protected
@ -187,17 +187,17 @@ The table below shows which application is allowed to publish into the topics:
| | A | B | C |
|------|---|---|---|
| app1 | X | X | | | app1 | ✅ | ✅ | |
| app2 | | X | | | app2 | | | |
| app3 | | | X | | app3 | | | |
The table below shows which application is allowed to subscribe to the topics:
| | A | B | C |
|------|---|---|---|
| app1 | X | X | | | app1 | ✅ | ✅ | |
| app2 | | X | | | app2 | | | |
| app3 | | | X | | app3 | | | |
## Demo

View File

@ -71,7 +71,7 @@ See a [full API reference]({{< ref secrets_api.md >}}).
Now that you've set up the local secret store, call Dapr to get the secrets from your application code. Below are code examples that leverage Dapr SDKs for retrieving a secret.
{{< tabs Dotnet Java Python Go Javascript>}} {{< tabs ".NET" Java Python Go JavaScript>}}
{{% codetab %}} {{% codetab %}}

View File

@ -24,7 +24,7 @@ Dapr allows you to assign a global, unique ID for your app. This ID encapsulates
{{% codetab %}}
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- python3 checkout/app.py
dapr run --app-id order-processor --app-port 8001 --app-protocol http --dapr-http-port 3501 -- python3 order-processor/app.py
```
@ -32,7 +32,7 @@ dapr run --app-id order-processor --app-port 8001 --app-protocol http --dapr-ht
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting `--app-protocol https`:
```bash
dapr run --app-id checkout --app-protocol https --dapr-http-port 3500 -- python3 checkout/app.py
dapr run --app-id order-processor --app-port 8001 --app-protocol https --dapr-http-port 3501 -- python3 order-processor/app.py
```
@ -42,7 +42,7 @@ dapr run --app-id order-processor --app-port 8001 --app-protocol https --dapr-ht
{{% codetab %}}
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- npm start
dapr run --app-id order-processor --app-port 5001 --app-protocol http --dapr-http-port 3501 -- npm start
```
@ -50,7 +50,7 @@ dapr run --app-id order-processor --app-port 5001 --app-protocol http --dapr-ht
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting `--app-protocol https`:
```bash
dapr run --app-id checkout --dapr-http-port 3500 --app-protocol https -- npm start
dapr run --app-id order-processor --app-port 5001 --dapr-http-port 3501 --app-protocol https -- npm start
```
@ -60,7 +60,7 @@ dapr run --app-id order-processor --app-port 5001 --dapr-http-port 3501 --app-pr
{{% codetab %}}
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 -- dotnet run
dapr run --app-id order-processor --app-port 7001 --app-protocol http --dapr-http-port 3501 -- dotnet run
```
@ -68,7 +68,7 @@ dapr run --app-id order-processor --app-port 7001 --app-protocol http --dapr-htt
If your app uses TLS, you can tell Dapr to invoke your app over a TLS connection by setting `--app-protocol https`:
```bash
dapr run --app-id checkout --dapr-http-port 3500 --app-protocol https -- dotnet run
dapr run --app-id order-processor --app-port 7001 --dapr-http-port 3501 --app-protocol https -- dotnet run
```
@ -247,7 +247,9 @@ namespace EventService
var content = new StringContent(orderJson, Encoding.UTF8, "application/json");
var httpClient = DaprClient.CreateInvokeHttpClient();
await httpClient.PostAsJsonAsync($"http://order-processor/orders", content); var response = await httpClient.PostAsJsonAsync("http://order-processor/orders", content);
var result = await response.Content.ReadAsStringAsync();
Console.WriteLine("Order requested: " + orderId); Console.WriteLine("Order requested: " + orderId);
Console.WriteLine("Result: " + result); Console.WriteLine("Result: " + result);
} }
@ -408,6 +410,14 @@ Using CLI:
dapr invoke --app-id checkout --method checkout/100
```
#### Including a query string in the URL
You can also append a query string or a fragment to the end of the URL, and Dapr will pass it through unchanged. This means that if you need to pass additional arguments in your service invocation that aren't part of the payload or the path, you can do so by appending a `?` to the end of the URL, followed by key/value pairs separated by `=` and delimited by `&`. For example:
```bash
curl 'http://dapr-app-id:checkout@localhost:3602/checkout/100?basket=1234&key=abc' -X POST
```
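The same call can be made from application code by sending the request to the Dapr HTTP port and identifying the target app with the `dapr-app-id` header instead of embedding it in the URL; the query string is forwarded untouched. This is a minimal sketch assuming the `checkout` app and port `3602` from the example above.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InvokeWithQueryString {
    public static void main(String[] args) throws Exception {
        // The dapr-app-id header tells the sidecar which app to route to;
        // the query string (?basket=1234&key=abc) is passed through unchanged.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3602/checkout/100?basket=1234&key=abc"))
                .header("dapr-app-id", "checkout")
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```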
### Namespaces
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID. For example, following the `<app>.<namespace>` format, use `checkout.production`.

View File

@ -70,7 +70,7 @@ There are two ways to invoke a non-Dapr endpoint when communicating either to Da
```
### Using appId when calling Dapr enabled applications
AppIDs are always used to call Dapr applications with the `appID` and `my-method`. Read the [How-To: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}}) guide for more information. For example:
```sh
localhost:3500/v1.0/invoke/<appID>/method/<my-method>

View File

@ -126,7 +126,7 @@ ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
All languages supported by gRPC allow for adding metadata. Here are a few examples:
{{< tabs Java Dotnet Python JavaScript Ruby "C++">}} {{< tabs Java ".NET" Python JavaScript Ruby "C++">}}
{{% codetab %}} {{% codetab %}}
```java ```java
@ -249,7 +249,7 @@ When using Dapr to proxy streaming RPC calls using gRPC, you must set an additio
For example:
{{< tabs Go Java Dotnet Python JavaScript Ruby "C++">}} {{< tabs Go Java ".NET" Python JavaScript Ruby "C++">}}
{{% codetab %}} {{% codetab %}}
```go ```go

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "State management" title: "State management"
linkTitle: "State management" linkTitle: "State management"
weight: 20 weight: 40
description: Create long running stateful services description: Create long running stateful services
--- ---

View File

@ -68,7 +68,7 @@ Set an `app-id`, as the state keys are prefixed with this value. If you don't se
The following example shows how to save and retrieve a single key/value pair using the Dapr state management API.
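Before diving into a specific SDK, it can help to see the two raw HTTP calls the SDKs wrap: one POST to save a key/value pair and one GET to read it back. The store name `statestore`, the key `order_1`, and port `3500` are placeholders for your own setup.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StateExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Save: POST /v1.0/state/<store-name> with an array of key/value entries.
        HttpRequest save = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/state/statestore"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "[{\"key\": \"order_1\", \"value\": {\"orderId\": \"100\"}}]"))
                .build();
        client.send(save, HttpResponse.BodyHandlers.ofString());

        // Retrieve: GET /v1.0/state/<store-name>/<key>
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3500/v1.0/state/statestore/order_1"))
                .GET()
                .build();
        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```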
{{< tabs Dotnet Java Python Go Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} {{< tabs ".NET" Java Python Go JavaScript "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}} {{% codetab %}}
@ -356,7 +356,7 @@ Restart your sidecar and try retrieving state again to observe that state persis
Below are code examples that leverage Dapr SDKs for deleting the state.
{{< tabs Dotnet Java Python Go Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} {{< tabs ".NET" Java Python Go JavaScript "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}} {{% codetab %}}
@ -537,7 +537,7 @@ Try getting state again. Note that no value is returned.
Below are code examples that leverage Dapr SDKs for saving and retrieving multiple states.
{{< tabs Dotnet Java Python Go Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} {{< tabs ".NET" Java Python Go JavaScript "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}} {{% codetab %}}
@ -788,7 +788,7 @@ State transactions require a state store that supports multi-item transactions.
Below are code examples that leverage Dapr SDKs for performing state transactions.
{{< tabs Dotnet Java Python Go Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}} {{< tabs ".NET" Java Python Go JavaScript "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}} {{% codetab %}}

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Workflow" title: "Workflow"
linkTitle: "Workflow" linkTitle: "Workflow"
weight: 100 weight: 30
description: "Orchestrate logic across various microservices" description: "Orchestrate logic across various microservices"
--- ---

View File

@ -48,7 +48,7 @@ This "replay" behavior continues until the workflow function completes or fails
Using this replay technique, a workflow is able to resume execution from any "await" point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state makes Dapr Workflows _durable_ and _fault tolerant_.
{{% alert title="Note" color="primary" %}}
The workflow replay behavior described here requires that workflow function code be _deterministic_. Deterministic workflow functions take the exact same actions when provided the exact same inputs. [Learn more about the limitations around deterministic workflow code.]({{< ref "workflow-features-concepts.md#workflow-determinism-and-code-restraints" >}})
{{% /alert %}}
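To illustrate the constraint, the sketch below contrasts a non-deterministic helper with a replay-safe one. The `WorkflowContext` interface here is a hypothetical stand-in for whatever context type your workflow SDK provides; the point is only that time and random values must come from the context (and be recorded in the history) rather than from the system clock or a fresh random generator.
```java
import java.time.Instant;
import java.util.UUID;

// Hypothetical stand-in for the SDK's workflow context type (illustration only).
interface WorkflowContext {
    Instant currentUtcDateTime(); // replayed from the workflow history
    UUID newGuid();               // replayed from the workflow history
}

class DeterminismExample {
    // Non-deterministic: produces a different value on every replay,
    // so the replayed execution diverges from the recorded history.
    String badOrderId(WorkflowContext ctx) {
        return "order-" + UUID.randomUUID() + "-" + Instant.now().toEpochMilli();
    }

    // Deterministic: the context returns the values recorded during the
    // original execution, so every replay takes the same path.
    String goodOrderId(WorkflowContext ctx) {
        return "order-" + ctx.newGuid() + "-" + ctx.currentUtcDateTime().toEpochMilli();
    }
}
```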
@ -75,9 +75,9 @@ You can use the following two techniques to write workflows that may need to sch
### Updating workflow code
Because workflows are long-running and durable, updating workflow code must be done with extreme care. As discussed in the [workflow determinism]({{< ref "#workflow-determinism-and-code-restraints" >}}) limitation section, workflow code must be deterministic. Updates to workflow code must preserve this determinism if there are any non-completed workflow instances in the system. Otherwise, updates to workflow code can result in runtime failures the next time those workflows execute.
[See known limitations]({{< ref "#limitations" >}})
## Workflow activities
@ -123,7 +123,7 @@ Retries are internally implemented using durable timers. This means that workflo
The actions performed by a retry policy are saved into a workflow's history. Care must be taken not to change the behavior of a retry policy after a workflow has already been executed. Otherwise, the workflow may behave unexpectedly when replayed. See the notes on [updating workflow code]({{< ref "#updating-workflow-code" >}}) for more information.
{{% /alert %}}
It's possible to use both workflow retry policies and Dapr Resiliency policies together. For example, if a workflow activity uses a Dapr client to invoke a service, the Dapr client uses the configured resiliency policy. See [Quickstart: Service-to-service resiliency]({{< ref "resiliency-serviceinvo-quickstart.md" >}}) for more information with an example. However, if the activity itself fails for any reason, including exhausting the retries on the resiliency policy, then the workflow's resiliency policy kicks in.
{{% alert title="Note" color="primary" %}}
Using workflow retry policies and resiliency policies together can result in unexpected behavior. For example, if a workflow activity exhausts its configured retry policy, the workflow engine will still retry the activity according to the workflow retry policy. This can result in the activity being retried more times than expected.

View File

@ -6,9 +6,6 @@ description: "Access Dapr capabilities from your Azure Functions runtime applica
weight: 3000 weight: 3000
--- ---
{{% alert title="Note" color="primary" %}}
The Dapr extension for Azure Functions is currently in preview.
{{% /alert %}}
Dapr integrates with the [Azure Functions runtime](https://learn.microsoft.com/azure/azure-functions/functions-overview) via an extension that lets a function seamlessly interact with Dapr.
- **Azure Functions** provides an event-driven programming model.

View File

@ -0,0 +1,21 @@
---
type: docs
title: "How to: Integrate using Testcontainers Dapr Module"
linkTitle: "Dapr Testcontainers"
weight: 3000
description: "Use the Dapr Testcontainer module from your Java application"
---
You can use the Testcontainers Dapr Module provided by Diagrid to set up Dapr locally for your Java applications. Simply add the following dependency to your Maven project:
```xml
<dependency>
<groupId>io.diagrid.dapr</groupId>
<artifactId>testcontainers-dapr</artifactId>
<version>0.10.x</version>
</dependency>
```
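A typical test then starts the Dapr sidecar as a container and points your client at it. The sketch below is only an outline: the `DaprContainer` class name, the `withAppName` builder method, and the `getHttpEndpoint()` accessor are assumptions based on the module's README, so check the [project repository](https://github.com/diagridio/testcontainers-dapr) for the exact API and image tag.
```java
import io.diagrid.dapr.DaprContainer; // class name assumed from the module's README
import org.junit.jupiter.api.Test;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class OrderServiceDaprTest {

    // Starts a daprd container for the duration of the test class.
    @Container
    static DaprContainer dapr = new DaprContainer("daprio/daprd:1.13.2") // image tag is an assumption
            .withAppName("order-service");

    @Test
    void sidecarIsReachable() {
        // Point your Dapr client (or raw HTTP calls) at the mapped endpoint.
        String daprHttpEndpoint = dapr.getHttpEndpoint();
        System.out.println("Dapr HTTP endpoint: " + daprHttpEndpoint);
    }
}
```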
[If you're using Spring Boot, you can also use the Spring Boot Starter.](https://github.com/diagridio/spring-boot-starter-dapr)
{{< button text="Use the Testcontainers Dapr Module" link="https://github.com/diagridio/testcontainers-dapr" >}}

View File

@ -1,7 +1,7 @@
--- ---
type: docs type: docs
title: "How to: Autoscale a Dapr app with KEDA" title: "How to: Autoscale a Dapr app with KEDA"
linkTitle: "How to: Autoscale with KEDA" linkTitle: "KEDA"
description: "How to configure your Dapr application to autoscale using KEDA" description: "How to configure your Dapr application to autoscale using KEDA"
weight: 3000 weight: 3000
--- ---

View File

@ -1,7 +1,7 @@
--- ---
type: docs type: docs
title: "How to: Use the gRPC interface in your Dapr application" title: "How to: Use the gRPC interface in your Dapr application"
linkTitle: "How to: gRPC interface" linkTitle: "gRPC interface"
weight: 6000 weight: 6000
description: "Use the Dapr gRPC API in your application" description: "Use the Dapr gRPC API in your application"
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
weight: 5000 weight: 5000
title: "How to: Use the Dapr CLI in a GitHub Actions workflow" title: "How to: Use the Dapr CLI in a GitHub Actions workflow"
linkTitle: "How to: GitHub Actions" linkTitle: "GitHub Actions"
description: "Add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments." description: "Add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments."
--- ---

View File

@ -0,0 +1,17 @@
---
type: docs
title: "How to: Integrate with Kratix"
linkTitle: "Kratix Marketplace"
weight: 8000
description: "Integrate with Kratix using a Dapr promise"
---
As part of the [Kratix Marketplace](https://docs.kratix.io/marketplace), Dapr can be used to build custom platforms tailored to your needs.
{{% alert title="Note" color="warning" %}}
The Dapr Helm chart generates static public and private key pairs that are published in the repository. This promise should only be used _locally_ for demo purposes. If you wish to use this promise for more than demo purposes, it's recommended that you manually update all the secrets in the promise with your own keys and credentials.
{{% /alert %}}
Get started by simply installing the Dapr Promise, which installs Dapr on all matching clusters.
{{< button text="Install the Dapr Promise" link="https://github.com/syntasso/kratix-marketplace/tree/main/dapr" >}}

View File

@ -0,0 +1,11 @@
---
type: docs
title: "How to: Use the Dapr Kubernetes Operator"
linkTitle: "Dapr Kubernetes Operator"
weight: 7000
description: "Use the Dapr Kubernetes Operator to manage the Dapr control plane"
---
You can use the Dapr Kubernetes Operator to manage the Dapr control plane. Use the operator to automate the tasks required to manage the lifecycle of the Dapr control plane in Kubernetes mode.
{{< button text="Install and use the Dapr Kubernetes Operator" link="https://github.com/dapr/kubernetes-operator" >}}

View File

@ -27,7 +27,7 @@ Select your [preferred language below]({{< ref "#sdk-languages" >}}) to learn mo
| [Java]({{< ref java >}}) | Stable | ✔ | Spring Boot <br /> Quarkus| ✔ | ✔ |
| [Go]({{< ref go >}}) | Stable | ✔ | ✔ | ✔ | ✔ |
| [PHP]({{< ref php >}}) | Stable | ✔ | ✔ | ✔ | |
| [JavaScript]({{< ref js >}}) | Stable| ✔ | | ✔ | ✔ |
| [C++](https://github.com/dapr/cpp-sdk) | In development | ✔ | | |
| [Rust](https://github.com/dapr/rust-sdk) | In development | ✔ | | ✔ | |

View File

@ -22,13 +22,13 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| Quickstarts | Description |
| ----------- | ----------- |
| [Publish and Subscribe]({{< ref pubsub-quickstart.md >}}) | Asynchronous communication between two services using messaging. |
| [Service Invocation]({{< ref serviceinvocation-quickstart.md >}}) | Synchronous communication between two services using HTTP or gRPC. |
| [Publish and Subscribe]({{< ref pubsub-quickstart.md >}}) | Asynchronous communication between two services using messaging. |
| [Workflow]({{< ref workflow-quickstart.md >}}) | Orchestrate business workflow activities in long running, fault-tolerant, stateful applications. |
| [State Management]({{< ref statemanagement-quickstart.md >}}) | Store a service's data as key/value pairs in supported state stores. |
| [Bindings]({{< ref bindings-quickstart.md >}}) | Work with external systems using input bindings to respond to events and output bindings to call operations. |
| [Actors]({{< ref actors-quickstart.md >}}) | Run a microservice and a simple console client to demonstrate stateful object patterns in Dapr Actors. |
| [Secrets Management]({{< ref secrets-quickstart.md >}}) | Securely fetch secrets. |
| [Configuration]({{< ref configuration-quickstart.md >}}) | Get configuration items and subscribe for configuration updates. |
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |
| [Workflow]({{< ref workflow-quickstart.md >}}) | Orchestrate business workflow activities in long running, fault-tolerant, stateful applications. |
| [Cryptography]({{< ref cryptography-quickstart.md >}}) | Encrypt and decrypt data using Dapr's cryptographic APIs. |

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: Actors" title: "Quickstart: Actors"
linkTitle: "Actors" linkTitle: "Actors"
weight: 75 weight: 76
description: "Get started with Dapr's Actors building block" description: "Get started with Dapr's Actors building block"
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: Input & Output Bindings" title: "Quickstart: Input & Output Bindings"
linkTitle: "Bindings" linkTitle: "Bindings"
weight: 74 weight: 75
description: "Get started with Dapr's Binding building block" description: "Get started with Dapr's Binding building block"
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: Configuration" title: "Quickstart: Configuration"
linkTitle: Configuration linkTitle: Configuration
weight: 77 weight: 78
description: Get started with Dapr's Configuration building block description: Get started with Dapr's Configuration building block
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: Publish and Subscribe" title: "Quickstart: Publish and Subscribe"
linkTitle: "Publish and Subscribe" linkTitle: "Publish and Subscribe"
weight: 73 weight: 72
description: "Get started with Dapr's Publish and Subscribe building block" description: "Get started with Dapr's Publish and Subscribe building block"
--- ---
@ -32,7 +32,7 @@ Select your preferred language-specific Dapr SDK before proceeding with the Quic
For this example, you will need: For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started). - [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/). - [Python 3.8+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS --> <!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop) - [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE --> <!-- END_IGNORE -->

View File

@ -55,7 +55,7 @@ pip3 install -r requirements.txt
Run the `order-processor` service alongside a Dapr sidecar. Run the `order-processor` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-port 8001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- python3 app.py dapr run --app-port 8001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- python3 app.py
``` ```
### Step 3: Run the `checkout` service application ### Step 3: Run the `checkout` service application
@ -75,7 +75,7 @@ pip3 install -r requirements.txt
Run the `checkout` service alongside a Dapr sidecar. Run the `checkout` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- python3 app.py dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- python3 app.py
``` ```
The Dapr sidecar then loads the resiliency spec located in the resources directory: The Dapr sidecar then loads the resiliency spec located in the resources directory:
@ -262,7 +262,7 @@ npm install
Run the `order-processor` service alongside a Dapr sidecar. Run the `order-processor` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start
``` ```
### Step 3: Run the `checkout` service application ### Step 3: Run the `checkout` service application
@ -283,7 +283,7 @@ npm install
Run the `checkout` service alongside a Dapr sidecar. Run the `checkout` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- npm start dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- npm start
``` ```
The Dapr sidecar then loads the resiliency spec located in the resources directory: The Dapr sidecar then loads the resiliency spec located in the resources directory:
@ -426,7 +426,7 @@ Once you restart the `order-processor` service, the application will recover sea
In the `order-processor` service terminal, restart the application: In the `order-processor` service terminal, restart the application:
```bash ```bash
dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start
``` ```
`checkout` service output: `checkout` service output:
@ -494,7 +494,7 @@ dotnet build
Run the `order-processor` service alongside a Dapr sidecar. Run the `order-processor` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-port 7001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- dotnet run dapr run --app-port 7001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- dotnet run
``` ```
### Step 3: Run the `checkout` service application ### Step 3: Run the `checkout` service application
@ -516,7 +516,7 @@ dotnet build
Run the `checkout` service alongside a Dapr sidecar. Run the `checkout` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- dotnet run dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- dotnet run
``` ```
The Dapr sidecar then loads the resiliency spec located in the resources directory: The Dapr sidecar then loads the resiliency spec located in the resources directory:
@ -727,7 +727,7 @@ mvn clean install
Run the `order-processor` service alongside a Dapr sidecar. Run the `order-processor` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
``` ```
### Step 3: Run the `checkout` service application ### Step 3: Run the `checkout` service application
@ -748,7 +748,7 @@ mvn clean install
Run the `checkout` service alongside a Dapr sidecar. Run the `checkout` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
``` ```
The Dapr sidecar then loads the resiliency spec located in the resources directory: The Dapr sidecar then loads the resiliency spec located in the resources directory:
@ -891,7 +891,7 @@ Once you restart the `order-processor` service, the application will recover sea
In the `order-processor` service terminal, restart the application: In the `order-processor` service terminal, restart the application:
```bash ```bash
dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
``` ```
`checkout` service output: `checkout` service output:
@ -957,7 +957,7 @@ go build .
Run the `order-processor` service alongside a Dapr sidecar. Run the `order-processor` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run . dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run .
``` ```
### Step 3: Run the `checkout` service application ### Step 3: Run the `checkout` service application
@ -978,7 +978,7 @@ go build .
Run the `checkout` service alongside a Dapr sidecar. Run the `checkout` service alongside a Dapr sidecar.
```bash ```bash
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- go run . dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- go run .
``` ```
The Dapr sidecar then loads the resiliency spec located in the resources directory: The Dapr sidecar then loads the resiliency spec located in the resources directory:
@ -1121,7 +1121,7 @@ Once you restart the `order-processor` service, the application will recover sea
In the `order-processor` service terminal, restart the application: In the `order-processor` service terminal, restart the application:
```bash ```bash
dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run . dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run .
``` ```
`checkout` service output: `checkout` service output:

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: Secrets Management" title: "Quickstart: Secrets Management"
linkTitle: "Secrets Management" linkTitle: "Secrets Management"
weight: 76 weight: 77
description: "Get started with Dapr's Secrets Management building block" description: "Get started with Dapr's Secrets Management building block"
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: State Management" title: "Quickstart: State Management"
linkTitle: "State Management" linkTitle: "State Management"
weight: 72 weight: 74
description: "Get started with Dapr's State Management building block" description: "Get started with Dapr's State Management building block"
--- ---
@ -169,14 +169,6 @@ Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quic
git clone https://github.com/dapr/quickstarts.git git clone https://github.com/dapr/quickstarts.git
``` ```
Install the dependencies for the `order-processor` app:
```bash
cd ./order-processor
npm install
cd ..
```
### Step 2: Manipulate service state ### Step 2: Manipulate service state
In a terminal window, navigate to the `order-processor` directory. In a terminal window, navigate to the `order-processor` directory.

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Quickstart: Workflow" title: "Quickstart: Workflow"
linkTitle: Workflow linkTitle: Workflow
weight: 78 weight: 73
description: Get started with the Dapr Workflow building block description: Get started with the Dapr Workflow building block
--- ---

View File

@ -75,15 +75,14 @@ spec:
type: bindings.azure.servicebusqueues type: bindings.azure.servicebusqueues
version: v1 version: v1
metadata: metadata:
-name: connectionString - name: connectionString
secretKeyRef: secretKeyRef:
name: asbNsConnString name: asbNsConnString
key: asbNsConnString key: asbNsConnString
-name: queueName - name: queueName
value: servicec-inputq value: servicec-inputq
auth: auth:
secretStore: <SECRET_STORE_NAME> secretStore: <SECRET_STORE_NAME>
``` ```
The above "Secret is a string" case yaml tells Dapr to extract a connection string named `asbNsConnstring` from the defined `secretStore` and assign the value to the `connectionString` field in the component since there is no key embedded in the "secret" from the `secretStore` because it is a plain string. This requires the secret `name` and secret `key` to be identical. The above "Secret is a string" case yaml tells Dapr to extract a connection string named `asbNsConnstring` from the defined `secretStore` and assign the value to the `connectionString` field in the component since there is no key embedded in the "secret" from the `secretStore` because it is a plain string. This requires the secret `name` and secret `key` to be identical.
@ -95,7 +94,7 @@ The following example shows you how to create a Kubernetes secret to hold the co
1. First, create the Kubernetes secret: 1. First, create the Kubernetes secret:
```bash ```bash
kubectl create secret generic eventhubs-secret --from-literal=connectionString=********* kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
``` ```
2. Next, reference the secret in your binding: 2. Next, reference the secret in your binding:

View File

@ -52,7 +52,7 @@ Since you are running Dapr in the same host as the component, verify that this f
### Component discovery and multiplexing ### Component discovery and multiplexing
A pluggable component accessible through a [Unix Domain Socket][UDS] (UDS) can host multiple distinct component APIs . During the components' initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The `my-component` pluggable component in the example above can contain both state store (`state`) and a pub/sub (`pubsub`) component APIs. A pluggable component accessible through a [Unix Domain Socket][UDS] (UDS) can host multiple distinct component APIs. During the components' initial discovery process, Dapr uses reflection to enumerate all the component APIs behind a UDS. The `my-component` pluggable component in the example above can contain both state store (`state`) and a pub/sub (`pubsub`) component APIs.
Typically, a pluggable component implements a single component API for packaging and deployment. However, at the expense of increasing its dependencies and broadening its security attack surface, a pluggable component can have multiple component APIs implemented. This could be done to ease the deployment and monitoring burden. Best practice for isolation, fault tolerance, and security is a single component API implementation for each pluggable component. Typically, a pluggable component implements a single component API for packaging and deployment. However, at the expense of increasing its dependencies and broadening its security attack surface, a pluggable component can have multiple component APIs implemented. This could be done to ease the deployment and monitoring burden. Best practice for isolation, fault tolerance, and security is a single component API implementation for each pluggable component.
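
As an illustrative sketch (the component names below are assumptions, not part of the example above), each component API served from the same `my-component` socket is still registered as its own Dapr component, with the building block prefix selecting which API to use:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-production-state-store
spec:
  type: state.my-component    # state store API exposed by the my-component socket
  version: v1
  metadata: []
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-production-pubsub
spec:
  type: pubsub.my-component   # pub/sub API exposed by the same socket
  version: v1
  metadata: []
```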

View File

@ -23,12 +23,12 @@ The table below shows which resources are deployed to which namespaces:
| Resource | namespace-a | namespace-b | | Resource | namespace-a | namespace-b |
|------------------------ |-------------|-------------| |------------------------ |-------------|-------------|
| Redis master | X | | | Redis master | ✅ | ❌ |
| Redis replicas | X | | | Redis replicas | ✅ | ❌ |
| Dapr's PubSub component | X | X | | Dapr's PubSub component | ✅ | ✅ |
| Node subscriber | X | | | Node subscriber | ✅ | ❌ |
| Python subscriber | X | | | Python subscriber | ✅ | ❌ |
| React UI publisher | | X | | React UI publisher | ❌ | ✅ |
{{% alert title="Note" color="primary" %}} {{% alert title="Note" color="primary" %}}
All pub/sub components support limiting pub/sub topics to specific applications using [namespace or component scopes]({{< ref pubsub-scopes.md >}}). All pub/sub components support limiting pub/sub topics to specific applications using [namespace or component scopes]({{< ref pubsub-scopes.md >}}).

View File

@ -118,7 +118,7 @@ The following table lists the properties for metrics:
|--------------|--------|-------------| |--------------|--------|-------------|
| `enabled` | boolean | When set to true, the default, enables metrics collection and the metrics endpoint. | | `enabled` | boolean | When set to true, the default, enables metrics collection and the metrics endpoint. |
| `rules` | array | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. | | `rules` | array | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. |
| `http.increasedCardinality` | boolean | When set to true, in the Dapr HTTP server each request path causes the creation of a new "bucket" of metrics. This can cause issues, including excessive memory consumption, when there many different requested endpoints (such as when interacting with RESTful APIs).<br>In Dapr 1.13 the default value is `true` (to preserve the behavior of Dapr <= 1.12), but will change to `false` in Dapr 1.14. | | `http.increasedCardinality` | boolean | When set to true, in the Dapr HTTP server each request path causes the creation of a new "bucket" of metrics. This can cause issues, including excessive memory consumption, when there are many different requested endpoints (such as when interacting with RESTful APIs).<br>In Dapr 1.13 the default value is `true` (to preserve the behavior of Dapr <= 1.12), but changes to `false` in Dapr 1.14. |
To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}) with the HTTP server, you should set the `metrics.http.increasedCardinality` property to `false`. To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}) with the HTTP server, you should set the `metrics.http.increasedCardinality` property to `false`.
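
As a minimal sketch (the Configuration resource name is an assumption), this is set in the Dapr Configuration resource that your application references:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  metrics:
    enabled: true
    http:
      # Opt out of per-path metric buckets to keep cardinality and memory usage low
      increasedCardinality: false
```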

View File

@ -0,0 +1,87 @@
---
type: docs
title: "Deploy Dapr per-node or per-cluster with Dapr Shared"
linkTitle: "Dapr Shared"
weight: 50000
description: "Learn more about using Dapr Shared as an alternative deployment to sidecars"
---
Dapr automatically injects a sidecar to enable the Dapr APIs for your applications, providing the best availability and reliability.
Dapr Shared enables two alternative deployment strategies for creating Dapr applications: a Kubernetes `DaemonSet` for a per-node deployment, or a `Deployment` for a per-cluster deployment.
- **`DaemonSet`:** When running Dapr Shared as a Kubernetes `DaemonSet` resource, the daprd container runs on each Kubernetes node in the cluster. This can reduce network hops between the applications and Dapr.
- **`Deployment`:** When running Dapr Shared as a Kubernetes `Deployment`, the Kubernetes scheduler decides on which single node in the cluster the daprd container instance runs.
{{% alert title="Dapr Shared deployments" color="primary" %}}
For each Dapr application you deploy, you need to deploy the Dapr Shared Helm chart using different `shared.appId`s.
{{% /alert %}}
## Why Dapr Shared?
By default, when Dapr is installed into a Kubernetes cluster, the Dapr control plane injects Dapr as a sidecar to applications annotated with Dapr annotations (`dapr.io/enabled: "true"`). Sidecars offer many advantages, including improved resiliency, since there is an instance per application and all communication between the application and the sidecar happens without involving the network.
<img src="/images/dapr-shared/sidecar.png" width=800 style="padding-bottom:15px;">
While sidecars are Dapr's default deployment, some use cases require other approaches. Let's say you want to decouple the lifecycle of your workloads from the Dapr APIs. A typical example of this is functions, or function-as-a-service runtimes, which might automatically downscale your idle workloads to free up resources. For such cases, keeping the Dapr APIs and all the Dapr async functionalities (such as subscriptions) separate might be required.
Dapr Shared was created for these scenarios, extending the Dapr sidecar model with two new deployment approaches: `DaemonSet` (per-node) and `Deployment` (per-cluster).
{{% alert title="Important" color="primary" %}}
No matter which deployment approach you choose, it is important to understand that in most use cases, you have one instance of Dapr Shared (Helm release) per service (app-id). This means that if you have an application composed of three microservices, it is recommended that each service have its own Dapr Shared instance. You can see this in action by trying the [Hello Kubernetes with Dapr Shared tutorial](https://github.com/dapr/dapr-shared/blob/main/docs/tutorial/README.md).
{{% /alert %}}
### `DaemonSet` (Per-node)
With Kubernetes `DaemonSet`, you can define applications that need to be deployed once per node in the cluster. This enables applications that are running on the same node to communicate with local Dapr APIs, no matter where the Kubernetes `Scheduler` schedules your workload.
<img src="/images/dapr-shared/daemonset.png" width=800 style="padding-bottom:15px;">
{{% alert title="Note" color="primary" %}}
Since `DaemonSet` installs one instance per node, it consumes more resources in your cluster than a per-cluster `Deployment`, with the advantage of improved resiliency.
{{% /alert %}}
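
As a sketch, the per-node strategy can presumably be selected through the same chart flag used in the install command later on this page (the `daemonset` value is an assumption; check the Dapr Shared chart for the supported values):

```bash
helm install my-shared-instance oci://registry-1.docker.io/daprio/dapr-shared-chart \
  --set shared.appId=<DAPR_APP_ID> \
  --set shared.remoteURL=<REMOTE_URL> \
  --set shared.remotePort=<REMOTE_PORT> \
  --set shared.strategy=daemonset
```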
### `Deployment` (Per-cluster)
Kubernetes `Deployments` are installed once per cluster. Based on available resources, the Kubernetes `Scheduler` decides on which node the workload is scheduled. For Dapr Shared, this means that your workload and the Dapr instance might be located on separate nodes, which can introduce considerable network latency, with the trade-off of reduced resource usage.
<img src="/images/dapr-shared/deployment.png" width=800 style="padding-bottom:15px;">
## Getting Started with Dapr Shared
{{% alert title="Prerequisites" color="primary" %}}
Before installing Dapr Shared, make sure you have [Dapr installed in your cluster]({{< ref "kubernetes-deploy.md" >}}).
{{% /alert %}}
If you want to get started with Dapr Shared, you can create a new Dapr Shared instance by installing the official Helm Chart:
```bash
helm install my-shared-instance oci://registry-1.docker.io/daprio/dapr-shared-chart --set shared.appId=<DAPR_APP_ID> --set shared.remoteURL=<REMOTE_URL> --set shared.remotePort=<REMOTE_PORT> --set shared.strategy=deployment
```
Your Dapr-enabled applications can now make use of the Dapr Shared instance by pointing the Dapr SDKs at, or sending requests to, the `my-shared-instance-dapr` Kubernetes service exposed by the Dapr Shared instance.
> The `my-shared-instance` above is the Helm Chart release name.
If you are using the Dapr SDKs, you can set the following environment variables for your application to connect to the Dapr Shared instance (in this case, running on the `default` namespace):
```yaml
env:
- name: DAPR_HTTP_ENDPOINT
value: http://my-shared-instance-dapr.default.svc.cluster.local:3500
- name: DAPR_GRPC_ENDPOINT
value: http://my-shared-instance-dapr.default.svc.cluster.local:50001
```
If you are not using the SDKs, you can send HTTP or gRPC requests to those endpoints.
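
For example, a plain HTTP call against the shared instance might look like the following sketch, which assumes a state store component named `statestore` is configured:

```bash
# Save a key/value pair through the Dapr Shared instance
curl -X POST http://my-shared-instance-dapr.default.svc.cluster.local:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order_1", "value": "42" }]'

# Read it back
curl http://my-shared-instance-dapr.default.svc.cluster.local:3500/v1.0/state/statestore/order_1
```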
## Next steps
- Try the [Hello Kubernetes tutorial with Dapr Shared](https://github.com/dapr/dapr-shared/blob/main/docs/tutorial/README.md).
- Read more in the [Dapr Shared repo](https://github.com/dapr/dapr-shared/blob/main/README.md).

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Deploy to hybrid Linux/Windows Kubernetes clusters" title: "Deploy to hybrid Linux/Windows Kubernetes clusters"
linkTitle: "Hybrid clusters" linkTitle: "Hybrid clusters"
weight: 60000 weight: 70000
description: "How to run Dapr apps on Kubernetes clusters with Windows nodes" description: "How to run Dapr apps on Kubernetes clusters with Windows nodes"
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Running Dapr with a Kubernetes Job" title: "Running Dapr with a Kubernetes Job"
linkTitle: "Kubernetes Jobs" linkTitle: "Kubernetes Jobs"
weight: 70000 weight: 80000
description: "Use Dapr API in a Kubernetes Job context" description: "Use Dapr API in a Kubernetes Job context"
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "How-to: Mount Pod volumes to the Dapr sidecar" title: "How-to: Mount Pod volumes to the Dapr sidecar"
linkTitle: "How-to: Mount Pod volumes" linkTitle: "How-to: Mount Pod volumes"
weight: 80000 weight: 90000
description: "Configure the Dapr sidecar to mount Pod Volumes" description: "Configure the Dapr sidecar to mount Pod Volumes"
--- ---

View File

@ -20,7 +20,7 @@ spec:
tracing: tracing:
samplingRate: "1" samplingRate: "1"
otel: otel:
endpointAddress: "https://..." endpointAddress: "myendpoint.cluster.local:4317"
zipkin: zipkin:
endpointAddress: "https://..." endpointAddress: "https://..."
@ -32,10 +32,10 @@ The following table lists the properties for tracing:
|--------------|--------|-------------| |--------------|--------|-------------|
| `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. | `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled.
| `stdout` | bool | When set to true, writes more verbose information to the traces | `stdout` | bool | When set to true, writes more verbose information to the traces
| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address. | `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) target hostname and optionally port. If this is used, you do not need to specify the `zipkin` section.
| `otel.isSecure` | bool | Is the connection to the endpoint address encrypted. | `otel.isSecure` | bool | Is the connection to the endpoint address encrypted.
| `otel.protocol` | string | Set to `http` or `grpc` protocol. | `otel.protocol` | string | Set to `http` or `grpc` protocol.
| `zipkin.endpointAddress` | string | Set the Zipkin server address. If this is used, you do not need to specify the `otel` section. | `zipkin.endpointAddress` | string | Set the Zipkin server URL. If this is used, you do not need to specify the `otel` section.
To enable tracing, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (every span is sampled), and sends trace using OTEL protocol to the OTEL server at localhost:4317 To enable tracing, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (every span is sampled), and sends trace using OTEL protocol to the OTEL server at localhost:4317
@ -66,7 +66,7 @@ turns on tracing for the sidecar.
| Environment Variable | Description | | Environment Variable | Description |
|----------------------|-------------| |----------------------|-------------|
| `OTEL_EXPORTER_OTLP_ENDPOINT` | Sets the Open Telemetry (OTEL) server address, turns on tracing | | `OTEL_EXPORTER_OTLP_ENDPOINT` | Sets the Open Telemetry (OTEL) server hostname and optionally port, turns on tracing |
| `OTEL_EXPORTER_OTLP_INSECURE` | Sets the connection to the endpoint as unencrypted (true/false) | | `OTEL_EXPORTER_OTLP_INSECURE` | Sets the connection to the endpoint as unencrypted (true/false) |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | Transport protocol (`grpc`, `http/protobuf`, `http/json`) | | `OTEL_EXPORTER_OTLP_PROTOCOL` | Transport protocol (`grpc`, `http/protobuf`, `http/json`) |
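
For instance, in self-hosted mode you might export these variables before launching an application with `dapr run`; the collector address below is an assumption, so replace it with your own endpoint:

```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="localhost:4317"  # collector hostname and port
export OTEL_EXPORTER_OTLP_INSECURE="true"            # unencrypted connection
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"            # transport protocol
dapr run --app-id myapp -- python3 app.py
```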

View File

@ -45,6 +45,9 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes | | Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------| |--------------------|:--------:|:--------|---------|---------|---------|------------|
| June 28th 2024 | 1.13.5</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.5 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.5) |
| May 29th 2024 | 1.13.4</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
| April 3rd 2024 | 1.13.2</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) | | April 3rd 2024 | 1.13.2</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.2) |
| March 26th 2024 | 1.13.1</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.1) | | March 26th 2024 | 1.13.1</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.1) |
| March 6th 2024 | 1.13.0</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.0) | | March 6th 2024 | 1.13.0</br> | 1.13.0 | Java 1.11.0 </br>Go 1.10.0 </br>PHP 1.2.0 </br>Python 1.13.0 </br>.NET 1.13.0 </br>JS 3.3.0 | 0.14.0 | Supported (current) | [v1.13.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.0) |
@ -135,7 +138,7 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| 1.10.0 | N/A | 1.10.8 | | 1.10.0 | N/A | 1.10.8 |
| 1.11.0 | N/A | 1.11.4 | | 1.11.0 | N/A | 1.11.4 |
| 1.12.0 | N/A | 1.12.4 | | 1.12.0 | N/A | 1.12.4 |
| 1.13.0 | N/A | 1.13.2 | | 1.13.0 | N/A | 1.13.5 |
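
As a rough sketch of applying the latest patch on Kubernetes (the release name, namespace, and chart version are assumptions; follow the upgrade guides linked above for the authoritative steps):

```bash
# Upgrade the control plane with the Dapr CLI
dapr upgrade -k --runtime-version 1.13.5

# Or with Helm
helm repo update
helm upgrade dapr dapr/dapr --version 1.13.5 --namespace dapr-system --wait
```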
## Upgrade on Hosting platforms ## Upgrade on Hosting platforms

View File

@ -3,7 +3,7 @@ type: docs
title: "Actors API reference" title: "Actors API reference"
linkTitle: "Actors API" linkTitle: "Actors API"
description: "Detailed documentation on the actors API" description: "Detailed documentation on the actors API"
weight: 500 weight: 600
--- ---
Dapr provides native, cross-platform, and cross-language virtual actor capabilities. Dapr provides native, cross-platform, and cross-language virtual actor capabilities.
@ -114,7 +114,7 @@ Parameter | Description
#### Examples #### Examples
> Note, the following example uses the `ttlInSeconds` field, which requires the [`ActorStateTTL` feature enabled]]({{< ref "support-preview-features.md" >}}). > Note, the following example uses the `ttlInSeconds` field, which requires the [`ActorStateTTL` feature enabled]({{< ref "support-preview-features.md" >}}).
```shell ```shell
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/state \ curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/state \
@ -202,6 +202,8 @@ A JSON object with the following fields:
|-------|--------------| |-------|--------------|
| `dueTime` | Specifies the time after which the reminder is invoked. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) | `dueTime` | Specifies the time after which the reminder is invoked. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration)
| `period` | Specifies the period between different invocations. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) or ISO 8601 duration format with optional recurrence. | `period` | Specifies the period between different invocations. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) or ISO 8601 duration format with optional recurrence.
| `ttl` | Sets the time at which, or the interval after which, the timer or reminder expires and is deleted. Its format should be [time.ParseDuration format](https://pkg.go.dev/time#ParseDuration), RFC3339 date format, or ISO 8601 duration format.
| `data` | A string value that can hold any related content. The content is returned when the reminder expires. For example, this may be useful for returning a URL or anything related to the content.
`period` field supports `time.Duration` format and ISO 8601 format with some limitations. For `period`, only duration format of ISO 8601 duration `Rn/PnYnMnWnDTnHnMnS` is supported. `Rn/` specifies that the reminder will be invoked `n` number of times. `period` field supports `time.Duration` format and ISO 8601 format with some limitations. For `period`, only duration format of ISO 8601 duration `Rn/PnYnMnWnDTnHnMnS` is supported. `Rn/` specifies that the reminder will be invoked `n` number of times.
@ -210,7 +212,11 @@ A JSON object with the following fields:
If `Rn/` is not specified, the reminder will run an infinite number of times until deleted. If `Rn/` is not specified, the reminder will run an infinite number of times until deleted.
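
For example, a reminder that fires 5 times, once every 10 seconds and starting 10 seconds after registration, could be registered with a body like this sketch:

```json
{
  "dueTime": "10s",
  "period": "R5/PT10S"
}
```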
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds. If only `ttl` and `dueTime` are set, the reminder will be accepted. However, only the `dueTime` takes effect. For example, the reminder triggers at `dueTime`, and `ttl` is ignored.
If `ttl`, `dueTime`, and `period` are set, the reminder first fires at `dueTime`, then repeatedly fires according to `period` until it expires according to `ttl`.
The following example specifies a `dueTime` of 3 seconds and a period of 7 seconds.
```json ```json
{ {
@ -237,6 +243,25 @@ To configure the reminder to fire only once, the period should be set to empty s
} }
``` ```
When you specify the repetition number in both `period` and `ttl`, the timer/reminder is stopped when either condition is met. The following example has a timer with a `period` of 3 seconds (in ISO 8601 duration format) and a `ttl` of 20 seconds. This timer fires immediately after registration, then every 3 seconds after that for the duration of 20 seconds, after which it never fires again, since the `ttl` was met.
```json
{
"period":"PT3S",
"ttl":"20s"
}
```
The `data` field can carry any string content, which is returned with the reminder. The following example registers a reminder with a `dueTime` of 1 minute, a `period` of 20 seconds, and a `data` payload:
```json
{
"data": "someData",
"dueTime": "1m",
"period": "20s"
}
```
#### HTTP Response Codes #### HTTP Response Codes
Code | Description Code | Description

View File

@ -3,7 +3,7 @@ type: docs
title: "Bindings API reference" title: "Bindings API reference"
linkTitle: "Bindings API" linkTitle: "Bindings API"
description: "Detailed documentation on the bindings API" description: "Detailed documentation on the bindings API"
weight: 400 weight: 500
--- ---
Dapr provides bi-directional binding capabilities for applications and a consistent approach to interacting with different cloud/on-premise services or systems. Dapr provides bi-directional binding capabilities for applications and a consistent approach to interacting with different cloud/on-premise services or systems.

View File

@ -3,7 +3,7 @@ type: docs
title: "Configuration API reference" title: "Configuration API reference"
linkTitle: "Configuration API" linkTitle: "Configuration API"
description: "Detailed documentation on the configuration API" description: "Detailed documentation on the configuration API"
weight: 700 weight: 800
--- ---
## Get Configuration ## Get Configuration

View File

@ -3,7 +3,7 @@ type: docs
title: "Distributed Lock API reference" title: "Distributed Lock API reference"
linkTitle: "Distributed Lock API" linkTitle: "Distributed Lock API"
description: "Detailed documentation on the distributed lock API" description: "Detailed documentation on the distributed lock API"
weight: 800 weight: 900
--- ---
## Lock ## Lock

View File

@ -3,7 +3,7 @@ type: docs
title: "Pub/sub API reference" title: "Pub/sub API reference"
linkTitle: "Pub/Sub API" linkTitle: "Pub/Sub API"
description: "Detailed documentation on the pub/sub API" description: "Detailed documentation on the pub/sub API"
weight: 300 weight: 200
--- ---
## Publish a message to a given topic ## Publish a message to a given topic
@ -177,7 +177,15 @@ Example:
{ {
"pubsubname": "pubsub", "pubsubname": "pubsub",
"topic": "newOrder", "topic": "newOrder",
"route": "/orders", "routes": {
"rules": [
{
"match": "event.type == order",
"path": "/orders"
}
],
"default": "/otherorders"
},
"metadata": { "metadata": {
"rawPayload": "true" "rawPayload": "true"
} }
@ -197,7 +205,7 @@ Parameter | Description
### Provide route(s) for Dapr to deliver topic events ### Provide route(s) for Dapr to deliver topic events
In order to deliver topic events, a `POST` call will be made to user code with the route specified in the subscription response. In order to deliver topic events, a `POST` call will be made to user code with the route specified in the subscription response. Under `routes`, you can provide [rules that route messages matching a certain condition to a specific path]({{< ref "howto-route-messages.md" >}}) when a message arrives on the topic. You can also provide a default route for messages that do not match any rule.
The following example illustrates this point, considering a subscription for topic `newOrder` with route `orders` on port 3000: `POST http://localhost:3000/orders` The following example illustrates this point, considering a subscription for topic `newOrder` with route `orders` on port 3000: `POST http://localhost:3000/orders`

View File

@ -3,7 +3,7 @@ type: docs
title: "Secrets API reference" title: "Secrets API reference"
linkTitle: "Secrets API" linkTitle: "Secrets API"
description: "Detailed documentation on the secrets API" description: "Detailed documentation on the secrets API"
weight: 600 weight: 700
--- ---
## Get Secret ## Get Secret

View File

@ -3,7 +3,7 @@ type: docs
title: "State management API reference" title: "State management API reference"
linkTitle: "State management API" linkTitle: "State management API"
description: "Detailed documentation on the state management API" description: "Detailed documentation on the state management API"
weight: 200 weight: 400
--- ---
## Component file ## Component file

View File

@ -3,7 +3,7 @@ type: docs
title: "Workflow API reference" title: "Workflow API reference"
linkTitle: "Workflow API" linkTitle: "Workflow API"
description: "Detailed documentation on the workflow API" description: "Detailed documentation on the workflow API"
weight: 900 weight: 300
--- ---
{{% alert title="Note" color="primary" %}} {{% alert title="Note" color="primary" %}}

View File

@ -71,10 +71,10 @@ dapr init -s
**Specify a runtime version** **Specify a runtime version**
You can also specify a specific runtime version. Be default, the latest version is used. You can also specify a specific runtime version. By default, the latest version is used.
```bash ```bash
dapr init --runtime-version 1.13.0 dapr init --runtime-version 1.13.4
``` ```
**Install with image variant** **Install with image variant**

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Bindings component specs" title: "Bindings component specs"
linkTitle: "Bindings" linkTitle: "Bindings"
weight: 3000 weight: 4000
description: The supported external bindings that interface with Dapr description: The supported external bindings that interface with Dapr
aliases: aliases:
- "/operations/components/setup-bindings/supported-bindings/" - "/operations/components/setup-bindings/supported-bindings/"

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Configuration store component specs" title: "Configuration store component specs"
linkTitle: "Configuration stores" linkTitle: "Configuration stores"
weight: 5000 weight: 6000
description: The supported configuration stores that interface with Dapr description: The supported configuration stores that interface with Dapr
aliases: aliases:
- "/operations/components/setup-configuration-store/supported-configuration-stores/" - "/operations/components/setup-configuration-store/supported-configuration-stores/"

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Cryptography component specs" title: "Cryptography component specs"
linkTitle: "Cryptography" linkTitle: "Cryptography"
weight: 7000 weight: 8000
description: The supported cryptography components that interface with Dapr description: The supported cryptography components that interface with Dapr
no_list: true no_list: true
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Lock component specs" title: "Lock component specs"
linkTitle: "Locks" linkTitle: "Locks"
weight: 6000 weight: 7000
description: The supported locks that interface with Dapr description: The supported locks that interface with Dapr
no_list: true no_list: true
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Middleware component specs" title: "Middleware component specs"
linkTitle: "Middleware" linkTitle: "Middleware"
weight: 9000 weight: 10000
description: List of all the supported middleware components that can be injected in Dapr's processing pipeline. description: List of all the supported middleware components that can be injected in Dapr's processing pipeline.
no_list: true no_list: true
aliases: aliases:

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Name resolution provider component specs" title: "Name resolution provider component specs"
linkTitle: "Name resolution" linkTitle: "Name resolution"
weight: 8000 weight: 9000
description: The supported name resolution providers to enable Dapr service invocation description: The supported name resolution providers to enable Dapr service invocation
no_list: true no_list: true
--- ---

View File

@ -2,7 +2,7 @@
type: docs type: docs
title: "Pub/sub brokers component specs" title: "Pub/sub brokers component specs"
linkTitle: "Pub/sub brokers" linkTitle: "Pub/sub brokers"
weight: 2000 weight: 1000
description: The supported pub/sub brokers that interface with Dapr description: The supported pub/sub brokers that interface with Dapr
aliases: aliases:
- "/operations/components/setup-pubsub/supported-pubsub/" - "/operations/components/setup-pubsub/supported-pubsub/"

View File

@ -70,7 +70,7 @@ spec:
|--------------------|:--------:|---------|---------| |--------------------|:--------:|---------|---------|
| brokers | Y | A comma-separated list of Kafka brokers. | `"localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"` | brokers | Y | A comma-separated list of Kafka brokers. | `"localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"`
| consumerGroup | N | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored - a combination of the consumer group and a random unique identifier will be set for the `consumerID` instead. | `"group1"` | consumerGroup | N | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored - a combination of the consumer group and a random unique identifier will be set for the `consumerID` instead. | `"group1"`
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored - a combination of the consumer group and a random unique identifier will be set for the `consumerID` instead. | `"channel1"` | consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored - a combination of the consumer group and a random unique identifier will be set for the `consumerID` instead. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to `"namespace.appID"` for Kubernetes mode or `"appID"` for Self-Hosted mode. | `"my-namespace.my-dapr-app"`, `"my-dapr-app"` | clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to `"namespace.appID"` for Kubernetes mode or `"appID"` for Self-Hosted mode. | `"my-namespace.my-dapr-app"`, `"my-dapr-app"`
| authRequired | N | *Deprecated* Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"` | authRequired | N | *Deprecated* Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"`
| authType | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, `oidc` or `awsiam` | `"password"`, `"none"` | authType | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, `oidc` or `awsiam` | `"password"`, `"none"`

View File

@ -83,7 +83,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec
| accessKey | Y | ID of the AWS account/role with appropriate permissions to SNS and SQS (see below) | `"AKIAIOSFODNN7EXAMPLE"` | accessKey | Y | ID of the AWS account/role with appropriate permissions to SNS and SQS (see below) | `"AKIAIOSFODNN7EXAMPLE"`
| secretKey | Y | Secret for the AWS user/role. If using an `AssumeRole` access, you will also need to provide a `sessionToken` |`"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | secretKey | Y | Secret for the AWS user/role. If using an `AssumeRole` access, you will also need to provide a `sessionToken` |`"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"`
| region | Y | The AWS region where the SNS/SQS assets are located or be created in. See [this page](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?p=ugi&l=na) for valid regions. Ensure that SNS and SQS are available in that region | `"us-east-1"` | region | Y | The AWS region where the SNS/SQS assets are located or be created in. See [this page](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/?p=ugi&l=na) for valid regions. Ensure that SNS and SQS are available in that region | `"us-east-1"`
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. See the [pub/sub broker component file]({{< ref setup-pubsub.md >}}) to learn how ConsumerID is automatically generated. | `"channel1"` | consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. See the [pub/sub broker component file]({{< ref setup-pubsub.md >}}) to learn how ConsumerID is automatically generated. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| endpoint | N | AWS endpoint for the component to use. Only used for local development with, for example, [localstack](https://github.com/localstack/localstack). The `endpoint` is unnecessary when running against production AWS | `"http://localhost:4566"` | endpoint | N | AWS endpoint for the component to use. Only used for local development with, for example, [localstack](https://github.com/localstack/localstack). The `endpoint` is unnecessary when running against production AWS | `"http://localhost:4566"`
| sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials | `"TOKEN"` | sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials | `"TOKEN"`
| messageReceiveLimit | N | Number of times a message is received, after processing of that message fails, that once reached, results in removing of that message from the queue. If `sqsDeadLettersQueueName` is specified, `messageReceiveLimit` is the number of times a message is received, after processing of that message fails, that once reached, results in moving of the message to the SQS dead-letters queue. Default: `10` | `10` | messageReceiveLimit | N | Number of times a message is received, after processing of that message fails, that once reached, results in removing of that message from the queue. If `sqsDeadLettersQueueName` is specified, `messageReceiveLimit` is the number of times a message is received, after processing of that message fails, that once reached, results in moving of the message to the SQS dead-letters queue. Default: `10` | `10`

View File

@ -64,7 +64,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|--------------------|:--------:|---------|---------| |--------------------|:--------:|---------|---------|
| `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutally exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"` | `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutally exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutally exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"` | `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutally exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | `"channel1"` | `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `storageAccountName` | Y | Storage account name to use for the checkpoint store. |`"myeventhubstorage"` | `storageAccountName` | Y | Storage account name to use for the checkpoint store. |`"myeventhubstorage"`
| `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"` | `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"` | `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`

View File

@ -71,7 +71,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example | | Field | Required | Details | Example |
|--------------------|:--------:|---------|---------| |--------------------|:--------:|---------|---------|
| `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication. | See example above | `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication. | See example above
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | `"channel1"` | `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | `"namespace.servicebus.windows.net"` | | `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | `"namespace.servicebus.windows.net"` |
| `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30` | `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30`
| `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30` | `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30`
@ -181,6 +181,23 @@ When subscribing to a topic, you can configure `bulkSubscribe` options. Refer to
Follow the instructions [here](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-portal) on setting up Azure Service Bus Queues. Follow the instructions [here](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-portal) on setting up Azure Service Bus Queues.
{{% alert title="Note" color="primary" %}}
Your queue must have the same name as the topic you are publishing to with Dapr. For example, if you are publishing to the pub/sub `"myPubsub"` on the topic `"orders"`, your queue must be named `"orders"`.
If you are using a shared access policy to connect to the queue, that policy must be able to "manage" the queue. To work with a dead-letter queue, the policy must live on the Service Bus Namespace that contains both the main queue and the dead-letter queue.
{{% /alert %}}
### Retry policy and dead-letter queues
By default, an Azure Service Bus Queue has a dead-letter queue. Messages are retried up to the `maxDeliveryCount` value, which defaults to 10 but can be set as high as 2000. These retries happen very rapidly, and the message is put in the dead-letter queue if no success is returned.
Dapr Pub/sub offers its own dead-letter queue concept that lets you control the retry policy and subscribe to the dead-letter queue through Dapr.
1. Set up a separate queue as the dead-letter queue in the Azure Service Bus namespace, and a resiliency policy that defines how to retry.
1. Subscribe to the topic to get the failed messages and deal with them.
For example, setting up a dead-letter queue `orders-dlq` in the subscription and a resiliency policy lets you subscribe to the topic `orders-dlq` to handle failed messages.
For more details on setting up dead-letter queues, see the [dead-letter article]({{< ref pubsub-deadletter >}}).
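As a rough sketch of that setup, the declarative subscription below routes failed deliveries from the `orders` topic to `orders-dlq`, and the resiliency policy retries the inbound handler before giving up. The names (`myPubsub`, `orders-subscription`, the `/orders` route) and the retry settings are illustrative, not prescriptive:

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: orders-subscription
spec:
  pubsubname: myPubsub
  topic: orders
  deadLetterTopic: orders-dlq   # failed messages are forwarded here
  routes:
    default: /orders
---
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: pubsub-resiliency
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 3
  targets:
    components:
      myPubsub:
        inbound:
          retry: pubsubRetry
```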
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

@ -30,7 +30,7 @@ spec:
- name: connectionString
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
# - name: consumerID # Optional: defaults to the app's own ID
# value: channel1
# - name: timeoutInSec # Optional
# value: 60
# - name: handlerTimeoutInSec # Optional
@ -75,7 +75,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|--------------------|:--------:|---------|---------|
| `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Microsoft Entra ID authentication. | See example above
| `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | `"namespace.servicebus.windows.net"` |
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30`
| `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30`
| `lockRenewalInSec` | N | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`

@ -72,7 +72,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|--------------------|:--------:|---------|---------|
| projectId | Y | GCP project id| `myproject-123`
| endpoint | N | GCP endpoint for the component to use. Only used for local development (for example) with [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"`
| `consumerID` | N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID | Can be set to a string value (such as `"channel1"`) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"`
| privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"`
| privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`

@ -34,7 +34,7 @@ spec:
|-------------------|:--------:|-----------------------------------------------------------------------------------------------------------------------------|----------------------------------------|
| address | Y | Address of the KubeMQ server | `"localhost:50000"` |
| store | N | type of pubsub, true: pubsub persisted (EventsStore), false: pubsub in-memory (Events) | `true` or `false` (default is `false`) |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| clientID | N | Name for client id connection | `sub-client-12345` |
| authToken | N | Auth JWT token for connection. Check out [KubeMQ Authentication](https://docs.kubemq.io/learn/access-control/authentication) | `ew...` |
| group | N | Subscriber group for load balancing | `g1` |

@ -41,7 +41,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| url | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
| consumerID | N | The client ID used to connect to the MQTT broker for the consumer connection. Defaults to the Dapr app ID.<br>Note: if `producerID` is not set, `-consumer` is appended to this value for the consumer connection | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| producerID | N | The client ID used to connect to the MQTT broker for the producer connection. Defaults to `{consumerID}-producer`. | `"myMqttProducerApp"`
| qos | N | Indicates the Quality of Service Level (QoS) of the message ([more info](https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/)). Defaults to `1`. |`0`, `1`, `2`
| retain | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`

@ -43,7 +43,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `url` | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
| `consumerID` | N | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| `retain` | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
| `cleanSession` | N | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"` ([more info](http://www.steves-internet-guide.com/mqtt-clean-sessions-example/)). Defaults to `"false"`. | `"true"`, `"false"`
| `caCert` | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below

@ -74,7 +74,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec
| host | Y | Address of the Pulsar broker. Default is `"localhost:6650"` | `"localhost:6650"` OR `"http://pulsar-pj54qwwdpz4b-pulsar.ap-sg.public.pulsar.com:8080"`|
| enableTLS | N | Enable TLS. Default: `"false"` | `"true"`, `"false"` |
| tenant | N | The topic tenant within the instance. Tenants are essential to multi-tenancy in Pulsar, and spread across clusters. Default: `"public"` | `"public"` |
| consumerID | N | Used to set the subscription name or consumer ID. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: `"default"` | `"default"`
| persistent | N | Pulsar supports two kinds of topics: [persistent](https://pulsar.apache.org/docs/en/concepts-architecture-overview#persistent-storage) and [non-persistent](https://pulsar.apache.org/docs/en/concepts-messaging/#non-persistent-topics). With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
| disableBatching | N | Disable batching. When batching is enabled, the default batch delay is 10 ms and the default batch size is 1000 messages. Setting `disableBatching: true` makes the producer send messages individually. Default: `"false"` | `"true"`, `"false"`|

@ -31,7 +31,7 @@ spec:
- name: password
value: password
- name: consumerID
value: channel1
- name: durable
value: false
- name: deletedWhenUnused
@ -81,7 +81,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| hostname | N* | The RabbitMQ hostname. *Mutually exclusive with connectionString field | `localhost` |
| username | N* | The RabbitMQ username. *Mutually exclusive with connectionString field | `username` |
| password | N* | The RabbitMQ password. *Mutually exclusive with connectionString field | `password` |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| durable | N | Whether or not to use [durable](https://www.rabbitmq.com/queues.html#durability) queues. Defaults to `"false"` | `"true"`, `"false"`
| deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html). Defaults to `"true"` | `"true"`, `"false"`
| autoAck | N | Whether or not the queue consumer should [auto-ack](https://www.rabbitmq.com/confirms.html) messages. Defaults to `"false"` | `"true"`, `"false"`

@ -25,7 +25,7 @@ spec:
- name: redisPassword
value: "KeFg23!"
- name: consumerID
value: "channel1"
- name: enableTLS
value: "false"
```
@ -41,7 +41,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisHost | Y | Connection-string for the redis host. If `"redisType"` is `"cluster"` it can be multiple hosts separated by commas or just a single host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
| redisPassword | Y | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. | `""`, `"default"`
| consumerID | N | The consumer group ID. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| redeliverInterval | N | The interval between checking for pending messages to redeliver. Can be either a Go duration string (for example, "ms", "s", "m") or a number of milliseconds. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"`
| processingTimeout | N | The amount of time that a message must be pending before attempting to redeliver it. Can be either a Go duration string (for example, "ms", "s", "m") or a number of milliseconds. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"`

@ -26,7 +26,7 @@ spec:
- name: producerGroup
value: dapr-rocketmq-test-g-p
- name: consumerID
value: channel1
- name: nameSpace
value: dapr-test
- name: nameServer
@ -49,7 +49,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| instanceName | N | Instance name | `time.Now().String()` | `dapr-rocketmq-test` |
| consumerGroup | N | Consumer group name. Recommended. If `producerGroup` is `null`, `groupName` is used. | | `dapr-rocketmq-test-g-c ` |
| producerGroup (consumerID) | N | Producer group name. Recommended. If `producerGroup` is `null`, `consumerID` is used. If `consumerID` is also null, `groupName` is used. | | `dapr-rocketmq-test-g-p` |
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| groupName | N | Consumer/Producer group name. **Deprecated**. | | `dapr-rocketmq-test-g` |
| nameSpace | N | RocketMQ namespace | | `dapr-rocketmq` |
| nameServerDomain | N | RocketMQ name server domain | | `https://my-app.net:8080/nsaddr` |

@ -41,7 +41,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| url | Y | Address of the AMQP broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`amqp://`** URI scheme for non-TLS communication. <br> Use the **`amqps://`** URI scheme for TLS communication. | `"amqp://host.domain[:port]"`
| username | Y | The username to connect to the broker. Only required if anonymous is not specified or set to `false`. | `default`
| password | Y | The password to connect to the broker. Only required if anonymous is not specified or set to `false`. | `default`
| consumerID | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. | Can be set to a string value (such as `"channel1"` in the example above) or a string format value (such as `"{podName}"`, etc.). [See all the template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| anonymous | N | To connect to the broker without credential validation. Only works if enabled on the broker. A username and password would not be required if this is set to `true`. | `true`
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`

@ -2,7 +2,7 @@
type: docs
title: "Secret store component specs"
linkTitle: "Secret stores"
weight: 5000
description: The supported secret stores that interface with Dapr
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/"

@ -3,7 +3,7 @@ type: docs
title: "State store component specs"
linkTitle: "State stores"
description: "The supported state stores that interface with Dapr"
weight: 4000
aliases:
- "/operations/components/setup-state-store/supported-state-stores/"
no_list: true

@ -175,7 +175,9 @@ az cosmosdb sql role assignment create \
--role-definition-id "$ROLE_ID"
```
## Optimizations
### Optimizing Cosmos DB for bulk operation write performance
If you are building a system that only ever reads data from Cosmos DB via key (`id`), which is the default Dapr behavior when using the state management API or actors, there are ways you can optimize Cosmos DB for improved write speeds. This is done by excluding all paths from indexing. By default, Cosmos DB indexes all fields inside of a document. On systems that are write-heavy and run little-to-no queries on values within a document, this indexing policy slows down the time it takes to write or update a document in Cosmos DB. This is exacerbated in high-volume systems.
@ -211,6 +213,18 @@ This optimization comes at the cost of queries against fields inside of document
{{% /alert %}}
### Optimizing Cosmos DB for cost savings
If you intend to use Cosmos DB only as a key-value store, it may be in your interest to consider converting your state object to JSON and compressing it before persisting it to state, and subsequently decompressing it when reading it out of state. This is because Cosmos DB bills your usage based on the maximum number of RU/s used in a given time period (typically each hour). Furthermore, RU usage is calculated as 1 RU per 1 KB of data you read or write. Compression helps by reducing the size of the data stored in Cosmos DB and subsequently reducing RU usage.
These savings are particularly significant for Dapr actors. While the Dapr State Management API does a base64 encoding of your object before saving, Dapr actor state is saved as raw, formatted JSON. This means multiple lines with indentations for formatting. Compressing can significantly reduce the size of actor state objects. For example, if you have an actor state object that is 75KB in size when the actor is hydrated, you will use 75 RU/s to read that object out of state. If you then modify the state object and it grows to 100KB, you will use 100 RU/s to write that object to Cosmos DB, totalling 175 RU/s for the I/O operation. If your actors are concurrently handling 1000 requests per second, you will need at least 175,000 RU/s to meet that load. With effective compression, the size reduction can be in the region of 90%, which means you will only need in the region of 17,500 RU/s to meet the load.
{{% alert title="Note" color="primary" %}}
This particular optimization only makes sense if you are saving large objects to state. The performance and memory tradeoff for performing the compression and decompression on either end needs to make sense for your use case. Furthermore, once the data is saved to state, it is not human readable, nor is it queryable. You should only adopt this optimization if you are saving large state objects as key-value pairs.
{{% /alert %}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

@ -119,7 +119,6 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| ttlInSeconds | N | Allows specifying a default Time-to-live (TTL) in seconds that will be applied to every state store request unless TTL is explicitly defined via the [request metadata]({{< ref "state-store-ttl.md" >}}). | `600`
| queryIndexes | N | Indexing schemas for querying JSON objects | see [Querying JSON objects](#querying-json-objects)
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
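For reference, a minimal sketch of a Redis state store component with `actorStateStore` enabled; the component name, host, and empty password are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  # Mark this state store as usable for actor state
  - name: actorStateStore
    value: "true"
```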

@ -2,7 +2,7 @@
type: docs
title: "Workflow backend component specs"
linkTitle: "Workflow backend"
weight: 2000
description: The supported workflow backends that orchestrate workflows and save workflow state
no_list: true
---

@ -1 +1 @@
{{- if .Get "short" }}1.13{{ else if .Get "long" }}1.13.5{{ else if .Get "cli" }}1.13.0{{ else }}1.13.5{{ end -}}

@ -178,12 +178,12 @@
}
},
"node_modules/braces": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz",
"integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==",
"dev": true,
"dependencies": {
"fill-range": "^7.1.1"
},
"engines": {
"node": ">=8"
@ -400,9 +400,9 @@
}
},
"node_modules/fill-range": {
"version": "7.1.1",
"resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz",
"integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==",
"dev": true,
"dependencies": {
"to-regex-range": "^5.0.1"

@ -17,8 +17,8 @@ data:
zpages:
endpoint: :55679
exporters:
debug:
verbosity: basic
azuremonitor:
endpoint: "https://dc.services.visualstudio.com/v2/track"
instrumentation_key: "<INSTRUMENTATION-KEY>"
@ -33,7 +33,7 @@ data:
pipelines:
traces:
receivers: [zipkin]
exporters: [azuremonitor,debug]
---
apiVersion: v1
kind: Service
@ -71,7 +71,7 @@ spec:
spec:
containers:
- name: otel-collector
image: otel/opentelemetry-collector-contrib:0.101.0
command:
- "/otelcol-contrib"
- "--config=/conf/otel-collector-config.yaml"
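For clarity, here is a consolidated sketch of the exporter wiring implied by this change: the `logging` exporter is replaced by `debug`, and the traces pipeline references it. This assumes the conventional OpenTelemetry Collector layout where `pipelines` sits under `service`; other sections of the config are elided:

```yaml
exporters:
  # Replaces the former `logging` exporter
  debug:
    verbosity: basic
  azuremonitor:
    endpoint: "https://dc.services.visualstudio.com/v2/track"
    instrumentation_key: "<INSTRUMENTATION-KEY>"
service:
  pipelines:
    traces:
      receivers: [zipkin]
      exporters: [azuremonitor, debug]
```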

Binary file not shown.

Before

Width:  |  Height:  |  Size: 72 KiB

After

Width:  |  Height:  |  Size: 79 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 45 KiB

After

Width:  |  Height:  |  Size: 40 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 338 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 355 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 78 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 134 KiB

After

Width:  |  Height:  |  Size: 131 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 102 KiB

After

Width:  |  Height:  |  Size: 136 KiB

Some files were not shown because too many files have changed in this diff Show More