Merge with v1.10
Signed-off-by: Shubham Sharma <shubhash@microsoft.com>
|
|
@ -20,12 +20,13 @@ The following are the building blocks provided by Dapr:
|
|||
|
||||
| Building Block | Endpoint | Description |
|
||||
|----------------|----------|-------------|
|
||||
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
|
||||
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
|
||||
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
|
||||
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
|
||||
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.
|
||||
| [**Observability**]({{< ref "observability-concept.md" >}}) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate and monitor Dapr system services, components and user applications.
|
||||
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | `/v1.0/secrets` | Dapr provides a secrets building block API and integrates with secret stores such as public cloud stores, local stores and Kubernetes to store the secrets. Services can call the secrets API to retrieve secrets, for example to get a connection string to a database.
|
||||
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | `/v1.0-alpha1/configuration` | The Configuration API enables you to retrieve and subscribe to application configuration items for supported configuration stores. This enables an application to retrieve specific configuration information, for example, at start up or when configuration changes are made in the store.
|
||||
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and provide consistency guarantees.
|
||||
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-alpha1/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
|
||||
|
|
@ -13,9 +13,9 @@ You can contribute implementations and extend Dapr's component interfaces capabi
|
|||
- The [components-contrib repository](https://github.com/dapr/components-contrib)
|
||||
- [Pluggable components]({{<ref "components-concept.md#built-in-and-pluggable-components" >}}).
|
||||
|
||||
A building block can use any combination of components. For example, the [actors]({{< ref "actors-overview.md" >}}) and the [state management]({{< ref "state-management-overview.md" >}}) building blocks both use [state components](https://github.com/dapr/components-contrib/tree/master/state).
|
||||
|
||||
As another example, the [pub/sub]({{< ref "pubsub-overview.md" >}}) building block uses [pub/sub components](https://github.com/dapr/components-contrib/tree/master/pubsub).
|
||||
|
||||
You can get a list of current components available in the hosting environment using the `dapr components` CLI command.
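For example, assuming the Dapr CLI is installed, the command can be run against the local self-hosted environment or, with the `-k` flag, against a Kubernetes cluster:

```bash
# List components registered in the local self-hosted environment
dapr components

# List components in a Kubernetes cluster
dapr components -k
```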
|
||||
|
||||
|
|
@ -26,9 +26,9 @@ Each component has a specification (or spec) that it conforms to. Components are
|
|||
- A `components/local` folder within your solution, or
|
||||
- Globally in the `.dapr` folder created when invoking `dapr init`.
|
||||
|
||||
These YAML files adhere to the generic [Dapr component schema]({{< ref "component-schema.md" >}}), but each is specific to the component specification.
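For illustration, a minimal manifest for a Redis state store component might look like the following; the component name, host, and credentials are placeholders:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  # Placeholder connection settings; replace with your own values
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```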
|
||||
|
||||
It is important to understand that the component spec values, particularly the spec `metadata`, can change between components of the same component type, for example between different state stores, and that some design-time spec values can be overridden at runtime when making requests to a component's API. As a result, it is strongly recommended to review a [component's specs]({{< ref "components-reference" >}}), paying particular attention to the sample payloads for requests to set the metadata used to interact with the component.
|
||||
|
||||
The diagram below shows some examples of the components for each component type.
|
||||
<img src="/images/concepts-components.png" width=1200>
|
||||
|
|
@ -46,7 +46,7 @@ For example:
|
|||
- Your component may be specific to your company or pose IP concerns, so it cannot be included in the Dapr component repo.
|
||||
- You want to decouple your component updates from the Dapr release cycle.
|
||||
|
||||
For more information read [Pluggable components overview]({{< ref "pluggable-components-overview" >}})
|
||||
|
||||
## Available component types
|
||||
|
||||
|
|
@ -61,7 +61,7 @@ State store components are data stores (databases, files, memory) that store key
|
|||
|
||||
### Name resolution
|
||||
|
||||
Name resolution components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block to integrate with the hosting environment and provide service-to-service discovery. For example, the Kubernetes name resolution component integrates with the Kubernetes DNS service, self-hosted uses mDNS and clusters of VMs can use the Consul name resolution component.
|
||||
|
||||
- [List of name resolution components]({{< ref supported-name-resolution >}})
|
||||
- [Name resolution implementations](https://github.com/dapr/components-contrib/tree/master/nameresolution)
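For example, a name resolution component can be selected in the application's Dapr Configuration resource; the Consul settings below are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    # Illustrative: selects the Consul name resolution component
    component: "consul"
    configuration:
      selfRegister: true
```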
|
||||
|
|
@ -82,7 +82,7 @@ External resources can connect to Dapr in order to trigger a method on an applic
|
|||
|
||||
### Secret stores
|
||||
|
||||
A [secret]({{< ref "secrets-overview.md" >}}) is any piece of private information that you want to guard against unwanted access. Secrets stores are used to store secrets that can be retrieved and used in applications.
|
||||
|
||||
- [List of supported secret stores]({{< ref supported-secret-stores >}})
|
||||
- [Secret store implementations](https://github.com/dapr/components-contrib/tree/master/secretstores)
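For example, once a secret store component is configured, a service can retrieve a secret through the Dapr HTTP API; the store and secret names below are placeholders:

```bash
# Fetch the secret "db-connection" from the configured store "mysecretstore"
curl http://localhost:3500/v1.0/secrets/mysecretstore/db-connection
```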
|
||||
|
|
@ -101,9 +101,16 @@ Lock components are used as a distributed lock to provide mutually exclusive acc
|
|||
- [List of supported locks]({{< ref supported-locks >}})
|
||||
- [Lock implementations](https://github.com/dapr/components-contrib/tree/master/lock)
|
||||
|
||||
### Workflows
|
||||
|
||||
A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.
|
||||
|
||||
<!--- [List of supported workflows]()
|
||||
- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->
|
||||
|
||||
### Middleware
|
||||
|
||||
Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into the HTTP request processing pipeline. Middleware can perform additional actions on an HTTP request (such as authentication, encryption, and message transformation) before the request is routed to the user code, or the response is returned to the client. The middleware components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block.
|
||||
|
||||
- [List of supported middleware components]({{< ref supported-middleware >}})
|
||||
- [Middleware implementations](https://github.com/dapr/components-contrib/tree/master/middleware)
|
||||
|
|
|
|||
|
|
@ -23,7 +23,11 @@ The sidecar APIs are called from your application over local http or gRPC endpoi
|
|||
|
||||
## Self-hosted with `dapr run`
|
||||
|
||||
When Dapr is installed in [self-hosted mode]({{<ref self-hosted>}}), the `daprd` binary is downloaded and placed under the user home directory (`$HOME/.dapr/bin` for Linux/macOS or `%USERPROFILE%\.dapr\bin\` for Windows).
|
||||
|
||||
In self-hosted mode, running the Dapr CLI [`run` command]({{< ref dapr-run.md >}}) launches the `daprd` executable with the provided application executable. This is the recommended way of running the Dapr sidecar when working locally in scenarios such as development and testing.
|
||||
|
||||
You can find the various arguments that the CLI exposes to configure the sidecar in the [Dapr run command reference]({{<ref dapr-run>}}).
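For example, a typical invocation might look like the following, assuming a Node.js application that listens on port 3000; the app ID and command are illustrative:

```bash
# Launch the daprd sidecar alongside the application process
dapr run --app-id myapp --app-port 3000 --dapr-http-port 3500 -- node app.js
```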
|
||||
|
||||
## Kubernetes with `dapr-sidecar-injector`
|
||||
|
||||
|
|
@ -37,7 +41,9 @@ For a detailed list of all available arguments run `daprd --help` or see this [t
|
|||
|
||||
### Examples
|
||||
|
||||
1. Start a sidecar alongside an application by specifying its unique ID.
|
||||
|
||||
**Note:** `--app-id` is a required field, and cannot contain dots.
|
||||
|
||||
```bash
|
||||
daprd --app-id myapp
|
||||
|
|
|
|||
|
|
@ -27,8 +27,6 @@ Creating a new actor follows a local call like `http://localhost:3500/v1.0/actor
|
|||
|
||||
The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently the .NET, Java, Go, and Python SDKs have actor frameworks.
|
||||
|
||||
## Developer language SDKs and frameworks
|
||||
|
||||
### Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?
|
||||
|
||||
To make using Dapr more natural for different languages, it includes [language specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
|
||||
|
|
|
|||
|
|
@ -35,15 +35,17 @@ Each of these building block APIs is independent, meaning that you can use one,
|
|||
|
||||
| Building Block | Description |
|
||||
|----------------|-------------|
|
||||
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
|
||||
| [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure CosmosDB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
|
||||
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at-least-once message delivery guarantee, message TTL, consumer groups and other advance features.
|
||||
| [**Resource bindings**]({{< ref "bindings-overview.md" >}}) | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
|
||||
| [**Actors**]({{< ref "actors-overview.md" >}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.
|
||||
| [**Observability**]({{< ref "observability-concept.md" >}}) | Dapr emits metrics, logs, and traces to debug and monitor both Dapr and user applications. Dapr supports distributed tracing to easily diagnose and serve inter-service calls in production using the W3C Trace Context standard and Open Telemetry to send to different monitoring tools.
|
||||
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | The secrets management API integrates with public cloud and local secret stores to retrieve the secrets for use in application code.
|
||||
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | The configuration API enables you to retrieve and subscribe to application configuration items from configuration stores.
|
||||
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | The distributed lock API enables your application to acquire a lock for any resource that gives it exclusive access until either the lock is released by the application, or a lease timeout occurs.
|
||||
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
|
||||
|
||||
|
||||
## Sidecar architecture
|
||||
|
||||
|
|
|
|||
|
|
@ -37,3 +37,10 @@ Dapr provides a way to determine its health using an [HTTP `/healthz` endpoint](
|
|||
- Determined for readiness and liveness
|
||||
|
||||
Read more about how to apply [Dapr health checks]({{< ref sidecar-health >}}) to your application.
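For example, assuming the sidecar uses the default HTTP port 3500, you can probe the health endpoint directly; a healthy sidecar responds with `204 No Content`:

```bash
# Query the Dapr sidecar's health endpoint
curl -i http://localhost:3500/v1.0/healthz
```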
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Learn more about resiliency]({{< ref resiliency-overview.md >}})
|
||||
- Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
||||
|
|
@ -1,11 +1,11 @@
|
|||
---
|
||||
type: docs
|
||||
title: "API building blocks"
|
||||
linkTitle: "API building blocks"
|
||||
title: "Building blocks"
|
||||
linkTitle: "Building blocks"
|
||||
weight: 10
|
||||
description: "Dapr capabilities that solve common development challenges for distributed applications"
|
||||
---
|
||||
|
||||
Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
|
||||
|
||||
<img src="/images/buildingblocks-overview.png" alt="Diagram showing the different Dapr API building blocks" width=1000>
|
||||
|
|
@ -0,0 +1,431 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Publish and subscribe to bulk messages"
|
||||
linkTitle: "Publish and subscribe to bulk messages"
|
||||
weight: 7100
|
||||
description: "Learn how to use the bulk publish and subscribe APIs in Dapr."
|
||||
---
|
||||
|
||||
{{% alert title="alpha" color="warning" %}}
|
||||
The bulk publish and subscribe APIs are in **alpha** stage.
|
||||
{{% /alert %}}
|
||||
|
||||
With the bulk publish and subscribe APIs, you can publish and subscribe to multiple messages in a single request. When writing applications that need to send or receive a large number of messages, using bulk operations allows achieving high throughput by reducing the overall number of requests between the Dapr sidecar, the application, and the underlying pub/sub broker.
|
||||
|
||||
## Publishing messages in bulk
|
||||
|
||||
### Restrictions when publishing messages in bulk
|
||||
|
||||
The bulk publish API allows you to publish multiple messages to a topic in a single request. It is *non-transactional*, i.e., from a single bulk request, some messages can succeed and some can fail. If any of the messages fail to publish, the bulk publish operation returns a list of failed messages.
|
||||
|
||||
The bulk publish operation also does not guarantee any ordering of messages.
|
||||
|
||||
### Example
|
||||
|
||||
{{< tabs Java Javascript Dotnet Python Go "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
import io.dapr.client.DaprClientBuilder;
|
||||
import io.dapr.client.DaprPreviewClient;
|
||||
import io.dapr.client.domain.BulkPublishResponse;
|
||||
import io.dapr.client.domain.BulkPublishResponseFailedEntry;
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
|
||||
class BulkPublisher {
|
||||
private static final String PUBSUB_NAME = "my-pubsub-name";
|
||||
private static final String TOPIC_NAME = "topic-a";
|
||||
|
||||
public void publishMessages() {
|
||||
try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
|
||||
// Create a list of messages to publish
|
||||
List<String> messages = new ArrayList<>();
|
||||
for (int i = 0; i < 10; i++) {
|
||||
String message = String.format("This is message #%d", i);
|
||||
messages.add(message);
|
||||
}
|
||||
|
||||
// Publish list of messages using the bulk publish API
|
||||
BulkPublishResponse<String> res = client.publishEvents(PUBSUB_NAME, TOPIC_NAME, "text/plain", messages).block();
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```typescript
|
||||
|
||||
import { DaprClient } from "@dapr/dapr";
|
||||
|
||||
const pubSubName = "my-pubsub-name";
|
||||
const topic = "topic-a";
|
||||
|
||||
async function start() {
|
||||
const client = new DaprClient();
|
||||
|
||||
// Publish multiple messages to a topic.
|
||||
await client.pubsub.publishBulk(pubSubName, topic, ["message 1", "message 2", "message 3"]);
|
||||
|
||||
// Publish multiple messages to a topic with explicit bulk publish messages.
|
||||
const bulkPublishMessages = [
|
||||
{
|
||||
entryID: "entry-1",
|
||||
contentType: "application/json",
|
||||
event: { hello: "foo message 1" },
|
||||
},
|
||||
{
|
||||
entryID: "entry-2",
|
||||
contentType: "application/cloudevents+json",
|
||||
event: {
|
||||
specversion: "1.0",
|
||||
source: "/some/source",
|
||||
type: "example",
|
||||
id: "1234",
|
||||
data: "foo message 2",
|
||||
datacontenttype: "text/plain"
|
||||
},
|
||||
},
|
||||
{
|
||||
entryID: "entry-3",
|
||||
contentType: "text/plain",
|
||||
event: "foo message 3",
|
||||
},
|
||||
];
|
||||
await client.pubsub.publishBulk(pubSubName, topic, bulkPublishMessages);
|
||||
}
|
||||
|
||||
start().catch((e) => {
|
||||
console.error(e);
|
||||
process.exit(1);
|
||||
});
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
using System;
|
||||
using System.Collections.Generic;
|
||||
using Dapr.Client;
|
||||
|
||||
const string PubsubName = "my-pubsub-name";
|
||||
const string TopicName = "topic-a";
|
||||
IReadOnlyList<object> BulkPublishData = new List<object>() {
|
||||
new { Id = "17", Amount = 10m },
|
||||
new { Id = "18", Amount = 20m },
|
||||
new { Id = "19", Amount = 30m }
|
||||
};
|
||||
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
|
||||
var res = await client.BulkPublishEventAsync(PubsubName, TopicName, BulkPublishData);
|
||||
if (res == null) {
|
||||
throw new Exception("null response from dapr");
|
||||
}
|
||||
if (res.FailedEntries.Count > 0)
|
||||
{
|
||||
Console.WriteLine("Some events failed to be published!");
|
||||
foreach (var failedEntry in res.FailedEntries)
|
||||
{
|
||||
Console.WriteLine("EntryId: " + failedEntry.Entry.EntryId + " Error message: " +
|
||||
failedEntry.ErrorMessage);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
Console.WriteLine("Published all events!");
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
import requests
|
||||
import json
|
||||
|
||||
base_url = "http://localhost:3500/v1.0-alpha1/publish/bulk/{}/{}"
|
||||
pubsub_name = "my-pubsub-name"
|
||||
topic_name = "topic-a"
|
||||
payload = [
|
||||
{
|
||||
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
|
||||
"event": "first text message",
|
||||
"contentType": "text/plain"
|
||||
},
|
||||
{
|
||||
"entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
|
||||
"event": {
|
||||
"message": "second JSON message"
|
||||
},
|
||||
"contentType": "application/json"
|
||||
}
|
||||
]
|
||||
|
||||
response = requests.post(base_url.format(pubsub_name, topic_name), json=payload)
|
||||
print(response.status_code)
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
"net/http"
|
||||
"io/ioutil"
|
||||
)
|
||||
|
||||
const (
|
||||
pubsubName = "my-pubsub-name"
|
||||
topicName = "topic-a"
|
||||
baseUrl = "http://localhost:3500/v1.0-alpha1/publish/bulk/%s/%s"
|
||||
)
|
||||
|
||||
func main() {
|
||||
url := fmt.Sprintf(baseUrl, pubsubName, topicName)
|
||||
method := "POST"
|
||||
payload := strings.NewReader(`[
|
||||
{
|
||||
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
|
||||
"event": "first text message",
|
||||
"contentType": "text/plain"
|
||||
},
|
||||
{
|
||||
"entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
|
||||
"event": {
|
||||
"message": "second JSON message"
|
||||
},
|
||||
"contentType": "application/json"
|
||||
}
|
||||
]`)
|
||||
|
||||
client := &http.Client {}
|
||||
req, _ := http.NewRequest(method, url, payload)
|
||||
|
||||
req.Header.Add("Content-Type", "application/json")
|
||||
res, err := client.Do(req)
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:3500/v1.0-alpha1/publish/bulk/my-pubsub-name/topic-a \
|
||||
-H 'Content-Type: application/json' \
|
||||
-d '[
|
||||
{
|
||||
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
|
||||
"event": "first text message",
|
||||
"contentType": "text/plain"
|
||||
},
|
||||
{
|
||||
"entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
|
||||
"event": {
|
||||
"message": "second JSON message"
|
||||
},
|
||||
"contentType": "application/json"
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Uri 'http://localhost:3500/v1.0-alpha1/publish/bulk/my-pubsub-name/topic-a' `
|
||||
-Body '[
|
||||
{
|
||||
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
|
||||
"event": "first text message",
|
||||
"contentType": "text/plain"
|
||||
},
|
||||
{
|
||||
"entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
|
||||
"event": {
|
||||
"message": "second JSON message"
|
||||
},
|
||||
"contentType": "application/json"
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Subscribing messages in bulk
|
||||
|
||||
The bulk subscribe API allows you to subscribe to multiple messages from a topic in a single request.
|
||||
As described in [How to: Publish & Subscribe to topics]({{< ref howto-publish-subscribe.md >}}), there are two ways to subscribe to topics:
|
||||
|
||||
- **Declaratively** - subscriptions are defined in an external file.
|
||||
- **Programmatically** - subscriptions are defined in code.
|
||||
|
||||
To bulk subscribe to topics, use the `bulkSubscribe` spec attribute, as in the following example:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v2alpha1
|
||||
kind: Subscription
|
||||
metadata:
|
||||
name: order-pub-sub
|
||||
spec:
|
||||
topic: orders
|
||||
routes:
|
||||
default: /checkout
|
||||
pubsubname: order-pub-sub
|
||||
bulkSubscribe:
|
||||
enabled: true
|
||||
maxMessagesCount: 100
|
||||
maxAwaitDurationMs: 40
|
||||
scopes:
|
||||
- orderprocessing
|
||||
- checkout
|
||||
```
|
||||
|
||||
In the example above, `bulkSubscribe` is _optional_. If you use `bulkSubscribe`, then:
|
||||
- `enabled` is mandatory and enables or disables bulk subscriptions on this topic
|
||||
- You can optionally configure the maximum number of messages (`maxMessagesCount`) delivered in a bulk message.
  For components that do not support bulk subscribe, the default value of `maxMessagesCount` is 100; this applies to the default bulk events exchanged between the app and Dapr. See [How components handle publishing and subscribing to bulk messages]({{< ref pubsub-bulk >}}).
  If a component supports bulk subscribe, the default value for this parameter can be found in that component's documentation. See [Supported components]({{< ref pubsub-bulk >}}).
- You can optionally provide the maximum duration to wait (`maxAwaitDurationMs`) before a bulk message is sent to the app.
  For components that do not support bulk subscribe, the default value of `maxAwaitDurationMs` is 1000 ms; this applies to the default bulk events exchanged between the app and Dapr. See [How components handle publishing and subscribing to bulk messages]({{< ref pubsub-bulk >}}).
  If a component supports bulk subscribe, the default value for this parameter can be found in that component's documentation. See [Supported components]({{< ref pubsub-bulk >}}).
|
||||
|
||||
The application receives an `EntryId` associated with each entry (individual message) in the bulk message. This `EntryId` must be used by the app to communicate the status of that particular entry. If the app fails to notify on an `EntryId` status, it's considered a `RETRY`.
|
||||
|
||||
The app must send back a JSON-encoded payload body with the processing status of each entry:
|
||||
|
||||
```json
|
||||
{
|
||||
"statuses": {
|
||||
"entryId": "<entryId>",
|
||||
"status": "<status>"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Possible status values:
|
||||
|
||||
Status | Description
|
||||
--------- | -----------
|
||||
`SUCCESS` | Message is processed successfully
|
||||
`RETRY` | Message to be retried by Dapr
|
||||
`DROP` | Warning is logged and message is dropped
|
||||
|
||||
Refer to [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >}}) for further details on the expected response.
|
||||
|
||||
### Example
|
||||
|
||||
Refer to the following code samples for how to use bulk subscribe:
|
||||
|
||||
{{< tabs Java Javascript "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
import io.dapr.Topic;
|
||||
import io.dapr.client.domain.BulkSubscribeAppResponse;
|
||||
import io.dapr.client.domain.BulkSubscribeAppResponseEntry;
|
||||
import io.dapr.client.domain.BulkSubscribeAppResponseStatus;
|
||||
import io.dapr.client.domain.BulkSubscribeMessage;
|
||||
import io.dapr.client.domain.BulkSubscribeMessageEntry;
|
||||
import io.dapr.client.domain.CloudEvent;
|
||||
import io.dapr.springboot.annotations.BulkSubscribe;
|
||||
import org.springframework.web.bind.annotation.PostMapping;
|
||||
import org.springframework.web.bind.annotation.RequestBody;
|
||||
import reactor.core.publisher.Mono;
|
||||
|
||||
class BulkSubscriber {
|
||||
@BulkSubscribe()
|
||||
// @BulkSubscribe(maxMessagesCount = 100, maxAwaitDurationMs = 40)
|
||||
@Topic(name = "topicbulk", pubsubName = "orderPubSub")
|
||||
@PostMapping(path = "/topicbulk")
|
||||
public Mono<BulkSubscribeAppResponse> handleBulkMessage(
|
||||
@RequestBody(required = false) BulkSubscribeMessage<CloudEvent<String>> bulkMessage) {
|
||||
return Mono.fromCallable(() -> {
|
||||
List<BulkSubscribeAppResponseEntry> entries = new ArrayList<BulkSubscribeAppResponseEntry>();
|
||||
for (BulkSubscribeMessageEntry<?> entry : bulkMessage.getEntries()) {
|
||||
try {
|
||||
CloudEvent<?> cloudEvent = (CloudEvent<?>) entry.getEvent();
|
||||
System.out.printf("Bulk Subscriber got: %s\n", cloudEvent.getData());
|
||||
entries.add(new BulkSubscribeAppResponseEntry(entry.getEntryId(), BulkSubscribeAppResponseStatus.SUCCESS));
|
||||
} catch (Exception e) {
|
||||
e.printStackTrace();
|
||||
entries.add(new BulkSubscribeAppResponseEntry(entry.getEntryId(), BulkSubscribeAppResponseStatus.RETRY));
|
||||
}
|
||||
}
|
||||
return new BulkSubscribeAppResponse(entries);
|
||||
});
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```typescript
|
||||
|
||||
import { DaprServer } from "@dapr/dapr";
|
||||
|
||||
const pubSubName = "orderPubSub";
|
||||
const topic = "topicbulk";
|
||||
|
||||
const DAPR_HOST = process.env.DAPR_HOST || "127.0.0.1";
|
||||
const DAPR_HTTP_PORT = process.env.DAPR_HTTP_PORT || "3502";
|
||||
const SERVER_HOST = process.env.SERVER_HOST || "127.0.0.1";
|
||||
const SERVER_PORT = process.env.APP_PORT || 5001;
|
||||
|
||||
async function start() {
|
||||
const server = new DaprServer(SERVER_HOST, SERVER_PORT, DAPR_HOST, DAPR_HTTP_PORT);
|
||||
|
||||
  // Subscribe to messages in bulk with the default configuration.
  await server.pubsub.bulkSubscribeWithDefaultConfig(pubSubName, topic, (data) => console.log("Subscriber received: " + JSON.stringify(data)));

  // Subscribe to messages in bulk with a specific maxMessagesCount and maxAwaitDurationMs.
  await server.pubsub.bulkSubscribeWithConfig(pubSubName, topic, (data) => console.log("Subscriber received: " + JSON.stringify(data)), 100, 40);
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
|
||||
{{< /tabs >}}
|
||||
## How components handle publishing and subscribing to bulk messages
|
||||
|
||||
Some pub/sub brokers support sending and receiving multiple messages in a single request. When a component supports bulk publish or subscribe operations, Dapr runtime uses them to further optimize the communication between the Dapr sidecar and the underlying pub/sub broker.
|
||||
|
||||
For components that do not have bulk publish or subscribe support, Dapr runtime uses the regular publish and subscribe APIs to send and receive messages one by one. This is still more efficient than directly using the regular publish or subscribe APIs, because applications can still send/receive multiple messages in a single request to/from Dapr.
|
||||
|
||||
## Watch the demo
|
||||
|
||||
Watch [this video for a demo of bulk pub/sub](https://youtu.be/BxiKpEmchgQ?t=1170):
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/BxiKpEmchgQ?start=1170" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
|
||||
## Supported components
|
||||
|
||||
Refer to the [component reference]({{< ref supported-pubsub >}}) to see which components support bulk publish and subscribe operations.
|
||||
|
||||
## Related links
|
||||
|
||||
- List of [supported pub/sub components]({{< ref supported-pubsub >}})
|
||||
- Read the [API reference]({{< ref pubsub_api.md >}})
|
||||
|
|
@ -119,6 +119,10 @@ By default, all topic messages associated with an instance of a pub/sub componen
|
|||
|
||||
Dapr can set a timeout on a per-message basis, meaning that if the message is not read from the pub/sub component within the timeout, the message is discarded. This prevents a build-up of unread messages. If a message has been in the queue longer than the configured TTL, it is marked as dead. For more information, read [pub/sub message TTL]({{< ref pubsub-message-ttl.md >}}).
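For example, a publisher can attach a TTL to a single message with the `metadata.ttlInSeconds` query parameter; the pub/sub name, topic, and payload below are illustrative:

```bash
# Publish a message that expires after 120 seconds if not consumed
curl -X POST "http://localhost:3500/v1.0/publish/order-pub-sub/orders?metadata.ttlInSeconds=120" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "100"}'
```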
|
||||
|
||||
### Publish and subscribe to bulk messages
|
||||
|
||||
Dapr supports sending and receiving multiple messages in a single request. When writing applications that need to send or receive a large number of messages, using bulk operations allows achieving high throughput by reducing the overall number of requests. For more information, read [pub/sub bulk messages]({{< ref pubsub-bulk.md >}}).
|
||||
|
||||
## Try out pub/sub
|
||||
|
||||
### Quickstarts and tutorials
|
||||
|
|
|
|||
|
|
@ -191,25 +191,24 @@ using System.Threading;
|
|||
//code
|
||||
namespace EventService
|
||||
{
|
||||
|
||||
class Program
|
||||
{
|
||||
static async Task Main(string[] args)
|
||||
{
|
||||
while(true) {
|
||||
System.Threading.Thread.Sleep(5000);
|
||||
Random random = new Random();
|
||||
int orderId = random.Next(1,1000);
|
||||
using var client = new DaprClientBuilder().Build();
|
||||
|
||||
//Using Dapr SDK to invoke a method
|
||||
var result = client.CreateInvokeMethodRequest(HttpMethod.Get, "checkout", "checkout/" + orderId);
|
||||
await client.InvokeMethodAsync(result);
|
||||
Console.WriteLine("Order requested: " + orderId);
|
||||
Console.WriteLine("Result: " + result);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -7,7 +7,8 @@ weight: 3000
|
|||
---
|
||||
|
||||
This article describes how to use Dapr to connect services using gRPC.
|
||||
|
||||
By using Dapr's gRPC proxying capability, you can use your existing proto-based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr service invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
|
||||
|
||||
1. Mutual authentication
|
||||
2. Tracing
|
||||
|
|
@ -16,11 +17,11 @@ By using Dapr's gRPC proxying capability, you can use your existing proto based
|
|||
5. Network level resiliency
|
||||
6. API token based authentication
|
||||
|
||||
Dapr allows proxying all kinds of gRPC invocations, including unary and [stream-based](#proxying-of-streaming-rpcs) ones.
|
||||
|
||||
## Step 1: Run a gRPC server
|
||||
|
||||
The following example is taken from the ["hello world" grpc-go example](https://github.com/grpc/grpc-go/tree/master/examples/helloworld). Although this example is in Go, the same concepts apply to all programming languages supported by gRPC.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
|
@ -175,7 +176,7 @@ response = service.sayHello({ 'name': 'Darth Bane' }, metadata)
|
|||
{{% codetab %}}
|
||||
```c++
|
||||
grpc::ClientContext context;
|
||||
context.AddMetadata("dapr-app-id", "Darth Sidious");
|
||||
context.AddMetadata("dapr-app-id", "server");
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
|
|
@ -191,7 +192,7 @@ dapr run --app-id client --dapr-grpc-port 50007 -- go run main.go
|
|||
|
||||
If you're running Dapr locally with Zipkin installed, open the browser at `http://localhost:9411` and view the traces between the client and server.
|
||||
|
||||
### Deploying to Kubernetes
|
||||
|
||||
Set the following Dapr annotations on your deployment:
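For example, the pod template annotations might look like the following, assuming the gRPC server from this guide runs with app ID `server` and listens on port 50051:

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "server"
  dapr.io/app-protocol: "grpc"
  # Illustrative: the port your gRPC server listens on
  dapr.io/app-port: "50051"
```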
|
||||
|
||||
|
|
@ -241,15 +242,88 @@ The example above showed you how to directly invoke a different service running
|
|||
|
||||
For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article.
|
||||
|
||||
## Proxying of streaming RPCs
|
||||
|
||||
When using Dapr to proxy streaming RPC calls using gRPC, you must set an additional metadata option `dapr-stream` with value `true`.
|
||||
|
||||
For example:
|
||||
|
||||
{{< tabs Go Java Dotnet Python JavaScript Ruby "C++">}}
|
||||
|
||||
{{% codetab %}}
|
||||
```go
|
||||
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
|
||||
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-stream", "true")
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```java
|
||||
Metadata headers = new Metadata();
|
||||
Metadata.Key<String> jwtKey = Metadata.Key.of("dapr-app-id", "server");
|
||||
Metadata.Key<String> jwtKey = Metadata.Key.of("dapr-stream", "true");
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
var metadata = new Metadata
|
||||
{
|
||||
{ "dapr-app-id", "server" },
|
||||
{ "dapr-stream", "true" }
|
||||
};
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
metadata = (('dapr-app-id', 'server'), ('dapr-stream', 'true'),)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```javascript
|
||||
const metadata = new grpc.Metadata();
|
||||
metadata.add('dapr-app-id', 'server');
|
||||
metadata.add('dapr-stream', 'true');
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```ruby
|
||||
metadata = { 'dapr-app-id' : 'server' }
|
||||
metadata = { 'dapr-stream' : 'true' }
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```c++
|
||||
grpc::ClientContext context;
|
||||
context.AddMetadata("dapr-app-id", "server");
|
||||
context.AddMetadata("dapr-stream", "true");
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
### Streaming gRPCs and Resiliency
|
||||
|
||||
When proxying streaming gRPCs, due to their long-lived nature, [resiliency]({{< ref "resiliency-overview.md" >}}) policies are applied on the "initial handshake" only. As a consequence:
|
||||
|
||||
- If the stream is interrupted after the initial handshake, it will not be automatically re-established by Dapr. Your application will be notified that the stream has ended, and will need to recreate it.
|
||||
- Retry policies only impact the initial connection "handshake". If your resiliency policy includes retries, Dapr will detect failures in establishing the initial connection to the target app and will retry until it succeeds (or until the number of retries defined in the policy is exhausted).
|
||||
- Likewise, timeouts defined in resiliency policies only apply to the initial "handshake". After the connection has been established, timeouts do not impact the stream anymore.
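For reference, a sketch of a resiliency policy that would govern this initial handshake; the policy and target names below are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    timeouts:
      # Applies only to establishing the stream, not to the stream itself
      general: 5s
    retries:
      streamHandshakeRetry:
        policy: constant
        duration: 5s
        maxRetries: 3
  targets:
    apps:
      # Illustrative target app ID
      server:
        timeout: general
        retry: streamHandshakeRetry
```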
|
||||
|
||||
## Related Links
|
||||
|
||||
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
|
||||
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
|
||||
* [gRPC proxying community call video](https://youtu.be/B_vkXqptpXY?t=70)
|
||||
|
||||
## Community call demo
|
||||
|
||||
Watch this [video](https://youtu.be/B_vkXqptpXY?t=69) on how to use Dapr's gRPC proxying capability:
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/B_vkXqptpXY?start=69" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
</div>
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/B_vkXqptpXY?start=69" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
</div>
|
||||
|
|
|
|||
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Workflow"
|
||||
linkTitle: "Workflow"
|
||||
weight: 100
|
||||
description: "Orchestrate logic across various microservices"
|
||||
---
|
||||
|
|
@ -0,0 +1,173 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How to: Author a workflow"
|
||||
linkTitle: "How to: Author workflows"
|
||||
weight: 5000
|
||||
description: "Learn how to develop and author workflows"
|
||||
---
|
||||
|
||||
This article provides a high-level overview of how to author workflows that are executed by the Dapr Workflow engine.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
If you haven't already, [try out the workflow quickstart](todo) for a quick walk-through on how to use workflows.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
## Author workflows as code
|
||||
|
||||
Dapr Workflow logic is implemented using general purpose programming languages, allowing you to:
|
||||
|
||||
- Use your preferred programming language (no need to learn a new DSL or YAML schema)
|
||||
- Have access to the language’s standard libraries
|
||||
- Build your own libraries and abstractions
|
||||
- Use debuggers and examine local variables
|
||||
- Write unit tests for your workflows, just like any other part of your application logic
|
||||
|
||||
The Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar simply drives the execution of the workflows, leaving all other details to the application layer.
|
||||
|
||||
## Write the workflow activities
|
||||
|
||||
Define the workflow activities you'd like your workflow to perform. Activities are a class definition and can take inputs and outputs. Activities also participate in dependency injection, like binding to a Dapr client.
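For illustration, a minimal sketch of one such activity class using the .NET SDK is shown below; the `Notification` record and the logging behavior are assumptions for this example rather than the full sample code:

```csharp
using System;
using System.Threading.Tasks;
using Dapr.Workflow;

// Illustrative payload type; the real sample defines its own notification record.
record Notification(string Message);

// Activities derive from WorkflowActivity<TInput, TOutput> and implement RunAsync.
class NotifyActivity : WorkflowActivity<Notification, object>
{
    public override Task<object> RunAsync(WorkflowActivityContext context, Notification notification)
    {
        // In a real application this might send an email or a push notification;
        // here it simply writes the message to the console.
        Console.WriteLine(notification.Message);
        return Task.FromResult<object>(null);
    }
}
```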
|
||||
|
||||
{{< tabs ".NET" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Continuing the ASP.NET order processing example, the `OrderProcessingWorkflow` class is derived from a base class called `Workflow` with input and output parameter types.
|
||||
|
||||
It also includes a `RunAsync` method that will do the heavy lifting of the workflow and call the workflow activities. The activities called in the example are:
|
||||
- `NotifyActivity`: Receive notification of a new order
|
||||
- `ReserveInventoryActivity`: Check for sufficient inventory to meet the new order
|
||||
- `ProcessPaymentActivity`: Process payment for the order. Includes `NotifyActivity` to send notification of successful order
|
||||
|
||||
```csharp
|
||||
class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
|
||||
{
|
||||
public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
|
||||
{
|
||||
//...
|
||||
|
||||
await context.CallActivityAsync(
|
||||
nameof(NotifyActivity),
|
||||
new Notification($"Received order {orderId} for {order.Name} at {order.TotalCost:c}"));
|
||||
|
||||
//...
|
||||
|
||||
InventoryResult result = await context.CallActivityAsync<InventoryResult>(
|
||||
nameof(ReserveInventoryActivity),
|
||||
new InventoryRequest(RequestId: orderId, order.Name, order.Quantity));
|
||||
//...
|
||||
await context.CallActivityAsync(
|
||||
nameof(ProcessPaymentActivity),
|
||||
new PaymentRequest(RequestId: orderId, order.TotalCost, "USD"));
|
||||
|
||||
await context.CallActivityAsync(
|
||||
nameof(NotifyActivity),
|
||||
new Notification($"Order {orderId} processed successfully!"));
|
||||
|
||||
// End the workflow with a success result
|
||||
return new OrderResult(Processed: true);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Write the workflow
|
||||
|
||||
Compose the workflow activities into a workflow.
|
||||
|
||||
{{< tabs ".NET" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
[In the following example](https://github.com/dapr/dotnet-sdk/blob/master/examples/Workflow/WorkflowWebApp/Program.cs), for a basic ASP.NET order processing application using the .NET SDK, your project code would include:
|
||||
|
||||
- A NuGet package called `Dapr.Workflow` to receive the .NET SDK capabilities
|
||||
- A builder with an extension method called `AddDaprWorkflow`
|
||||
- This will allow you to register workflows and workflow activities (tasks that workflows can schedule)
|
||||
- HTTP API calls
|
||||
- One for submitting a new order
|
||||
- One for checking the status of an existing order
|
||||
|
||||
```csharp
|
||||
using Dapr.Workflow;
|
||||
//...
|
||||
|
||||
// Dapr Workflows are registered as part of the service configuration
|
||||
builder.Services.AddDaprWorkflow(options =>
|
||||
{
|
||||
// Note that it's also possible to register a lambda function as the workflow
|
||||
// or activity implementation instead of a class.
|
||||
options.RegisterWorkflow<OrderProcessingWorkflow>();
|
||||
|
||||
// These are the activities that get invoked by the workflow(s).
|
||||
options.RegisterActivity<NotifyActivity>();
|
||||
options.RegisterActivity<ReserveInventoryActivity>();
|
||||
options.RegisterActivity<ProcessPaymentActivity>();
|
||||
});
|
||||
|
||||
WebApplication app = builder.Build();
|
||||
|
||||
// POST starts new order workflow instance
|
||||
app.MapPost("/orders", async (WorkflowEngineClient client, [FromBody] OrderPayload orderInfo) =>
|
||||
{
|
||||
if (orderInfo?.Name == null)
|
||||
{
|
||||
return Results.BadRequest(new
|
||||
{
|
||||
message = "Order data was missing from the request",
|
||||
example = new OrderPayload("Paperclips", 99.95),
|
||||
});
|
||||
}
|
||||
|
||||
//...
|
||||
});
|
||||
|
||||
// GET fetches state for order workflow to report status
|
||||
app.MapGet("/orders/{orderId}", async (string orderId, WorkflowEngineClient client) =>
|
||||
{
|
||||
WorkflowState state = await client.GetWorkflowStateAsync(orderId, true);
|
||||
if (!state.Exists)
|
||||
{
|
||||
return Results.NotFound($"No order with ID = '{orderId}' was found.");
|
||||
}
|
||||
|
||||
var httpResponsePayload = new
|
||||
{
|
||||
details = state.ReadInputAs<OrderPayload>(),
|
||||
status = state.RuntimeStatus.ToString(),
|
||||
result = state.ReadOutputAs<OrderResult>(),
|
||||
};
|
||||
|
||||
//...
|
||||
}).WithName("GetOrderInfoEndpoint");
|
||||
|
||||
app.Run();
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
|
||||
{{% alert title="Important" color="warning" %}}
|
||||
Because of how replay-based workflows execute, you'll write most logic that does things like I/O and interacting with systems **inside activities**, while the **workflow method** is just for orchestrating those activities.
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
## Next steps
|
||||
|
||||
Now that you've authored a workflow, learn how to manage it.
|
||||
|
||||
{{< button text="Manage workflows >>" page="howto-manage-workflow.md" >}}
|
||||
|
||||
## Related links
|
||||
- [Learn more about the Workflow API]({{< ref workflow-overview.md >}})
|
||||
- [Workflow API reference]({{< ref workflow_api.md >}})
|
||||
- Learn more about [how to manage workflows with the .NET SDK](todo) and try out [the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
---
type: docs
title: "How to: Manage workflows"
linkTitle: "How to: Manage workflows"
weight: 6000
description: Manage and expose workflows
---

Now that you've [set up the workflow and its activities in your application]({{< ref howto-author-workflow.md >}}), you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}).

{{< tabs ".NET SDK" HTTP >}}

<!--NET-->
{{% codetab %}}

Manage your workflow within your code. In the `OrderProcessingWorkflow` example from the [Author a workflow]({{< ref "howto-author-workflow.md#write-the-workflow" >}}) guide, the workflow is registered in the code. You can then start, terminate, and get information about the workflow:

```csharp
string orderId = "exampleOrderId";
string workflowComponent = "dapr";
string workflowName = "OrderProcessingWorkflow";
OrderPayload input = new OrderPayload("Paperclips", 99.95);
Dictionary<string, string> workflowOptions = new Dictionary<string, string>(); // This is an optional parameter
CancellationToken cts = CancellationToken.None;

// Start the workflow. This returns a "WorkflowReference" which contains the instance ID for the particular workflow instance.
WorkflowReference startResponse = await daprClient.StartWorkflowAsync(orderId, workflowComponent, workflowName, input, workflowOptions, cts);

// Get information on the workflow. This response contains information such as the status of the workflow, when it started, and more!
GetWorkflowResponse getResponse = await daprClient.GetWorkflowAsync(orderId, workflowComponent, workflowName);

// Terminate the workflow
await daprClient.TerminateWorkflowAsync(orderId, workflowComponent);
```

{{% /codetab %}}

<!--HTTP-->
{{% codetab %}}

Manage your workflow using HTTP calls. The example below plugs in the properties from the [Author a workflow example]({{< ref "howto-author-workflow.md#write-the-workflow" >}}) with a random instance ID number.

### Start workflow

To start your workflow, run:

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678/start
```

### Terminate workflow

To terminate your workflow, run:

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate
```

### Get information about a workflow

To fetch workflow outputs and inputs, run:

```bash
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678
```

Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).

{{% /codetab %}}

{{< /tabs >}}

## Next steps

- Learn more about [how to manage workflows with the .NET SDK](todo) and try out [the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
- [Workflow API reference]({{< ref workflow_api.md >}})
---
type: docs
title: "Workflow architecture"
linkTitle: "Workflow architecture"
weight: 4000
description: "The Dapr Workflow engine architecture"
---

[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. This article describes:

- The architecture of the Dapr Workflow engine
- How the workflow engine interacts with application code
- How the workflow engine fits into the overall Dapr architecture

For more information on how to author Dapr Workflows in your application, see [How to: Author a workflow]({{< ref "howto-author-workflow.md" >}}).

The Dapr Workflow engine is internally powered by Dapr's actor runtime. The following diagram illustrates the Dapr Workflow architecture in Kubernetes mode:

<img src="/images/workflow-overview/workflows-architecture-k8s.png" width=800 alt="Diagram showing how the workflow architecture works in Kubernetes mode">

To use the Dapr Workflow building block, you write workflow code in your application using the Dapr Workflow SDK, which internally connects to the sidecar using a gRPC stream. This registers the workflow and any workflow activities, or tasks that workflows can schedule.

The engine is embedded directly into the sidecar and implemented using the [`durabletask-go`](https://github.com/microsoft/durabletask-go) framework library. This framework allows you to swap out different storage providers, including a storage provider created for Dapr that leverages internal actors behind the scenes. Since Dapr Workflows use actors, you can store workflow state in state stores.

## Sidecar interactions

When a workflow application starts up, it uses a workflow authoring SDK to send a gRPC request to the Dapr sidecar and get back a stream of workflow work items, following the [server streaming RPC pattern](https://grpc.io/docs/what-is-grpc/core-concepts/#server-streaming-rpc). These work items can be anything from "start a new X workflow" (where X is the type of a workflow) to "schedule activity Y with input Z to run on behalf of workflow X".

The workflow app executes the appropriate workflow code and then sends a gRPC request back to the sidecar with the execution results.

<img src="/images/workflow-overview/workflow-engine-protocol.png" alt="Dapr Workflow Engine Protocol" />

All interactions happen over a single gRPC channel and are initiated by the application, which means the application doesn't need to open any inbound ports. The details of these interactions are internally handled by the language-specific Dapr Workflow authoring SDK.

### Differences between workflow and actor sidecar interactions

If you're familiar with Dapr actors, you may notice a few differences in how sidecar interactions work for workflows compared to actors.

| Actors | Workflows |
| ------ | --------- |
| Actors can interact with the sidecar using either HTTP or gRPC. | Workflows only use gRPC. Due to the workflow gRPC protocol's complexity, an SDK is _required_ when implementing workflows. |
| Actor operations are pushed to application code from the sidecar. This requires the application to listen on a particular _app port_. | For workflows, operations are _pulled_ from the sidecar by the application using a streaming protocol. The application doesn't need to listen on any ports to run workflows. |
| Actors explicitly register themselves with the sidecar. | Workflows do not register themselves with the sidecar. The embedded engine doesn't keep track of workflow types. This responsibility is instead delegated to the workflow application and its SDK. |

## Workflow distributed tracing

The `durabletask-go` core used by the workflow engine writes distributed traces using OpenTelemetry SDKs. These traces are captured automatically by the Dapr sidecar and exported to the configured OpenTelemetry provider, such as Zipkin.

Each workflow instance managed by the engine is represented as one or more spans. There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers. Workflow activity code also has access to the trace context, allowing distributed trace context to flow to external services that are invoked by the workflow.

## Internal workflow actors

There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine:

- `dapr.internal.wfengine.workflow`
- `dapr.internal.wfengine.activity`

The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario:

<img src="/images/workflow-overview/workflow-execution.png" alt="Diagram demonstrating internally registered actors across a cluster" />

Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.

There are two types of actors registered by the Dapr sidecar for workflow: the _workflow_ actor and the _activity_ actor. The next sections go into more detail on each.

### Workflow actors

A new instance of the `dapr.internal.wfengine.workflow` actor is activated for every workflow instance that gets created. The ID of the _workflow_ actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.

Each workflow actor saves its state using the following keys in the configured state store:

| Key | Description |
| --- | ----------- |
| `inbox-NNNNNN` | A workflow's inbox is effectively a FIFO queue of _messages_ that drive a workflow's execution. Example messages include workflow creation messages, activity task completion messages, etc. Each message is stored in its own key in the state store with the name `inbox-NNNNNN`, where `NNNNNN` is a 6-digit number indicating the ordering of the messages. These state keys are removed once the corresponding messages are consumed by the workflow. |
| `history-NNNNNN` | A workflow's history is an ordered list of events that represent a workflow's execution history. Each key in the history holds the data for a single history event. Like an append-only log, workflow history events are only added and never removed (except when a workflow performs a "continue as new" operation, which purges all history and restarts a workflow with a new input). |
| `customStatus` | Contains a user-defined workflow status value. There is exactly one `customStatus` key for each workflow actor instance. |
| `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
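
As a purely illustrative example, assuming the `app-id||actor-type||actor-id||key` prefixing scheme Dapr uses for actor state and a workflow instance ID of `order123` in an app named `myapp` (both hypothetical), the records for a single workflow actor might appear in the state store under keys such as:

```
myapp||dapr.internal.wfengine.workflow||order123||inbox-000001
myapp||dapr.internal.wfengine.workflow||order123||history-000001
myapp||dapr.internal.wfengine.workflow||order123||history-000002
myapp||dapr.internal.wfengine.workflow||order123||customStatus
myapp||dapr.internal.wfengine.workflow||order123||metadata
```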

{{% alert title="Warning" color="warning" %}}
In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), workflow actor state will remain in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of old workflow state.
{{% /alert %}}

The following diagram illustrates the typical lifecycle of a workflow actor.

<img src="/images/workflow-overview/workflow-actor-flowchart.png" alt="Dapr Workflow Actor Flowchart"/>

To summarize:

1. A workflow actor is activated when it receives a new message.
1. New messages then trigger the associated workflow code (in your application) to run and return an execution result back to the workflow actor.
1. Once the result is received, the actor schedules any tasks as necessary.
1. After scheduling, the actor updates its state in the state store.
1. Finally, the actor goes idle until it receives another message. During this idle time, the sidecar may decide to unload the workflow actor from memory.

### Activity actors

A new instance of the `dapr.internal.wfengine.activity` actor is activated for every activity task that gets scheduled by a workflow. The ID of the _activity_ actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371`, the third activity scheduled by that workflow will have an ID of `876bf371#2`, where `2` is the sequence number.

Each activity actor stores a single key in the state store:

| Key | Description |
| --- | ----------- |
| `activityreq-N` | The key contains the activity invocation payload, which includes the serialized activity input data. The `N` value is a 64-bit unsigned integer that represents the _generation_ of the workflow, a concept which is outside the scope of this documentation. |

{{% alert title="Warning" color="warning" %}}
In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), activity actor state will remain in the state store even after the activity task has completed. Scheduling a large number of workflow activities could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of completed activity state.
{{% /alert %}}

The following diagram illustrates the typical lifecycle of an activity actor.

<img src="/images/workflow-overview/workflow-activity-actor-flowchart.png" alt="Workflow Activity Actor Flowchart"/>

Activity actors are short-lived:

1. Activity actors are activated when a workflow actor schedules an activity task.
1. Activity actors then immediately call into the workflow application to invoke the associated activity code.
1. Once the activity code has finished running and has returned its result, the activity actor sends a message to the parent workflow actor with the execution results.
1. Once the results are sent, the workflow is triggered to move forward to its next step.

### Reminder usage and execution guarantees

The Dapr Workflow engine ensures workflow fault tolerance by using [actor reminders]({{< ref "howto-actors.md#actor-timers-and-reminders" >}}) to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor will create a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder will reactivate the corresponding actor and the execution will be retried.

TODO: Diagrams showing the process of invoking workflow and activity actors

{{% alert title="Important" color="warning" %}}
Too many active reminders in a cluster may result in performance issues. If your application is already using actors and reminders heavily, be mindful of the additional load that Dapr Workflows may add to your system.
{{% /alert %}}

### State store usage

Dapr Workflows use actors internally to drive the execution of workflows. Like any actors, these internal workflow actors store their state in the configured state store. Any state store that supports actors implicitly supports Dapr Workflow.

As discussed in the [workflow actors]({{< ref "workflow-architecture.md#workflow-actors" >}}) section, workflows save their state incrementally by appending to a history log. The history log for a workflow is distributed across multiple state store keys so that each "checkpoint" only needs to append the newest entries.

The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state. [Sequential workflows]({{< ref "workflow-overview.md#task-chaining" >}}) will therefore make smaller batch updates to the state store, while [fan-out/fan-in workflows]({{< ref "workflow-overview.md#fan-outfan-in" >}}) will require larger batches. The size of the batch is also impacted by the size of inputs and outputs when workflows [invoke activities]({{< ref "workflow-features-concepts.md#workflow-activities" >}}) or [child workflows]({{< ref "workflow-features-concepts.md#child-workflows" >}}).

TODO: Image illustrating a workflow appending a batch of keys to a state store.

Different state store implementations may implicitly put restrictions on the types of workflows you can author. For example, the Azure Cosmos DB state store limits item sizes to 2 MB of UTF-8 encoded JSON ([source](https://learn.microsoft.com/azure/cosmos-db/concepts-limits#per-item-limits)). The input or output payload of an activity or child workflow is stored as a single record in the state store, so an item limit of 2 MB means that workflow and activity inputs and outputs can't exceed 2 MB of JSON-serialized data.

Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.

## Workflow scalability

Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors. The placement service:

- Doesn't distinguish between workflow actors and actors you define in your application
- Will load balance workflows using the same algorithms that it uses for actors

The expected scalability of a workflow is determined by the following factors:

* The number of machines used to host your workflow application
* The CPU and memory resources available on the machines running workflows
* The scalability of the state store configured for actors
* The scalability of the actor placement service and the reminder subsystem

The implementation details of the workflow code in the target application also play a role in the scalability of individual workflow instances. Each workflow instance executes on a single node at a time, but a workflow can schedule activities and child workflows which run on other nodes.

Workflows can also schedule these activities and child workflows to run in parallel, allowing a single workflow to potentially distribute compute tasks across all available nodes in the cluster.

TODO: Diagram showing an example distribution of workflows, child-workflows, and activity tasks.

{{% alert title="Important" color="warning" %}}
Currently, there are no global limits imposed on workflow and activity concurrency. A runaway workflow could therefore potentially consume all resources in a cluster if it attempts to schedule too many tasks in parallel. Use care when authoring Dapr Workflows that schedule large batches of work in parallel.

Also, the Dapr Workflow engine requires that all instances of each workflow app register the exact same set of workflows and activities. In other words, it's not possible to scale certain workflows or activities independently. All workflows and activities within an app must be scaled together.
{{% /alert %}}

Workflows don't control the specifics of how load is distributed across the cluster. For example, if a workflow schedules 10 activity tasks to run in parallel, all 10 tasks may run on as many as 10 different compute nodes or as few as a single compute node. The actual scale behavior is determined by the actor placement service, which manages the distribution of the actors that represent each of the workflow's tasks.

## Workflow latency

In order to provide guarantees around durability and resiliency, Dapr Workflows frequently write to the state store and rely on reminders to drive execution. Dapr Workflows therefore may not be appropriate for latency-sensitive workloads. Expected sources of high latency include:

* Latency from the state store when persisting workflow state.
* Latency from the state store when rehydrating workflows with large histories.
* Latency caused by too many active reminders in the cluster.
* Latency caused by high CPU usage in the cluster.

See the [Reminder usage and execution guarantees section]({{< ref "workflow-architecture.md#reminder-usage-and-execution-guarantees" >}}) for more details on how the design of workflow actors may impact execution latency.

## Next steps

{{< button text="Author workflows >>" page="howto-author-workflow.md" >}}

## Related links

- [Workflow overview]({{< ref workflow-overview.md >}})
- [Workflow API reference]({{< ref workflow_api.md >}})
- Learn more about [how to manage workflows with the .NET SDK](todo) and try out [the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
---
type: docs
title: "Features and concepts"
linkTitle: "Features and concepts"
weight: 2000
description: "Learn more about the Dapr Workflow features and concepts"
---

Now that you've learned about the [workflow building block]({{< ref workflow-overview.md >}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow SDK. The Dapr Workflow SDK exposes several core features and concepts which are common across all supported languages.

## Workflows

Dapr Workflows are functions you write that define a series of steps or tasks to be executed in a particular order. The Dapr Workflow engine takes care of coordinating and managing the execution of the steps, including managing failures and retries. If the app hosting your workflows is scaled out across multiple machines, the workflow engine may also load balance the execution of workflows and their tasks across multiple machines.

There are several different kinds of tasks that a workflow can schedule, including [activities]({{< ref "workflow-features-concepts.md#workflow-activities" >}}) for executing custom logic, [durable timers]({{< ref "workflow-features-concepts.md#durable-timers" >}}) for putting the workflow to sleep for arbitrary lengths of time, [child workflows]({{< ref "workflow-features-concepts.md#child-workflows" >}}) for breaking larger workflows into smaller pieces, and [external event waiters]({{< ref "workflow-features-concepts.md#external-events" >}}) for blocking workflows until they receive external event signals. These tasks are described in more detail in their corresponding sections.

### Workflow identity

Each workflow you define has a name, and individual executions of a workflow have a unique _instance ID_. Workflow instance IDs can be generated by your app code, which is useful when workflows correspond to business entities like documents or jobs, or can be auto-generated UUIDs. A workflow's instance ID is useful for debugging and also for managing workflows using the [Workflow APIs]({{< ref workflow_api.md >}}).

Only one workflow instance with a given ID can exist at any given time. However, if a workflow instance completes or fails, its ID can be reused by a new workflow instance. Note, however, that the new workflow instance will effectively replace the old one in the configured state store.

### Workflow replay

Dapr Workflows maintain their execution state by using a technique known as [event sourcing](https://learn.microsoft.com/azure/architecture/patterns/event-sourcing). Instead of directly storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow authoring SDK, the storing of these history events happens automatically whenever the workflow "awaits" the result of a scheduled task.

{{% alert title="Note" color="primary" %}}
For more information on how workflow state is managed, see the [workflow architecture guide]({{< ref workflow-architecture.md >}}).
{{% /alert %}}

When a workflow "awaits" a scheduled task, it may unload itself from memory until the task completes. Once the task completes, the workflow engine will schedule the workflow function to run again. This second execution of the workflow function is known as a _replay_. When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that it already scheduled, instead of scheduling that task again, the workflow engine will return the result of the scheduled task to the workflow and continue execution until the next "await" point. This "replay" behavior continues until the workflow function completes or fails with an error.

Using this replay technique, a workflow is able to resume execution from any "await" point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state is also what makes Dapr Workflows _durable_ and fault tolerant.

### Workflow determinism and code constraints

The workflow replay behavior described previously requires that workflow function code be _deterministic_. A deterministic workflow function is one that takes the exact same actions when provided the exact same inputs.

Follow these rules to ensure that your workflow code is deterministic. A brief sketch illustrating the first two rules follows this list.

1. **Workflow functions must not call non-deterministic APIs.**
   For example, APIs that generate random numbers, random UUIDs, or the current date are non-deterministic. To work around this limitation, use these APIs in activity functions or (preferred) use built-in equivalent APIs offered by the authoring SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner.

1. **Workflow functions must not interact _directly_ with external state.**
   External data includes any data that isn't stored in the workflow state. For example, workflows must not interact with global variables, environment variables, or the file system, or make network calls. Instead, workflows should interact with external state _indirectly_ using workflow inputs, activity tasks, and through external event handling.

1. **Workflow functions must execute only on the workflow dispatch thread.**
   The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never schedule background threads or use APIs that schedule a callback function to run on another thread. Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.
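
The following is a minimal, illustrative sketch of the first two rules using the .NET authoring SDK. It assumes the `context.CurrentUtcDateTime` property and a hypothetical `CreateConfirmationId` activity; treat it as an approximation rather than a verbatim sample.

```csharp
// Inside a workflow's RunAsync method:

// Don't: call non-deterministic APIs directly in workflow code.
// DateTime expiration = DateTime.UtcNow.AddDays(30);
// string confirmationId = Guid.NewGuid().ToString();

// Do: use the deterministic equivalent provided by the workflow context...
DateTime expiration = context.CurrentUtcDateTime.AddDays(30);

// ...or move the non-deterministic work into an activity (hypothetical activity name).
string confirmationId = await context.CallActivityAsync<string>("CreateConfirmationId", null);
```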

While it's critically important to follow these determinism code constraints, you'll quickly become familiar with them and learn how to work with them effectively when writing workflow code.

### Infinite loops and eternal workflows

As discussed in the [workflow replay]({{< ref "#workflow-replay" >}}) section, workflows maintain an append-only, event-sourced history log of all their operations. To avoid runaway resource usage, workflows should limit the number of operations they schedule. For example, a workflow should never use infinite loops in its implementation, nor should it schedule millions of tasks.

There are two techniques that can be used to write workflows that may need to schedule extreme numbers of tasks:

1. **Use the _continue-as-new_ API**:
   Each workflow authoring SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows", or workflows that have no logical end state, like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small.

1. **Use child workflows**:
   Each workflow authoring SDK also exposes an API for creating child workflows. A child workflow is just like any other workflow, except that it's scheduled by a parent workflow. Child workflows have their own history and also have the benefit of allowing you to distribute workflow function execution across multiple machines. If a workflow needs to schedule thousands of tasks or more, it's recommended that those tasks be distributed across child workflows so that no single workflow's history size grows too large.

### Updating workflow code

Because workflows are long-running and durable, updating workflow code must be done with extreme care. As discussed in the [Workflow determinism]({{< ref "#workflow-determinism-and-code-constraints" >}}) section, workflow code must be deterministic so that the workflow engine can rebuild its state to exactly match its previous checkpoint. Updates to workflow code must preserve this determinism if there are any non-completed workflow instances in the system. Otherwise, updates to workflow code can result in runtime failures the next time those workflows execute.

Here are a couple of examples of code updates that can break workflow determinism:

* **Changing workflow function signatures**: Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
* **Changing the number or order of workflow tasks**: Changing the number or order of workflow tasks will cause a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.

To work around these constraints, instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates. Upstream code that creates workflows should also be updated to only create instances of the new workflows. Leaving the old code around ensures that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, then the old workflow code can be safely deleted.
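
As a hedged sketch of this side-by-side versioning approach, reusing the registration API from the authoring guide (the `OrderProcessingWorkflowV2` type is a hypothetical copy of the original workflow containing the updated logic):

```csharp
builder.Services.AddDaprWorkflow(options =>
{
    // Keep the original definition registered so in-flight instances can finish.
    options.RegisterWorkflow<OrderProcessingWorkflow>();

    // Register the updated definition under a new name; new instances target this one.
    options.RegisterWorkflow<OrderProcessingWorkflowV2>();

    options.RegisterActivity<NotifyActivity>();
    options.RegisterActivity<ReserveInventoryActivity>();
    options.RegisterActivity<ProcessPaymentActivity>();
});
```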

## Workflow activities

Workflow activities are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process. For example, you might create a workflow to process an order. The tasks may involve checking the inventory, charging the customer, and creating a shipment. Each task would be a separate activity. These activities may be executed serially, in parallel, or some combination of both.

Unlike workflows, activities aren't restricted in the type of work you can do in them. Activities are frequently used to make network calls or run CPU intensive operations. An activity can also return data back to the workflow.

The Dapr Workflow engine guarantees that each called activity will be executed **at least once** as part of a workflow's execution. Because activities only guarantee at-least-once execution, it's recommended that activity logic be implemented as idempotent whenever possible.

## Child workflows

In addition to activities, workflows can schedule other workflows as _child workflows_. A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it.

Child workflows have many benefits:

* You can split large workflows into a series of smaller child workflows, making your code more maintainable.
* You can distribute workflow logic across multiple compute nodes concurrently, which is useful if your workflow logic otherwise needs to coordinate a lot of tasks.
* You can reduce memory usage and CPU overhead by keeping the history of the parent workflow smaller.

The return value of a child workflow is its output. If a child workflow fails with an exception, then that exception will be surfaced to the parent workflow, just like it is when an activity task fails with an exception. Child workflows also support automatic retry policies.

{{% alert title="Note" color="primary" %}}
Because child workflows are independent of their parents, terminating a parent workflow does not affect any child workflows. You must terminate each child workflow independently using its instance ID.
{{% /alert %}}
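
For illustration, here is a minimal sketch of scheduling a child workflow from a parent using the .NET SDK. It assumes a `CallChildWorkflowAsync` method on the workflow context and a hypothetical `ProcessShipmentWorkflow` with its own `ShipmentRequest`/`ShipmentResult` types; consult your SDK for the exact method names.

```csharp
// Inside the parent workflow's RunAsync method:
// Schedule the (hypothetical) ProcessShipmentWorkflow as a child workflow and await its output.
ShipmentResult shipment = await context.CallChildWorkflowAsync<ShipmentResult>(
    nameof(ProcessShipmentWorkflow),
    new ShipmentRequest(order.Name, order.Quantity));

if (!shipment.Succeeded)
{
    // The child workflow's result (or failure) can drive the parent's next steps.
    await context.CallActivityAsync(nameof(NotifyActivity), new Notification("Shipment failed"));
}
```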

## Durable timers

Dapr Workflows allow you to schedule reminder-like durable delays for any time range, including minutes, days, or even years. These _durable timers_ can be scheduled by workflows to implement simple delays or to set up ad-hoc timeouts on other async tasks. More specifically, a durable timer can be set to trigger on a particular date or after a specified duration. There are no limits on the maximum duration of durable timers, which are backed internally by actor reminders. For example, a workflow that tracks a 30-day free subscription to a service could be implemented using a durable timer that fires 30 days after the workflow is created. Workflows can be safely unloaded from memory while waiting for a durable timer to fire.

{{% alert title="Note" color="primary" %}}
Some APIs in the workflow authoring SDK may internally schedule durable timers to implement internal timeout behavior.
{{% /alert %}}
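
For example, here is a hedged sketch of the 30-day subscription scenario described above, using the `CreateTimer` API shown in the monitor pattern example later in these docs (the `SendExpirationNotice` activity and the `subscription` input are hypothetical):

```csharp
// Inside the workflow's RunAsync method:
// Durably pause the workflow for 30 days. The workflow can be unloaded from
// memory during this time and is resumed when the timer fires.
await context.CreateTimer(TimeSpan.FromDays(30));

// Hypothetical activity that notifies the customer their free trial has ended.
await context.CallActivityAsync(nameof(SendExpirationNotice), subscription);
```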

## External events

Sometimes workflows will need to wait for events that are raised by external systems. For example, an approval workflow may require a human to explicitly approve an order request within an order processing workflow if the total cost exceeds some threshold. Another example is a trivia game orchestration workflow that pauses while waiting for all participants to submit their answers to trivia questions. These mid-execution inputs are referred to as _external events_.

External events have a _name_ and a _payload_ and are delivered to a single workflow instance. Workflows can create "_wait for external event_" tasks that subscribe to external events and _await_ those tasks to block execution until the event is received. The workflow can then read the payload of these events and make decisions about which next steps to take. External events can be processed serially or in parallel. External events can be raised by other workflows or by workflow code.

{{% alert title="Note" color="primary" %}}
The ability to raise external events to workflows is not included in the alpha version of Dapr's workflow API.
{{% /alert %}}

Workflows can also wait for multiple external event signals of the same name, in which case they are dispatched to the corresponding workflow tasks in a first-in, first-out (FIFO) manner. If a workflow receives an external event signal but has not yet created a "wait for external event" task, the event will be saved into the workflow's history and consumed immediately after the workflow requests the event.
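
As an illustrative sketch of the approval scenario above, assuming the .NET SDK exposes a `WaitForExternalEventAsync` method on the workflow context (the event name and the `ApprovalResult` type are assumptions; check your SDK's API for exact signatures):

```csharp
// Inside the workflow's RunAsync method:
// Block the workflow until an "approval-received" event arrives for this instance.
ApprovalResult approval = await context.WaitForExternalEventAsync<ApprovalResult>("approval-received");

if (approval.IsApproved)
{
    // Only proceed to payment once a human has approved the high-cost order.
    await context.CallActivityAsync(nameof(ProcessPaymentActivity), approval.PaymentDetails);
}
```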

## Next steps

{{< button text="Workflow patterns >>" page="workflow-patterns.md" >}}

## Related links

- [Try out Dapr Workflows using the quickstart](todo)
- [Workflow overview]({{< ref workflow-overview.md >}})
- [Workflow API reference]({{< ref workflow_api.md >}})
- Learn more about [how to manage workflows with the .NET SDK](todo) and try out [the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
---
type: docs
title: Workflow overview
linkTitle: Overview
weight: 1000
description: "Overview of Dapr Workflow"
---

{{% alert title="Note" color="primary" %}}
Dapr Workflow is currently in alpha.
{{% /alert %}}

Dapr Workflow makes orchestrating logic for messaging, state management, and failure handling across various microservices easier for developers. Prior to adding workflows to Dapr, you'd often need to build ad-hoc workflows behind the scenes in order to bridge that gap.

The durable, resilient Dapr Workflow capability:

- Offers a built-in workflow runtime for driving Dapr Workflow execution
- Provides SDKs for authoring workflows in code, using any language
- Provides HTTP and gRPC APIs for managing workflows (start, query, suspend/resume, terminate)
- Will integrate with future supported external workflow runtime components

<img src="/images/workflow-overview/workflow-overview.png" width=800 alt="Diagram showing basics of Dapr Workflows">

Dapr Workflows can assist with scenarios like:

- Order processing involving inventory management, payment systems, shipping, etc.
- HR onboarding workflows coordinating tasks across multiple departments and participants
- Orchestrating the roll-out of digital menu updates in a national restaurant chain
- Image processing workflows involving API-based classification and storage

## Features

### Workflows and activities

With Dapr Workflow, you can write activities and then compose those activities together into a workflow. Workflow activities are:

- The basic unit of work in a workflow
- The tasks that get orchestrated in the business process

[Learn more about workflow activities.]({{< ref "workflow-features-concepts.md#workflow-activities" >}})

### Child workflows

In addition to activities, you can write workflows to schedule other workflows as child workflows. A child workflow is independent of the parent workflow that started it and supports automatic retry policies.

[Learn more about child workflows.]({{< ref "workflow-features-concepts.md#child-workflows" >}})

### Timers and reminders

As with user-defined actors, you can schedule reminder-like durable delays for any time range.

[Learn more about workflow timers]({{< ref "workflow-features-concepts.md#durable-timers" >}}) and [reminders]({{< ref "workflow-architecture.md#reminder-usage-and-execution-guarantees" >}}).

### Workflow HTTP calls to manage a workflow

When you create an application with workflow code and run it with Dapr, you can call specific workflows that reside in the application. Each individual workflow can be:

- Started or terminated through a POST request
- Queried through a GET request

[Learn more about how to manage a workflow using HTTP calls.]({{< ref workflow_api.md >}})

### Manage other workflow runtimes with workflow components

You can call other workflow runtimes (for example, Temporal and Netflix Conductor) by writing your own workflow component.

## Workflow patterns

Dapr Workflows simplify complex, stateful coordination requirements in microservice architectures. Several common application patterns, such as task chaining, fan-out/fan-in, and monitoring, can benefit from Dapr Workflows.

Learn more about [different types of workflow patterns](todo)

## Workflow SDKs

The Dapr Workflow _authoring SDKs_ are language-specific SDKs that contain types and functions to implement workflow logic. The workflow logic lives in your application and is orchestrated by the Dapr Workflow engine running in the Dapr sidecar via a gRPC stream.

### Supported SDKs

You can use the following SDKs to author a workflow.

| Language stack | Package |
| - | - |
| .NET | [Dapr.Workflow](https://www.nuget.org/profiles/dapr.io) |
| Go | todo |
| Java | todo |

## Try out workflows

### Quickstarts and tutorials

Want to put workflows to the test? Walk through the following quickstart and tutorials to see workflows in action:

| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Workflow quickstart](link) | Description of the quickstart. |
| [Workflow tutorial](link) | Description of the tutorial. |

### Start using workflows directly in your app

Want to skip the quickstarts? Not a problem. You can try out the workflow building block directly in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using workflows, starting with [how to author a workflow]({{< ref howto-author-workflow.md >}}).

## Watch the demo

Watch [this video for an overview of Dapr Workflows](https://youtu.be/s1p9MNl4VGo?t=131):

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/s1p9MNl4VGo?start=131" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Next steps

{{< button text="Workflow features and concepts >>" page="workflow-features-concepts.md" >}}

## Related links

- [Workflow API reference]({{< ref workflow_api.md >}})
- Learn more about [how to manage workflows with the .NET SDK](todo) and try out [the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
---
type: docs
title: Workflow patterns
linkTitle: Workflow patterns
weight: 3000
description: "Write different types of workflow patterns"
---

Dapr Workflows simplify complex, stateful coordination requirements in microservice architectures. The following sections describe several application patterns that can benefit from Dapr Workflows.

## Task chaining

In the task chaining pattern, multiple steps in a workflow are run in succession, and the output of one step may be passed as the input to the next step. Task chaining workflows typically involve creating a sequence of operations that need to be performed on some data, such as filtering, transforming, and reducing.

In some cases, the steps of the workflow may need to be orchestrated across multiple microservices. For increased reliability and scalability, you're also likely to use queues to trigger the various steps.

<img src="/images/workflow-overview/workflows-chaining.png" width=800 alt="Diagram showing how the task chaining workflow pattern works">

While the pattern is simple, there are many complexities hidden in the implementation. For example:

- What happens if one of the microservices is unavailable for an extended period of time?
- Can failed steps be automatically retried?
- If not, how do you facilitate the rollback of previously completed steps, if applicable?
- Implementation details aside, is there a way to visualize the workflow so that other engineers can understand what it does and how it works?

Dapr Workflow solves these complexities by allowing you to implement the task chaining pattern concisely as a simple function in the programming language of your choice, as shown in the following example.

{{< tabs ".NET" >}}

{{% codetab %}}

```csharp
// Exponential backoff retry policy that survives long outages
var retryPolicy = TaskOptions.FromRetryPolicy(new RetryPolicy(
    maxNumberOfAttempts: 10,
    firstRetryInterval: TimeSpan.FromMinutes(1),
    backoffCoefficient: 2.0,
    maxRetryInterval: TimeSpan.FromHours(1)));

// Task failures are surfaced as ordinary .NET exceptions
try
{
    var result1 = await context.CallActivityAsync<string>("Step1", wfInput, retryPolicy);
    var result2 = await context.CallActivityAsync<byte[]>("Step2", result1, retryPolicy);
    var result3 = await context.CallActivityAsync<long[]>("Step3", result2, retryPolicy);
    var result4 = await context.CallActivityAsync<Guid[]>("Step4", result3, retryPolicy);
    return string.Join(", ", result4);
}
catch (TaskFailedException)
{
    // Retries expired - apply custom compensation logic
    await context.CallActivityAsync<long[]>("MyCompensation", options: retryPolicy);
    throw;
}
```

{{% /codetab %}}

{{< /tabs >}}

{{% alert title="Note" color="primary" %}}
In the example above, `"Step1"`, `"Step2"`, `"MyCompensation"`, etc. represent workflow activities, which are functions in your code that actually implement the steps of the workflow. For brevity, these activity implementations are left out of this example.
{{% /alert %}}

As you can see, the workflow is expressed as a simple series of statements in the programming language of your choice. This allows any engineer in the organization to quickly understand the end-to-end flow without necessarily needing to understand the end-to-end system architecture.

Behind the scenes, the Dapr Workflow runtime:

- Takes care of executing the workflow and ensuring that it runs to completion.
- Saves progress automatically.
- Automatically resumes the workflow from the last completed step if the workflow process itself fails for any reason.
- Enables error handling to be expressed naturally in your target programming language, allowing you to implement compensation logic easily.
- Provides built-in retry configuration primitives to simplify the process of configuring complex retry policies for individual steps in the workflow.

## Fan-out/fan-in

In the fan-out/fan-in design pattern, you execute multiple tasks simultaneously across potentially multiple workers, wait for them to finish, and perform some aggregation on the result.

<img src="/images/workflow-overview/workflows-fanin-fanout.png" width=800 alt="Diagram showing how the fan-out/fan-in workflow pattern works">

In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-overview.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:

- How do you control the degree of parallelism?
- How do you know when to trigger subsequent aggregation steps?
- What if the number of parallel steps is dynamic?

Dapr Workflow provides a way to express the fan-out/fan-in pattern as a simple function, as shown in the following example:

{{< tabs ".NET" >}}

{{% codetab %}}

```csharp
// Get a list of N work items to process in parallel.
object[] workBatch = await context.CallActivityAsync<object[]>("GetWorkBatch", null);

// Schedule the parallel tasks, but don't wait for them to complete yet.
var parallelTasks = new List<Task<int>>(workBatch.Length);
for (int i = 0; i < workBatch.Length; i++)
{
    Task<int> task = context.CallActivityAsync<int>("ProcessWorkItem", workBatch[i]);
    parallelTasks.Add(task);
}

// Everything is scheduled. Wait here until all parallel tasks have completed.
await Task.WhenAll(parallelTasks);

// Aggregate all N outputs and publish the result.
int sum = parallelTasks.Sum(t => t.Result);
await context.CallActivityAsync("PostResults", sum);
```

{{% /codetab %}}

{{< /tabs >}}

The key takeaways from this example are:

- The fan-out/fan-in pattern can be expressed as a simple function using ordinary programming constructs
- The number of parallel tasks can be static or dynamic
- The workflow itself is capable of aggregating the results of parallel executions

While not shown in the example, it's possible to go further and limit the degree of concurrency using simple, language-specific constructs. Furthermore, the execution of the workflow is durable. If a workflow starts 100 parallel task executions and only 40 complete before the process crashes, the workflow will restart itself automatically and schedule only the remaining 60 tasks.
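
For instance, one deterministic way to cap the degree of concurrency is to schedule the activity calls in fixed-size batches. The sketch below assumes the same `context` and `workBatch` variables as the example above and uses the standard .NET `Chunk` LINQ operator; it's illustrative rather than part of the sample.

```csharp
// Process the work items in batches of at most 5 concurrent activity calls.
const int maxConcurrency = 5;
var results = new List<int>();
foreach (object[] batch in workBatch.Chunk(maxConcurrency))
{
    // Schedule up to maxConcurrency activities, then wait for the whole batch.
    var batchTasks = batch.Select(item => context.CallActivityAsync<int>("ProcessWorkItem", item));
    results.AddRange(await Task.WhenAll(batchTasks));
}
```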

## Async HTTP APIs

Asynchronous HTTP APIs are typically implemented using the [Asynchronous Request-Reply pattern](https://learn.microsoft.com/azure/architecture/patterns/async-request-reply). Implementing this pattern traditionally involves the following:

1. A client sends a request to an HTTP API endpoint (the _start API_)
1. The _start API_ writes a message to a backend queue, which triggers the start of a long-running operation
1. Immediately after scheduling the backend operation, the _start API_ returns an HTTP 202 response to the client with an identifier that can be used to poll for status
1. The _status API_ queries a database that contains the status of the long-running operation
1. The client repeatedly polls the _status API_ either until some timeout expires or it receives a "completion" response

The end-to-end flow is illustrated in the following diagram.

<img src="/images/workflow-overview/workflow-async-request-response.png" width=800 alt="Diagram showing how the async request response pattern works"/>

The challenge with implementing the asynchronous request-reply pattern is that it involves the use of multiple APIs and state stores. It also involves implementing the protocol correctly so that the client knows how to automatically poll for status and when the operation is complete.

The Dapr workflow HTTP API supports the asynchronous request-reply pattern out of the box, without requiring you to write any code or do any state management.

The following `curl` commands illustrate how the workflow APIs support this pattern.

```bash
curl -X POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678/start -d '{"input":{"Name":"Paperclips","Quantity":1,"TotalCost":9.95}}'
```

The previous command will result in the following response JSON:

```json
{"instance_id":"12345678"}
```

The HTTP client can then construct the status query URL using the workflow instance ID and poll it repeatedly until it sees a `COMPLETED`, `FAILED`, or `TERMINATED` status in the payload.

```bash
curl http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678
```

The following is an example of what an in-progress workflow status might look like.

```json
{
  "WFInfo": {
    "instance_id": "12345678"
  },
  "start_time": "2023-02-05T00:32:05Z",
  "metadata": {
    "dapr.workflow.custom_status": "",
    "dapr.workflow.input": "{\"Name\":\"Paperclips\",\"Quantity\":1,\"TotalCost\":9.95}",
    "dapr.workflow.last_updated": "2023-02-05T00:32:18Z",
    "dapr.workflow.name": "OrderProcessingWorkflow",
    "dapr.workflow.runtime_status": "RUNNING"
  }
}
```

As you can see from the previous example, the workflow's runtime status is `RUNNING`, which lets the client know that it should continue polling.

If the workflow has completed, the status might look as follows.

```json
{
  "WFInfo": {
    "instance_id": "12345678"
  },
  "start_time": "2023-02-05T00:32:05Z",
  "metadata": {
    "dapr.workflow.custom_status": "",
    "dapr.workflow.input": "{\"Name\":\"Paperclips\",\"Quantity\":1,\"TotalCost\":9.95}",
    "dapr.workflow.last_updated": "2023-02-05T00:32:23Z",
    "dapr.workflow.name": "OrderProcessingWorkflow",
    "dapr.workflow.output": "{\"Processed\":true}",
    "dapr.workflow.runtime_status": "COMPLETED"
  }
}
```

As you can see from the previous example, the runtime status of the workflow is now `COMPLETED`, which means the client can stop polling for updates.

## Monitor

The monitor pattern is a recurring process that typically:

1. Checks the status of a system
1. Takes some action based on that status (for example, sending a notification)
1. Sleeps for some period of time
1. Repeats

The following diagram provides a rough illustration of this pattern.

<img src="/images/workflow-overview/workflow-monitor-pattern.png" width=600 alt="Diagram showing how the monitor pattern works"/>

Depending on the business needs, there may be a single monitor or there may be multiple monitors, one for each business entity (for example, a stock). Furthermore, the amount of time to sleep may need to change, depending on the circumstances. These requirements make using cron-based scheduling systems impractical.

Dapr Workflow supports this pattern natively by allowing you to implement _eternal workflows_. Rather than writing infinite while-loops ([which is an anti-pattern]({{< ref "workflow-features-concepts.md#infinite-loops-and-eternal-workflows" >}})), Dapr Workflow exposes a _continue-as-new_ API that workflow authors can use to restart a workflow function from the beginning with a new input.

{{< tabs ".NET" >}}

{{% codetab %}}

```csharp
public override async Task<object> RunAsync(WorkflowContext context, MyEntityState myEntityState)
{
    TimeSpan nextSleepInterval;

    var status = await context.CallActivityAsync<string>("GetStatus");
    if (status == "healthy")
    {
        myEntityState.IsHealthy = true;

        // Check less frequently when in a healthy state
        nextSleepInterval = TimeSpan.FromMinutes(60);
    }
    else
    {
        if (myEntityState.IsHealthy)
        {
            myEntityState.IsHealthy = false;
            await context.CallActivityAsync("SendAlert", myEntityState);
        }

        // Check more frequently when in an unhealthy state
        nextSleepInterval = TimeSpan.FromMinutes(5);
    }

    // Put the workflow to sleep until the determined time
    await context.CreateTimer(nextSleepInterval);

    // Restart from the beginning with the updated state
    context.ContinueAsNew(myEntityState);
    return null;
}
```

> This example assumes you have a predefined `MyEntityState` class with a boolean `IsHealthy` property.

{{% /codetab %}}

{{< /tabs >}}

A workflow implementing the monitor pattern can loop forever or it can terminate itself gracefully by not calling _continue-as-new_.

{{% alert title="Note" color="primary" %}}
This pattern can also be expressed using actors and reminders. The difference is that this workflow is expressed as a single function with inputs and state stored in local variables. Workflows can also execute a sequence of actions with stronger reliability guarantees, if necessary.
{{% /alert %}}

## Next steps

{{< button text="Workflow architecture >>" page="workflow-architecture.md" >}}

## Related links

- [Try out Dapr Workflows using the quickstart](todo)
- [Workflow overview]({{< ref workflow-overview.md >}})
- [Workflow API reference]({{< ref workflow_api.md >}})
- Learn more about [how to manage workflows with the .NET SDK](todo) and try out [the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow)
|
||||
|
|
@ -11,27 +11,36 @@ aliases:
|
|||
|
||||
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. In this guide, you'll learn how to create a middleware component. To learn how to configure an existing middleware component, see [Configure middleware components]({{< ref middleware.md >}})
|
||||
|
||||
## Writing a custom middleware
|
||||
## Writing a custom HTTP middleware
|
||||
|
||||
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns **fasthttp.RequestHandler** and **error**:
|
||||
HTTP middlewares in Dapr wrap standard Go [net/http](https://pkg.go.dev/net/http) handler functions.
|
||||
|
||||
Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a [**http.Handler**](https://pkg.go.dev/net/http#Handler) callback and an **error**:
|
||||
|
||||
```go
|
||||
type Middleware interface {
|
||||
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
|
||||
GetHandler(metadata middleware.Metadata) (func(next http.Handler) http.Handler, error)
|
||||
}
|
||||
```
|
||||
|
||||
Your handler implementation can include any inbound logic, outbound logic, or both:
|
||||
The handler receives a `next` callback that should be invoked to continue processing the request.
|
||||
|
||||
Your handler implementation can include inbound logic, outbound logic, or both:
|
||||
|
||||
```go
|
||||
|
||||
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
|
||||
func (m *customMiddleware) GetHandler(metadata middleware.Metadata) (func(next http.Handler) http.Handler, error) {
|
||||
var err error
|
||||
return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
|
||||
return func(ctx *fasthttp.RequestCtx) {
|
||||
// inbound logic
|
||||
h(ctx) // call the downstream handler
|
||||
// outbound logic
|
||||
return func(next http.Handler) http.Handler {
|
||||
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
// Inbound logic
|
||||
// ...
|
||||
|
||||
// Call the next handler
|
||||
next.ServeHTTP(w, r)
|
||||
|
||||
// Outbound logic
|
||||
// ...
|
||||
})
|
||||
}, err
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How to: Implement pluggable components"
|
||||
linkTitle: "Pluggable components"
|
||||
linkTitle: "Implement pluggable components"
|
||||
weight: 1100
|
||||
description: "Learn how to author and implement pluggable components"
|
||||
---
|
||||
|
|
@ -105,6 +105,201 @@ After generating the above state store example's service scaffolding code using
|
|||
|
||||
This concrete implementation and auxiliary code are the **core** of your pluggable component. They define how your component behaves when handling gRPC requests from Dapr.
|
||||
|
||||
## Returning semantic errors
|
||||
|
||||
Returning semantic errors is also part of the pluggable component protocol. The component must return specific gRPC codes that have semantic meaning for the user application. These errors are used in a variety of situations, from concurrency requirements to informational-only messages.
|
||||
|
||||
| Error | gRPC error code | Source component | Description |
|
||||
| ------------------------ | ------------------------------- | ---------------- | ----------- |
|
||||
| ETag Mismatch | `codes.FailedPrecondition` | State store | Error mapping to meet concurrency requirements |
|
||||
| ETag Invalid | `codes.InvalidArgument` | State store | |
|
||||
| Bulk Delete Row Mismatch | `codes.Internal` | State store | |
|
||||
|
||||
Learn more about concurrency requirements in the [State Management overview]({{< ref "state-management-overview.md#concurrency" >}}).
|
||||
|
||||
The following examples demonstrate how to return an error in your own pluggable component, changing the messages to suit your needs.
|
||||
|
||||
{{< tabs ".NET" "Java" "Go" >}}
|
||||
<!-- .NET -->
|
||||
{{% codetab %}}
|
||||
|
||||
> **Important:** In order to use .NET for error mapping, first install the [`Google.Api.CommonProtos` NuGet package](https://www.nuget.org/packages/Google.Api.CommonProtos/).
|
||||
|
||||
**Etag Mismatch**
|
||||
|
||||
```csharp
|
||||
var badRequest = new BadRequest();
|
||||
var des = "The ETag field provided does not match the one in the store";
|
||||
badRequest.FieldViolations.Add(
|
||||
new Google.Rpc.BadRequest.Types.FieldViolation
|
||||
{
|
||||
Field = "etag",
|
||||
Description = des
|
||||
});
|
||||
|
||||
var baseStatusCode = Grpc.Core.StatusCode.FailedPrecondition;
|
||||
var status = new Google.Rpc.Status{
|
||||
Code = (int)baseStatusCode
|
||||
};
|
||||
|
||||
status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(badRequest));
|
||||
|
||||
var metadata = new Metadata();
|
||||
metadata.Add("grpc-status-details-bin", status.ToByteArray());
|
||||
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
|
||||
```
|
||||
|
||||
**Etag Invalid**
|
||||
|
||||
```csharp
|
||||
var badRequest = new BadRequest();
|
||||
var des = "The ETag field must only contain alphanumeric characters";
|
||||
badRequest.FieldViolations.Add(
|
||||
new Google.Rpc.BadRequest.Types.FieldViolation
|
||||
{
|
||||
Field = "etag",
|
||||
Description = des
|
||||
});
|
||||
|
||||
var baseStatusCode = Grpc.Core.StatusCode.InvalidArgument;
|
||||
var status = new Google.Rpc.Status
|
||||
{
|
||||
Code = (int)baseStatusCode
|
||||
};
|
||||
|
||||
status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(badRequest));
|
||||
|
||||
var metadata = new Metadata();
|
||||
metadata.Add("grpc-status-details-bin", status.ToByteArray());
|
||||
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
|
||||
```
|
||||
|
||||
**Bulk Delete Row Mismatch**
|
||||
|
||||
```csharp
|
||||
var errorInfo = new Google.Rpc.ErrorInfo();
|
||||
|
||||
errorInfo.Metadata.Add("expected", "100");
|
||||
errorInfo.Metadata.Add("affected", "99");
|
||||
|
||||
var baseStatusCode = Grpc.Core.StatusCode.Internal;
|
||||
var status = new Google.Rpc.Status{
|
||||
Code = (int)baseStatusCode
|
||||
};
|
||||
|
||||
status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(errorInfo));
|
||||
|
||||
var metadata = new Metadata();
|
||||
metadata.Add("grpc-status-details-bin", status.ToByteArray());
|
||||
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Java -->
|
||||
{{% codetab %}}
|
||||
|
||||
Just like the [Dapr Java SDK](https://github.com/dapr/java-sdk/), the Java Pluggable Components SDK uses [Project Reactor](https://projectreactor.io/), which provides an asynchronous API for Java.
|
||||
|
||||
Errors can be returned directly by:
|
||||
1. Calling the `.error()` method in the `Mono` or `Flux` that your method returns
|
||||
1. Providing the appropriate exception as a parameter.
|
||||
|
||||
You can also raise an exception, as long as it is captured and fed back to your resulting `Mono` or `Flux`.
|
||||
|
||||
**ETag Mismatch**
|
||||
|
||||
```java
|
||||
final Status status = Status.newBuilder()
|
||||
.setCode(io.grpc.Status.Code.FAILED_PRECONDITION.value())
|
||||
.setMessage("fake-err-msg-for-etag-mismatch")
|
||||
.addDetails(Any.pack(BadRequest.FieldViolation.newBuilder()
|
||||
.setField("etag")
|
||||
.setDescription("The ETag field provided does not match the one in the store")
|
||||
.build()))
|
||||
.build();
|
||||
return Mono.error(StatusProto.toStatusException(status));
|
||||
```
|
||||
|
||||
**ETag Invalid**
|
||||
|
||||
```java
|
||||
final Status status = Status.newBuilder()
|
||||
.setCode(io.grpc.Status.Code.INVALID_ARGUMENT.value())
|
||||
.setMessage("fake-err-msg-for-invalid-etag")
|
||||
.addDetails(Any.pack(BadRequest.FieldViolation.newBuilder()
|
||||
.setField("etag")
|
||||
.setDescription("The ETag field must only contain alphanumeric characters")
|
||||
.build()))
|
||||
.build();
|
||||
return Mono.error(StatusProto.toStatusException(status));
|
||||
```
|
||||
|
||||
**Bulk Delete Row Mismatch**
|
||||
|
||||
```java
|
||||
final Status status = Status.newBuilder()
|
||||
.setCode(io.grpc.Status.Code.INTERNAL.value())
|
||||
.setMessage("fake-err-msg-for-bulk-delete-row-mismatch")
|
||||
.addDetails(Any.pack(ErrorInfo.newBuilder()
|
||||
.putAllMetadata(Map.ofEntries(
|
||||
Map.entry("affected", "99"),
|
||||
Map.entry("expected", "100")
|
||||
))
|
||||
.build()))
|
||||
.build();
|
||||
return Mono.error(StatusProto.toStatusException(status));
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Go -->
|
||||
{{% codetab %}}
|
||||
|
||||
**ETag Mismatch**
|
||||
|
||||
```go
|
||||
st := status.New(codes.FailedPrecondition, "fake-err-msg")
|
||||
desc := "The ETag field provided does not match the one in the store"
|
||||
v := &errdetails.BadRequest_FieldViolation{
|
||||
Field: etagField,
|
||||
Description: desc,
|
||||
}
|
||||
br := &errdetails.BadRequest{}
|
||||
br.FieldViolations = append(br.FieldViolations, v)
|
||||
st, err := st.WithDetails(br)
|
||||
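// Return st.Err() from your gRPC handler (after checking err from WithDetails) so Dapr receives the enriched status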
```
|
||||
|
||||
**ETag Invalid**
|
||||
|
||||
```go
|
||||
st := status.New(codes.InvalidArgument, "fake-err-msg")
|
||||
desc := "The ETag field must only contain alphanumeric characters"
|
||||
v := &errdetails.BadRequest_FieldViolation{
|
||||
Field: etagField,
|
||||
Description: desc,
|
||||
}
|
||||
br := &errdetails.BadRequest{}
|
||||
br.FieldViolations = append(br.FieldViolations, v)
|
||||
st, err := st.WithDetails(br)
|
||||
```
|
||||
|
||||
**Bulk Delete Row Mismatch**
|
||||
|
||||
```go
|
||||
st := status.New(codes.Internal, "fake-err-msg")
|
||||
br := &errdetails.ErrorInfo{}
|
||||
br.Metadata = map[string]string{
|
||||
affected: "99",
|
||||
expected: "100",
|
||||
}
|
||||
st, err := st.WithDetails(br)
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Next steps
|
||||
|
||||
- Get started with developing a .NET pluggable component using this [sample code](https://github.com/dapr/samples/tree/master/pluggable-components-dotnet-template)
|
||||
|
|
|
|||
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Multi-App Run"
|
||||
linkTitle: "Multi-App Run"
|
||||
weight: 300
|
||||
description: "Support for running multiple Dapr applications with one command"
|
||||
---
|
||||
|
|
@ -0,0 +1,85 @@
|
|||
---
|
||||
type: docs
|
||||
title: Multi-App Run overview
|
||||
linkTitle: Multi-App Run overview
|
||||
weight: 1000
|
||||
description: Run multiple applications with one CLI command
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Multi-App Run is currently a preview feature only supported on Linux and macOS.
|
||||
{{% /alert %}}
|
||||
|
||||
Let's say you want to run several applications locally to test them together, similar to a production scenario. With a local Kubernetes cluster, you'd be able to do this with helm/deployment YAML files. You'd also have to build them as containers and set up Kubernetes, which can add some complexity.
|
||||
|
||||
Instead, you simply want to run them as local executables in self-hosted mode. However, self-hosted mode requires you to:
|
||||
|
||||
- Run multiple `dapr run` commands
|
||||
- Keep track of all ports opened (you cannot have duplicate ports for different applications).
|
||||
- Remember the resources folders and configuration files that each application refers to.
|
||||
- Recall all of the additional flags you used to tweak the `dapr run` command behavior (`--app-health-check-path`, `--dapr-grpc-port`, `--unix-domain-socket`, etc.)
|
||||
|
||||
With Multi-App Run, you can start multiple applications in self-hosted mode with a single `dapr run -f` command and a template file. The template file describes how to start multiple applications as if you had run many separate CLI `run` commands. By default, this template file is called `dapr.yaml`.
|
||||
|
||||
## Multi-App Run template file
|
||||
|
||||
When you execute `dapr run -f .`, it uses the multi-app template file (named `dapr.yaml`) present in the current directory to run all the applications.
|
||||
|
||||
You can give the template file a name other than the default; for example, `dapr run -f ./<your-preferred-file-name>.yaml`.
|
||||
|
||||
The following example includes some of the template properties you can customize for your applications. In the example, you can simultaneously launch 2 applications with app IDs of `processor` and `emit-metrics`.
|
||||
|
||||
```yaml
|
||||
version: 1
|
||||
apps:
|
||||
- appID: processor
|
||||
appDirPath: ../apps/processor/
|
||||
appPort: 9081
|
||||
daprHTTPPort: 3510
|
||||
command: ["go","run", "app.go"]
|
||||
- appID: emit-metrics
|
||||
appDirPath: ../apps/emit-metrics/
|
||||
daprHTTPPort: 3511
|
||||
env:
|
||||
DAPR_HOST_ADD: localhost
|
||||
command: ["go","run", "app.go"]
|
||||
```
|
||||
|
||||
For a more in-depth example and explanation of the template properties, see [Multi-app template]({{< ref multi-app-template.md >}}).
|
||||
|
||||
## Locations for resources and configuration files
|
||||
|
||||
You have options on where to place your applications' resources and configuration files when using Multi-App Run.
|
||||
|
||||
### Point to one file location (with convention)
|
||||
|
||||
You can set all of your applications' resources and configurations at the `~/.dapr` root. This is helpful when all applications share the same resources path, like when testing on a local machine.
|
||||
|
||||
### Separate file locations for each application (with convention)
|
||||
|
||||
When using Multi-App Run, each application directory can have a `.dapr` folder, which contains a `config.yaml` file and a `resources` directory. Otherwise, if the `.dapr` directory is not present within the app directory, the default `~/.dapr/resources/` and `~/.dapr/config.yaml` locations are used.
|
||||
|
||||
If you decide to add a `.dapr` directory in each application directory, with a `/resources` directory and `config.yaml` file, you can specify different resources paths for each application. This approach remains within convention by using the default `~/.dapr`.
|
||||
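For example, a per-app layout following this convention might look like the sketch below (directory and app names are illustrative):

```
.
├── dapr.yaml
└── apps/
    ├── processor/
    │   └── .dapr/
    │       ├── config.yaml
    │       └── resources/
    └── emit-metrics/
        └── .dapr/
            ├── config.yaml
            └── resources/
```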
|
||||
### Point to separate locations (custom)
|
||||
|
||||
You can also name each app directory's `.dapr` directory something other than `.dapr`, such as `webapp` or `backend`. This helps if you'd like to be explicit about resource or application directory paths.
|
||||
|
||||
## Logs
|
||||
|
||||
Logs for the application and `daprd` are captured in separate files. These log files are created automatically under the `.dapr/logs` directory within each application directory (`appDirPath` in the template). The log file names follow the pattern seen below:
|
||||
|
||||
- `<appID>_app_<timestamp>.log` (file name format for `app` log)
|
||||
- `<appID>_daprd_<timestamp>.log` (file name format for `daprd` log)
|
||||
|
||||
Even if you've decided to rename your resources folder to something other than `.dapr`, the log files are written only to the `.dapr/logs` folder (created in the application directory).
|
||||
|
||||
## Watch the demo
|
||||
|
||||
Watch [this video for an overview on Multi-App Run](https://youtu.be/s1p9MNl4VGo?t=2456):
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/s1p9MNl4VGo?start=2456" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
|
||||
## Next steps
|
||||
|
||||
[Learn the Multi-App Run template file structure and its properties]({{< ref multi-app-template.md >}})
|
||||
|
|
@ -0,0 +1,145 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How to: Use the Multi-App Run template file"
|
||||
linkTitle: "How to: Use the Multi-App Run template"
|
||||
weight: 2000
|
||||
description: Unpack the Multi-App Run template file and its properties
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Multi-App Run is currently a preview feature only supported on Linux and macOS.
|
||||
{{% /alert %}}
|
||||
|
||||
The Multi-App Run template file is a YAML file that you can use to run multiple applications at once. In this guide, you'll learn how to:
|
||||
- Use the multi-app template
|
||||
- View started applications
|
||||
- Stop the multi-app template
|
||||
- Structure the multi-app template file
|
||||
|
||||
## Use the multi-app template
|
||||
|
||||
You can use the multi-app template file in one of the following two ways:
|
||||
|
||||
### Execute by providing a directory path
|
||||
|
||||
When you provide a directory path, the CLI looks for a Multi-App Run template file named `dapr.yaml` (the default) in that directory. If the file is not found, the CLI returns an error.
|
||||
|
||||
Execute the following CLI command to read the Multi-App Run template file, named `dapr.yaml` by default:
|
||||
|
||||
```cmd
|
||||
# the template file needs to be called `dapr.yaml` by default if a directory path is given
|
||||
|
||||
dapr run -f <dir_path>
|
||||
```
|
||||
|
||||
### Execute by providing a file path
|
||||
|
||||
If the Multi-App Run template file is named something other than `dapr.yaml`, then you can provide the relative or absolute file path to the command:
|
||||
|
||||
```cmd
|
||||
dapr run -f ./path/to/<your-preferred-file-name>.yaml
|
||||
```
|
||||
|
||||
## View the started applications
|
||||
|
||||
Once the multi-app template is running, you can view the started applications with the following command:
|
||||
|
||||
```cmd
|
||||
dapr list
|
||||
```
|
||||
|
||||
## Stop the multi-app template
|
||||
|
||||
Stop the multi-app run template anytime with either of the following commands:
|
||||
|
||||
```cmd
|
||||
# the template file needs to be called `dapr.yaml` by default if a directory path is given
|
||||
|
||||
dapr stop -f
|
||||
```
|
||||
or:
|
||||
|
||||
```cmd
|
||||
dapr stop -f ./path/to/<your-preferred-file-name>.yaml
|
||||
```
|
||||
|
||||
## Template file structure
|
||||
|
||||
The Multi-App Run template file can include the following properties. Below is an example template showing two applications that are configured with some of the properties.
|
||||
|
||||
```yaml
|
||||
version: 1
|
||||
common: # optional section for variables shared across apps
|
||||
resourcesPath: ./app/components # any dapr resources to be shared across apps
|
||||
env: # any environment variable shared across apps
|
||||
- DEBUG: true
|
||||
apps:
|
||||
- appID: webapp # optional
|
||||
appDirPath: .dapr/webapp/ # REQUIRED
|
||||
resourcesPath: .dapr/resources # (optional) can be default by convention
|
||||
configFilePath: .dapr/config.yaml # (optional) can be default by convention too, ignore if file is not found.
|
||||
appProtocol: HTTP
|
||||
appPort: 8080
|
||||
appHealthCheckPath: "/healthz"
|
||||
command: ["python3" "app.py"]
|
||||
- appID: backend # optional
|
||||
appDirPath: .dapr/backend/ # REQUIRED
|
||||
appProtocol: GRPC
|
||||
appPort: 3000
|
||||
unixDomainSocket: "/tmp/test-socket"
|
||||
env:
|
||||
- DEBUG: false
|
||||
command: ["./backend"]
|
||||
```
|
||||
|
||||
{{% alert title="Important" color="warning" %}}
|
||||
The following rules apply for all the paths present in the template file:
|
||||
- If the path is absolute, it is used as is.
|
||||
- All relative paths under the common section should be provided relative to the template file path.
|
||||
- `appDirPath` under the apps section should be provided relative to the template file path.
|
||||
- All relative paths under the app section should be provided relative to `appDirPath`.
|
||||
|
||||
{{% /alert %}}
|
||||
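To illustrate how these rules combine, here's a hedged sketch that assumes the template file lives at `/home/user/dapr.yaml` (all paths are hypothetical):

```yaml
version: 1
common:
  resourcesPath: ./shared/resources    # relative to the template file -> /home/user/shared/resources
apps:
  - appID: webapp
    appDirPath: ./webapp/              # relative to the template file -> /home/user/webapp/
    resourcesPath: .dapr/resources     # relative to appDirPath -> /home/user/webapp/.dapr/resources
    command: ["./webapp"]
```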
|
||||
## Template properties
|
||||
|
||||
The properties for the Multi-App Run template align with the `dapr run` CLI flags, [listed in the CLI reference documentation]({{< ref "dapr-run.md#flags" >}}).
|
||||
|
||||
|
||||
| Properties | Required | Details | Example |
|
||||
|--------------------------|:--------:|--------|---------|
|
||||
| `appDirPath` | Y | Path to your application code | `./webapp/`, `./backend/` |
|
||||
| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
|
||||
| `resourcesPath` | N | Path to your Dapr resources. Can be default by convention; ignore if directory isn't found | `./app/components`, `./webapp/components` |
|
||||
| `configFilePath` | N | Path to your application's configuration file | `./webapp/config.yaml` |
|
||||
| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `HTTP`, `GRPC` |
|
||||
| `appPort` | N | The port your application is listening on | `8080`, `3000` |
|
||||
| `daprHTTPPort` | N | Dapr HTTP port | |
|
||||
| `daprGRPCPort` | N | Dapr GRPC port | |
|
||||
| `daprInternalGRPCPort` | N | gRPC port for the Dapr Internal API to listen on; used when parsing the value from a local DNS component | |
|
||||
| `metricsPort` | N | The port that Dapr sends its metrics information to | |
|
||||
| `unixDomainSocket` | N | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | `/tmp/test-socket` |
|
||||
| `profilePort` | N | The port for the profile server to listen on | |
|
||||
| `enableProfiling` | N | Enable profiling via an HTTP endpoint | |
|
||||
| `apiListenAddresses` | N | Dapr API listen addresses | |
|
||||
| `logLevel` | N | The log verbosity. | |
|
||||
| `appMaxConcurrency` | N | The concurrency level of the application; default is unlimited | |
|
||||
| `placementHostAddress` | N | | |
|
||||
| `appSSL` | N | Enable https when Dapr invokes the application | |
|
||||
| `daprHTTPMaxRequestSize` | N | Max size of the request body in MB. | |
|
||||
| `daprHTTPReadBufferSize` | N | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. The default is 4 KB | |
|
||||
| `enableAppHealthCheck` | N | Enable the app health check on the application | `true`, `false` |
|
||||
| `appHealthCheckPath` | N | Path to the health check file | `/healthz` |
|
||||
| `appHealthProbeInterval` | N | Interval to probe for the health of the app in seconds | |
|
||||
| `appHealthProbeTimeout` | N | Timeout for app health probes in milliseconds | |
|
||||
| `appHealthThreshold` | N | Number of consecutive failures for the app to be considered unhealthy | |
|
||||
| `enableApiLogging` | N | Enable the logging of all API calls from application to Dapr | |
|
||||
| `runtimePath` | N | Dapr runtime install path | |
|
||||
| `env` | N | Map to environment variable; environment variables applied per application will overwrite environment variables shared across applications | `DEBUG`, `DAPR_HOST_ADD` |
|
||||
|
||||
## Next steps
|
||||
|
||||
Watch [this video for an overview on Multi-App Run](https://youtu.be/s1p9MNl4VGo?t=2456):
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/s1p9MNl4VGo?start=2456" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
|
|
@ -640,7 +640,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
|
|||
|
|
@ -511,7 +511,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
|
|||
|
|
@ -6,10 +6,6 @@ weight: 120
|
|||
description: "Get started with Dapr's resiliency capabilities via the service invocation API"
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Resiliency is currently a preview feature.
|
||||
{{% /alert %}}
|
||||
|
||||
Observe Dapr resiliency capabilities by simulating a system failure. In this Quickstart, you will:
|
||||
|
||||
- Run two microservice applications: `checkout` and `order-processor`. `checkout` will continuously make Dapr service invocation requests to `order-processor`.
|
||||
|
|
@ -59,10 +55,10 @@ pip3 install -r requirements.txt
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-port 8001 --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3501 -- python3 app.py
|
||||
dapr run --app-port 8001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- python3 app.py
|
||||
```
|
||||
|
||||
### Step 3: Run the `checkout` service application with resiliency enabled
|
||||
### Step 3: Run the `checkout` service application
|
||||
|
||||
In a new terminal window, from the root of the Quickstart directory, navigate to the `checkout` directory.
|
||||
|
||||
|
|
@ -76,13 +72,13 @@ Install dependencies:
|
|||
pip3 install -r requirements.txt
|
||||
```
|
||||
|
||||
Run the `checkout` service alongside a Dapr sidecar. The `--config` parameter applies a Dapr configuration that enables the resiliency feature.
|
||||
Run the `checkout` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3500 -- python3 app.py
|
||||
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- python3 app.py
|
||||
```
|
||||
|
||||
By enabling resiliency, the resiliency spec located in the components directory is detected and loaded by the Dapr sidecar:
|
||||
The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -287,10 +283,10 @@ npm install
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-port 5001 --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3501 -- npm start
|
||||
dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start
|
||||
```
|
||||
|
||||
### Step 3: Run the `checkout` service application with resiliency enabled
|
||||
### Step 3: Run the `checkout` service application
|
||||
|
||||
In a new terminal window, from the root of the Quickstart directory,
|
||||
navigate to the `checkout` directory.
|
||||
|
|
@ -305,13 +301,14 @@ Install dependencies:
|
|||
npm install
|
||||
```
|
||||
|
||||
Run the `checkout` service alongside a Dapr sidecar. The `--config` parameter applies a Dapr configuration that enables the resiliency feature.
|
||||
Run the `checkout` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3500 -- npm start
|
||||
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- npm start
|
||||
```
|
||||
|
||||
By enabling resiliency, the resiliency spec located in the components directory is detected and loaded by the Dapr sidecar:
|
||||
The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -450,7 +447,7 @@ Once you restart the `order-processor` service, the application will recover sea
|
|||
In the `order-processor` service terminal, restart the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-port 5001 --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3501 -- npm start
|
||||
dapr run --app-port 5001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- npm start
|
||||
```
|
||||
|
||||
`checkout` service output:
|
||||
|
|
@ -518,10 +515,10 @@ dotnet build
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-port 7001 --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3501 -- dotnet run
|
||||
dapr run --app-port 7001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- dotnet run
|
||||
```
|
||||
|
||||
### Step 3: Run the `checkout` service application with resiliency enabled
|
||||
### Step 3: Run the `checkout` service application
|
||||
|
||||
In a new terminal window, from the root of the Quickstart directory,
|
||||
navigate to the `checkout` directory.
|
||||
|
|
@ -537,13 +534,13 @@ dotnet restore
|
|||
dotnet build
|
||||
```
|
||||
|
||||
Run the `checkout` service alongside a Dapr sidecar. The `--config` parameter applies a Dapr configuration that enables the resiliency feature.
|
||||
Run the `checkout` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3500 -- dotnet run
|
||||
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- dotnet run
|
||||
```
|
||||
|
||||
By enabling resiliency, the resiliency spec located in the components directory is detected and loaded by the Dapr sidecar:
|
||||
The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -718,7 +715,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
@ -751,10 +748,10 @@ mvn clean install
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
### Step 3: Run the `checkout` service application with resiliency enabled
|
||||
### Step 3: Run the `checkout` service application
|
||||
|
||||
In a new terminal window, from the root of the Quickstart directory,
|
||||
navigate to the `checkout` directory.
|
||||
|
|
@ -769,13 +766,14 @@ Install dependencies:
|
|||
mvn clean install
|
||||
```
|
||||
|
||||
Run the `checkout` service alongside a Dapr sidecar. The `--config` parameter applies a Dapr configuration that enables the resiliency feature.
|
||||
Run the `checkout` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
|
||||
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
By enabling resiliency, the resiliency spec located in the components directory is detected and loaded by the Dapr sidecar:
|
||||
The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -914,7 +912,7 @@ Once you restart the `order-processor` service, the application will recover sea
|
|||
In the `order-processor` service terminal, restart the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
dapr run --app-id order-processor --resources-path ../../../resources/ --app-port 9001 --app-protocol http --dapr-http-port 3501 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
`checkout` service output:
|
||||
|
|
@ -980,10 +978,10 @@ go build .
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-port 6001 --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3501 -- go run .
|
||||
dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run .
|
||||
```
|
||||
|
||||
### Step 3: Run the `checkout` service application with resiliency enabled
|
||||
### Step 3: Run the `checkout` service application
|
||||
|
||||
In a new terminal window, from the root of the Quickstart directory,
|
||||
navigate to the `checkout` directory.
|
||||
|
|
@ -998,13 +996,14 @@ Install dependencies:
|
|||
go build .
|
||||
```
|
||||
|
||||
Run the `checkout` service alongside a Dapr sidecar. The `--config` parameter applies a Dapr configuration that enables the resiliency feature.
|
||||
Run the `checkout` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id checkout --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3500 -- go run .
|
||||
dapr run --app-id checkout --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3500 -- go run .
|
||||
```
|
||||
|
||||
By enabling resiliency, the resiliency spec located in the components directory is detected and loaded by the Dapr sidecar:
|
||||
The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -1143,7 +1142,7 @@ Once you restart the `order-processor` service, the application will recover sea
|
|||
In the `order-processor` service terminal, restart the application:
|
||||
|
||||
```bash
|
||||
dapr run --app-port 6001 --app-id order-processor --config ../config.yaml --components-path ../../../components/ --app-protocol http --dapr-http-port 3501 -- go run .
|
||||
dapr run --app-port 6001 --app-id order-processor --resources-path ../../../resources/ --app-protocol http --dapr-http-port 3501 -- go run .
|
||||
```
|
||||
|
||||
`checkout` service output:
|
||||
|
|
|
|||
|
|
@ -6,13 +6,9 @@ weight: 110
|
|||
description: "Get started with Dapr's resiliency capabilities via the state management API"
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Resiliency is currently a preview feature.
|
||||
{{% /alert %}}
|
||||
|
||||
Observe Dapr resiliency capabilities by simulating a system failure. In this Quickstart, you will:
|
||||
|
||||
- Execute a microservice application with resiliency enabled that continuously persists and retrieves state via Dapr's state management API.
|
||||
- Execute a microservice application that continuously persists and retrieves state via Dapr's state management API.
|
||||
- Trigger resiliency policies by simulating a system failure.
|
||||
- Resolve the failure and the microservice application will resume.
|
||||
|
||||
|
|
@ -54,9 +50,10 @@ Install dependencies
|
|||
pip3 install -r requirements.txt
|
||||
```
|
||||
|
||||
### Step 2: Run the application with resiliency enabled
|
||||
### Step 2: Run the application
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. By enabling resiliency, the resiliency spec located in the components directory is loaded by the `order-processor` sidecar. The resilency spec is:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -89,7 +86,7 @@ Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` co
|
|||
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- python3
|
||||
dapr run --app-id order-processor --resources-path ../../../resources/ -- python3
|
||||
```
|
||||
|
||||
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
|
||||
|
|
@ -132,7 +129,7 @@ Once Redis is stopped, the requests begin to fail and the retry policy titled `r
|
|||
INFO[0006] Error processing operation component[statestore] output. Retrying...
|
||||
```
|
||||
|
||||
As per the `retryFroever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
As per the `retryForever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
|
||||
```yaml
|
||||
retryForever:
|
||||
|
|
@ -223,9 +220,10 @@ Install dependencies
|
|||
npm install
|
||||
```
|
||||
|
||||
### Step 2: Run the application with resiliency enabled
|
||||
### Step 2: Run the application
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. By enabling resiliency, the resiliency spec located in the components directory is loaded by the `order-processor` sidecar. The resilency spec is:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -257,7 +255,7 @@ Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` co
|
|||
```
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- npm start
|
||||
dapr run --app-id order-processor --resources-path ../../../resources/ -- npm start
|
||||
```
|
||||
|
||||
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
|
||||
|
|
@ -300,7 +298,7 @@ Once Redis is stopped, the requests begin to fail and the retry policy titled `r
|
|||
INFO[0006] Error processing operation component[statestore] output. Retrying...
|
||||
```
|
||||
|
||||
As per the `retryFroever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
As per the `retryForever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
|
||||
```yaml
|
||||
retryForever:
|
||||
|
|
@ -392,9 +390,9 @@ dotnet restore
|
|||
dotnet build
|
||||
```
|
||||
|
||||
### Step 2: Run the application with resiliency enabled
|
||||
### Step 2: Run the application
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. By enabling resiliency, the resiliency spec located in the components directory is loaded by the `order-processor` sidecar. The resilency spec is:
|
||||
Run the `order-processor` service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -426,7 +424,7 @@ Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` co
|
|||
```
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- dotnet run
|
||||
dapr run --app-id order-processor --resources-path ../../../resources/ -- dotnet run
|
||||
```
|
||||
|
||||
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
|
||||
|
|
@ -469,7 +467,7 @@ Once Redis is stopped, the requests begin to fail and the retry policy titled `r
|
|||
INFO[0006] Error processing operation component[statestore] output. Retrying...
|
||||
```
|
||||
|
||||
As per the `retryFroever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
As per the `retryForever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
|
||||
```yaml
|
||||
retryForever:
|
||||
|
|
@ -536,7 +534,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
@ -563,9 +561,9 @@ Install dependencies
|
|||
mvn clean install
|
||||
```
|
||||
|
||||
### Step 2: Run the application with resiliency enabled
|
||||
### Step 2: Run the application
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. By enabling resiliency, the resiliency spec located in the components directory is loaded by the `order-processor` sidecar. The resilency spec is:
|
||||
Run the `order-processor` service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -597,7 +595,7 @@ Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` co
|
|||
```
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
dapr run --app-id order-processor --resources-path ../../../resources/ -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
|
||||
|
|
@ -640,7 +638,7 @@ Once Redis is stopped, the requests begin to fail and the retry policy titled `r
|
|||
INFO[0006] Error processing operation component[statestore] output. Retrying...
|
||||
```
|
||||
|
||||
As per the `retryFroever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
As per the `retryForever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
|
||||
```yaml
|
||||
retryForever:
|
||||
|
|
@ -731,9 +729,9 @@ Install dependencies
|
|||
go build .
|
||||
```
|
||||
|
||||
### Step 2: Run the application with resiliency enabled
|
||||
### Step 2: Run the application
|
||||
|
||||
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. By enabling resiliency, the resiliency spec located in the components directory is loaded by the `order-processor` sidecar. The resilency spec is:
|
||||
Run the `order-processor` service alongside a Dapr sidecar. The Dapr sidecar then loads the resiliency spec located in the resources directory:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -765,7 +763,7 @@ Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` co
|
|||
```
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components -- go run .
|
||||
dapr run --app-id order-processor --resources-path ../../../resources -- go run .
|
||||
```
|
||||
|
||||
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
|
||||
|
|
@ -808,7 +806,7 @@ Once Redis is stopped, the requests begin to fail and the retry policy titled `r
|
|||
INFO[0006] Error processing operation component[statestore] output. Retrying...
|
||||
```
|
||||
|
||||
As per the `retryFroever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
As per the `retryForever` policy, retries will continue for each failed request indefinitely, in 5 second intervals.
|
||||
|
||||
```yaml
|
||||
retryForever:
|
||||
|
|
|
|||
|
|
@ -357,7 +357,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
|
|||
|
|
@ -391,7 +391,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
|
|||
|
|
@ -387,7 +387,7 @@ For this example, you will need:
|
|||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- Java JDK 11 (or greater):
|
||||
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
|
||||
- [Oracle JDK](https://www.oracle.com/java/technologies/downloads), or
|
||||
- OpenJDK
|
||||
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
|
||||
<!-- IGNORE_LINKS -->
|
||||
|
|
|
|||
|
|
@ -42,9 +42,18 @@ spec:
|
|||
| spec.ignoreErrors | N | Tells the Dapr sidecar to continue initialization if the component fails to load. Default is false | `false`
|
||||
| **spec.metadata** | - | **A key/value pair of component specific configuration. See your component definition for fields**|
|
||||
|
||||
### Special metadata values
|
||||
### Templated metadata values
|
||||
|
||||
Metadata values can contain a `{uuid}` tag that is replaced with a randomly generate UUID when the Dapr sidecar starts up. A new UUID is generated on every start up. It can be used, for example, to have a pod on Kubernetes with multiple application instances consuming a [shared MQTT subscription]({{< ref "setup-mqtt3.md" >}}). Below is an example of using the `{uuid}` tag.
|
||||
Metadata values can contain template tags that are resolved on Dapr sidecar startup. The table below shows the current templating tags that can be used in components.
|
||||
|
||||
| Tag | Details | Example use case |
|
||||
|-------------|--------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| {uuid} | Randomly generated UUIDv4 | When you need a unique identifier in self-hosted mode; for example, multiple application instances consuming a [shared MQTT subscription]({{< ref "setup-mqtt3.md" >}}) |
|
||||
| {podName} | Name of the pod containing the Dapr sidecar | Use when you need persistent behavior, such as keeping the ConsumerID unchanged across restarts when using StatefulSets in Kubernetes |
|
||||
| {namespace} | Namespace where the Dapr sidecar resides combined with its appId | Using a shared `clientId` when multiple application instances consume a Kafka topic in Kubernetes |
|
||||
| {appID} | The configured `appID` of the resource containing the Dapr sidecar | Having a shared `clientId` when multiple application instances consume a Kafka topic in self-hosted mode |
|
||||
|
||||
Below is an example of using the `{uuid}` tag in an MQTT pubsub component. Note that multiple template tags can be used in a single metadata value.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -67,9 +76,6 @@ spec:
|
|||
value: "false"
|
||||
```
|
||||
|
||||
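The other tags can be used the same way. As a hedged sketch (the component name and connection details below are illustrative), a StatefulSet-friendly variant could use the `{podName}` tag so each pod keeps a stable consumer ID across restarts:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.mqtt3
  version: v1
  metadata:
    - name: url
      value: "tcp://admin:public@localhost:1883"
    - name: consumerID
      value: "{podName}"
```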
The consumerID metadata values can also contain a `{podName}` tag that is replaced with the Kubernetes POD's name when the Dapr sidecar starts up. This can be used to have a persisted behavior where the ConsumerID does not change on restart when using StatefulSets in Kubernetes.
|
||||
|
||||
|
||||
## Further reading
|
||||
- [Components concept]({{< ref components-concept.md >}})
|
||||
- [Reference secrets in component definitions]({{< ref component-secrets.md >}})
|
||||
|
|
|
|||
|
|
@ -6,22 +6,22 @@ weight: 2000
|
|||
description: "Customize processing pipelines by adding middleware components"
|
||||
---
|
||||
|
||||
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. There are two places that you can use a middleware pipeline;
|
||||
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. There are two places that you can use a middleware pipeline:
|
||||
|
||||
1) Building block APIs - HTTP middleware components are executed when invoking any Dapr HTTP APIs.
|
||||
2) Service-to-Service invocation - HTTP middleware components are applied to service-to-service invocation calls.
|
||||
1. Building block APIs - HTTP middleware components are executed when invoking any Dapr HTTP APIs.
|
||||
2. Service-to-Service invocation - HTTP middleware components are applied to service-to-service invocation calls.
|
||||
|
||||
## Configure API middleware pipelines
|
||||
|
||||
When launched, a Dapr sidecar constructs a middleware processing pipeline for incoming HTTP calls. By default, the pipeline consists of [tracing middleware]({{< ref tracing-overview.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, secrets, configuration, distributed lock, and others.
|
||||
When launched, a Dapr sidecar constructs a middleware processing pipeline for incoming HTTP calls. By default, the pipeline consists of the [tracing]({{< ref tracing-overview.md >}}) and CORS middlewares. Additional middlewares, configured by a Dapr [Configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, secrets, configuration, distributed lock, etc.
|
||||
|
||||
A request goes through all the defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram.
|
||||
|
||||
<img src="/images/middleware.png" width=800>
|
||||
<img src="/images/middleware.png" width="800" alt="Diagram showing the flow of a request and a response through the middlewares, as described in the paragraph above" />
|
||||
|
||||
HTTP middleware components are executed when invoking Dapr HTTP APIs using the `httpPipeline` configuration.
|
||||
|
||||
The following configuration example defines a custom pipeline that uses a [OAuth 2.0 middleware]({{< ref middleware-oauth2.md >}}) and an [uppercase middleware component]({{< ref middleware-uppercase.md >}}). In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.
|
||||
The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware]({{< ref middleware-oauth2.md >}}) and an [uppercase middleware component]({{< ref middleware-uppercase.md >}}). In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -38,19 +38,19 @@ spec:
|
|||
type: middleware.http.uppercase
|
||||
```
|
||||
|
||||
As with other components, middleware components can be found in the [supported Middleware reference]({{< ref supported-middleware >}}) and in the [components-contrib repo](https://github.com/dapr/components-contrib/tree/master/middleware/http).
|
||||
As with other components, middleware components can be found in the [supported Middleware reference]({{< ref supported-middleware >}}) and in the [`dapr/components-contrib` repo](https://github.com/dapr/components-contrib/tree/master/middleware/http).
|
||||
|
||||
{{< button page="supported-middleware" text="See all middleware components">}}
|
||||
|
||||
## Configure app middleware pipelines
|
||||
|
||||
You can also use any middleware components when making service-to-service invocation calls. For example, for token validation in a zero-trust environment, a request transformation for a specific app endpoint, or to apply OAuth policies.
|
||||
You can also use any middleware component when making service-to-service invocation calls. For example, to add token validation in a zero-trust environment, to transform a request for a specific app endpoint, or to apply OAuth policies.
|
||||
|
||||
Service-to-service invocation middleware components apply to all outgoing calls from Dapr sidecar to the receiving application (service) as shown in the diagram below.
|
||||
Service-to-service invocation middleware components apply to all **outgoing** calls from a Dapr sidecar to the receiving application (service), as shown in the diagram below.
|
||||
|
||||
<img src="/images/app-middleware.png" width=800>
|
||||
<img src="/images/app-middleware.png" width="800" alt="Diagram showing the flow of a service invocation request. Requests from the callee Dapr sidecar to the callee application go through the app middleware pipeline as described in the paragraph above." />
|
||||
|
||||
Any middleware component that can be used as HTTP middleware can also be applied to service-to-service invocation calls as a middleware component using the `appHttpPipeline` configuration. The example below adds the `uppercase` middleware component for all outgoing calls from the Dapr sidecar (target of service invocation) to the application that this configuration is applied to.
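A minimal sketch of such a configuration (resource name illustrative); the only difference from an HTTP pipeline is the `appHttpPipeline` key:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appmiddleware
spec:
  appHttpPipeline:
    handlers:
      - name: uppercase
        type: middleware.http.uppercase
```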
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
|
|||
|
|
@ -57,7 +57,7 @@ Since you are running Dapr in the same host as the component, verify this folder
|
|||
|
||||
Define your component using a [component spec]({{< ref component-schema.md >}}). Your component's `type` is derived from the socket name, without the file extension.
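For example, a hypothetical pluggable state store whose socket file is `my-component.sock` could be declared with a spec like this sketch:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: prod-mystore
spec:
  type: state.my-component  # "state." prefix + socket name without the .sock extension
  version: v1
```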
|
||||
|
||||
Save the component YAML file in the resources-path, replacing:
|
||||
|
||||
- `your_socket_goes_here` with your component socket name (no extension)
|
||||
- `your_component_type` with your component type
|
||||
|
|
@ -132,7 +132,7 @@ Follow the steps provided in the [Deploy Dapr on a Kubernetes cluster]({{< ref k
|
|||
|
||||
## Add the pluggable component container in your deployments
|
||||
|
||||
Pluggable components are deployed as containers **in the same pod** as your application.
|
||||
|
||||
Since pluggable components are backed by [Unix Domain Sockets][uds], make the socket created by your pluggable component accessible by Dapr runtime. Configure the deployment spec to:
|
||||
|
||||
|
|
@ -140,7 +140,7 @@ Since pluggable components are backed by [Unix Domain Sockets][uds], make the so
|
|||
2. Hint to Dapr the mounted Unix socket volume location
|
||||
3. Attach volume to your pluggable component container
|
||||
|
||||
In the following example, your configured pluggable component is deployed as a container within the same pod as your application container.
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
|
|
@ -167,17 +167,51 @@ spec:
|
|||
        - name: dapr-unix-domain-socket
          emptyDir: {}
      containers:
        ### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
        - name: app
          image: YOUR_APP_IMAGE:YOUR_APP_IMAGE_VERSION
        ### --------------------- YOUR PLUGGABLE COMPONENT CONTAINER GOES HERE -----------
        - name: component
          image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION
          volumeMounts: # required, the sockets volume mount
            - name: dapr-unix-domain-socket
              mountPath: /tmp/dapr-components-sockets
|
||||
```
|
||||
|
||||
Alternatively, you can annotate your pods, telling Dapr which containers within that pod are pluggable components, like in the example below:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      annotations:
        dapr.io/pluggable-components: "component" ## comma-separated names of the pluggable component containers, e.g. "componentA,componentB"
        dapr.io/app-id: "my-app"
        dapr.io/enabled: "true"
    spec:
      containers:
        ### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
        - name: app
          image: YOUR_APP_IMAGE:YOUR_APP_IMAGE_VERSION
        ### --------------------- YOUR PLUGGABLE COMPONENT CONTAINER GOES HERE -----------
        - name: component
          image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION
|
||||
```
|
||||
|
||||
Before applying the deployment, let's add one more configuration: the component spec.
|
||||
|
||||
## Define a component
|
||||
|
|
|
|||
|
|
@ -117,6 +117,7 @@ The `logging` section under the `Configuration` spec contains the following prop
|
|||
logging:
|
||||
apiLogging:
|
||||
enabled: false
|
||||
obfuscateURLs: false
|
||||
omitHealthChecks: false
|
||||
```
|
||||
|
||||
|
|
@ -125,6 +126,7 @@ The following table lists the properties for logging:
|
|||
| Property | Type | Description |
|
||||
|--------------|--------|-------------|
|
||||
| `apiLogging.enabled` | boolean | The default value for the `--enable-api-logging` flag for `daprd` (and the corresponding `dapr.io/enable-api-logging` annotation): the value set in the Configuration spec is used as default unless a `true` or `false` value is passed to each Dapr Runtime. Default: `false`.
|
||||
| `apiLogging.obfuscateURLs` | boolean | When enabled, obfuscates the values of URLs in HTTP API logs (if enabled), logging the abstract route name rather than the full path being invoked, which could contain Personal Identifiable Information (PII). Default: `false`.
|
||||
| `apiLogging.omitHealthChecks` | boolean | If `true`, calls to health check endpoints (e.g. `/v1.0/healthz`) are not logged when API logging is enabled. This is useful if those calls are adding a lot of noise in your logs. Default: `false`
|
||||
|
||||
See [logging documentation]({{< ref "logs.md" >}}) for more information.
|
||||
|
|
|
|||
|
|
@ -6,11 +6,13 @@ weight: 40000
|
|||
description: "Recommendations and practices for deploying Dapr to a Kubernetes cluster in a production-ready configuration"
|
||||
---
|
||||
|
||||
## Cluster and capacity requirements
|
||||
|
||||
Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).
|
||||
|
||||
For a production-ready Kubernetes cluster deployment, we recommend you run a cluster of at least 3 worker nodes to support a highly-available control plane installation.
|
||||
|
||||
Use the following resource settings as a starting point. Requirements will vary depending on cluster size, number of pods, and other factors, so you should perform individual testing to find the right values for your environment:
|
||||
|
||||
| Deployment | CPU | Memory
|
||||
|-------------|-----|-------
|
||||
|
|
@ -196,6 +198,20 @@ It is recommended that a production-ready deployment includes the following sett
|
|||
|
||||
6. Dapr also supports **scoping components for certain applications**. This is not a required practice, and can be enabled according to your security needs. See [here]({{< ref "component-scopes.md" >}}) for more info.
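For example, a component can be limited to specific app IDs with a top-level `scopes` section; a sketch with illustrative names:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
scopes:
  - checkout-service
  - orders-service
```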
|
||||
|
||||
## Service account tokens
|
||||
|
||||
By default, Kubernetes mounts a volume containing a [Service Account token](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in each container. Applications can use this token, whose permissions vary depending on the configuration of the cluster and namespace, among other things, to perform API calls against the Kubernetes control plane.
|
||||
|
||||
When creating a new Pod (or a Deployment, StatefulSet, Job, etc), you can disable auto-mounting the Service Account token by setting `automountServiceAccountToken: false` in your pod's spec.
|
||||
|
||||
It is recommended that you consider deploying your apps with `automountServiceAccountToken: false` to improve the security posture of your pods, unless your apps depend on having a Service Account token. For example, you may need a Service Account token if:
|
||||
|
||||
- You are using Dapr components that interact with the Kubernetes APIs, for example the [Kubernetes secret store]({{< ref "kubernetes-secret-store.md" >}}) or the [Kubernetes Events binding]({{< ref "kubernetes-binding.md" >}}).
|
||||
Note that initializing Dapr components using [component secrets]({{< ref "component-secrets.md" >}}) stored as Kubernetes secrets does **not** require a Service Account token, so you can still set `automountServiceAccountToken: false` in this case. Only calling the Kubernetes secret store at runtime, using the [Secrets management]({{< ref "secrets-overview.md" >}}) building block, is impacted.
|
||||
- Your own application needs to interact with the Kubernetes APIs.
|
||||
|
||||
Because of the reasons above, Dapr does not set `automountServiceAccountToken: false` automatically for you. However, in all situations where the Service Account is not required by your solution, it is recommended that you set this option in the pods spec.
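As a minimal sketch, disabling the token mount in a Deployment's pod template looks like this (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      automountServiceAccountToken: false  # do not mount the Service Account token volume
      containers:
        - name: my-app
          image: my-app:latest
```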
|
||||
|
||||
## Tracing and metrics configuration
|
||||
|
||||
Dapr has tracing and metrics enabled by default. It is *recommended* that you set up distributed tracing and metrics for your applications and the Dapr control plane in production.
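For example, a Dapr Configuration that samples all traces and keeps metrics enabled might look like the following sketch; the Zipkin endpoint is a placeholder:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
  metric:
    enabled: true
```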
|
||||
|
|
|
|||
|
|
@ -72,6 +72,30 @@ spec:
|
|||
enabled: false
|
||||
```
|
||||
|
||||
## High cardinality metrics
|
||||
|
||||
Depending on your use case, some metrics emitted by Dapr might contain values that have a high cardinality. This might cause increased memory usage for the Dapr process/container and incur expensive egress costs in certain cloud environments. To mitigate this issue, you can set regular expressions for every metric exposed by the Dapr sidecar. [See a list of all Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md).
|
||||
|
||||
The following example shows how to apply a regular expression for the label `method` in the metric `dapr_runtime_service_invocation_req_sent_total`:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: daprConfig
|
||||
spec:
|
||||
metric:
|
||||
enabled: true
|
||||
rules:
|
||||
- name: dapr_runtime_service_invocation_req_sent_total
|
||||
labels:
|
||||
- name: method
|
||||
regex:
|
||||
"orders/": "orders/.+"
|
||||
```
|
||||
|
||||
When this configuration is applied, a recorded metric with the `method` label of `orders/a746dhsk293972nz` will be replaced with `orders/`.
|
||||
|
||||
## References
|
||||
|
||||
* [Howto: Run Prometheus locally]({{< ref prometheus.md >}})
|
||||
|
|
|
|||
|
|
@ -3,10 +3,10 @@ type: docs
|
|||
title: "Policies"
|
||||
linkTitle: "Policies"
|
||||
weight: 4500
|
||||
description: "Configure resiliency policies for timeouts, retries and circuit breakers"
|
||||
description: "Configure resiliency policies for timeouts, retries, and circuit breakers"
|
||||
---
|
||||
|
||||
Define timeouts, retries, and circuit breaker policies under `policies`. Each policy is given a name so you can refer to them from the `targets` section in the resiliency spec.
|
||||
|
||||
> Note: Dapr offers default retries for specific APIs. [See here]({{< ref "#override-default-retries" >}}) to learn how you can overwrite default retry logic with user defined retry policies.
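As a sketch of the shape of the `policies` section (names and values are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    timeouts:
      general: 5s
    retries:
      importantRetries:
        policy: constant
        duration: 5s
        maxRetries: 10
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 30s
        trip: consecutiveFailures >= 5
```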
|
||||
|
||||
|
|
@ -285,4 +285,10 @@ The table below is a break down of which policies are applied when attempting to
|
|||
| statestore | DefaultStatestoreComponentOutboundRetryPolicy |
|
||||
| actorstore | fastRetries |
|
||||
| EventActor | retryForever |
|
||||
| SummaryActor | DefaultActorRetryPolicy |
|
||||
|
||||
## Next steps
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
||||
|
|
@ -5,11 +5,8 @@ linkTitle: "Overview"
|
|||
weight: 4500
|
||||
description: "Configure Dapr retries, timeouts, and circuit breakers"
|
||||
---
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Resiliency is currently a preview feature. Before you can utilize a resiliency spec, you must first [enable the resiliency preview feature]({{< ref support-preview-features >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Dapr provides a capability for defining and applying fault tolerance resiliency policies via a [resiliency spec]({{< ref "resiliency-overview.md#complete-example-policy" >}}). Resiliency specs are saved in the same location as components specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls. In self-hosted mode, the resiliency spec must be named `resiliency.yaml`. In Kubernetes Dapr finds the named resiliency specs used by your application. Within the resiliency spec, you can define policies for popular resiliency patterns, such as:
|
||||
|
||||
- [Timeouts]({{< ref "policies.md#timeouts" >}})
|
||||
- [Retries/back-offs]({{< ref "policies.md#retries" >}})
|
||||
|
|
@ -171,3 +168,9 @@ Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184
|
|||
|
||||
- [Policies]({{< ref "policies.md" >}})
|
||||
- [Targets]({{< ref "targets.md" >}})
|
||||
|
||||
## Next steps
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
||||
|
|
@ -7,6 +7,7 @@ description: "Apply resiliency policies to apps, components and actors"
|
|||
---
|
||||
|
||||
### Targets
|
||||
|
||||
Named policies are applied to targets. Dapr supports three target types that apply to all Dapr building block APIs:
|
||||
- `apps`
|
||||
- `components`
|
||||
|
|
@ -129,4 +130,10 @@ spec:
|
|||
circuitBreaker: general
|
||||
circuitBreakerScope: both
|
||||
circuitBreakerCacheSize: 5000
|
||||
```
|
||||
```
|
||||
|
||||
## Next steps
|
||||
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
||||
|
|
@ -14,9 +14,35 @@ For CLI there is no explicit opt-in, just the version that this was first made a
|
|||
## Current preview features
|
||||
|
||||
| Feature | Description | Setting | Documentation | Version introduced |
|
||||
| ---------- |-------------|---------|---------------|-----------------|
|
||||
| **`--image-registry`** flag in Dapr CLI| In self-hosted mode you can set this flag to specify any private registry to pull the container images required to install Dapr| N/A | [CLI init command reference]({{<ref "dapr-init.md#self-hosted-environment" >}}) | v1.7 |
|
||||
| **Resiliency** | Allows configuring fine-grained policies for retries, timeouts, and circuit breaking. | `Resiliency` | [Configure Resiliency Policies]({{<ref "resiliency-overview">}}) | v1.7|
|
||||
| **App Middleware** | Allow middleware components to be executed when making service-to-service calls | N/A | [App Middleware]({{<ref "middleware.md#app-middleware" >}}) | v1.9 |
|
||||
| **Streaming for HTTP service invocation** | Enables (partial) support for using streams in HTTP service invocation; see below for more details. | `ServiceInvocationStreaming` | [Details]({{< ref "support-preview-features.md#streaming-for-http-service-invocation" >}}) | v1.10 |
|
||||
| **App health checks** | Allows configuring app health checks | `AppHealthCheck` | [App health checks]({{<ref "app-health.md" >}}) | v1.9 |
|
||||
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{<ref "components-concept#pluggable-components" >}})| v1.9 |
|
||||
| **Multi-App Run** | Configure multiple Dapr applications from a single configuration file and run from a single command | `dapr run -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.10 |
|
||||
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
|
||||
|
||||
### Streaming for HTTP service invocation
|
||||
|
||||
Running Dapr with the `ServiceInvocationStreaming` feature flag enables partial support for handling data as a stream in HTTP service invocation. This can offer improvements in performance and memory utilization when using Dapr to invoke another service using HTTP with large request or response bodies.
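To turn the flag on, you would typically add it under `features` in the caller application's Dapr Configuration; a minimal sketch (resource name illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: streaming-config
spec:
  features:
    - name: ServiceInvocationStreaming
      enabled: true
```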
|
||||
|
||||
The table below summarizes the current state of support for streaming in HTTP service invocation in Dapr, including the impact of enabling `ServiceInvocationStreaming`, in the example where "app A" is invoking "app B" using Dapr. There are six steps in the data flow, with various levels of support for handling data as a stream:
|
||||
|
||||
<img src="/images/service-invocation-simple.webp" width=600 alt="Diagram showing the steps of service invocation described in the table below" />
|
||||
|
||||
| Step | Handles data as a stream | Dapr 1.10 | Dapr 1.10 with<br/>`ServiceInvocationStreaming` |
|
||||
|:---:|---|:---:|:---:|
|
||||
| 1 | Request: "App A" to "Dapr sidecar A | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="No">❌</span> |
|
||||
| 2 | Request: "Dapr sidecar A" to "Dapr sidecar B | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 3 | Request: "Dapr sidecar B" to "App B" | <span role="img" aria-label="Yes">✅</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 4 | Response: "App B" to "Dapr sidecar B" | <span role="img" aria-label="Yes">✅</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 5 | Response: "Dapr sidecar B" to "Dapr sidecar A | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 6 | Response: "Dapr sidecar A" to "App A | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
|
||||
Important notes:
|
||||
|
||||
- `ServiceInvocationStreaming` needs to be applied on caller sidecars only.
|
||||
In the example above, streams are used for HTTP service invocation if `ServiceInvocationStreaming` is applied to the configuration of "app A" and its Dapr sidecar, regardless of whether the feature flag is enabled for "app B" and its sidecar.
|
||||
- When `ServiceInvocationStreaming` is enabled, you should make sure that all services your app invokes using Dapr ("app B") are updated to Dapr 1.10, even if `ServiceInvocationStreaming` is not enabled for those sidecars.
|
||||
Invoking an app using Dapr 1.9 or older is still possible, but those calls may fail if you have applied a Dapr Resiliency policy with retries enabled.
|
||||
|
||||
> Full support for streaming for HTTP service invocation will be completed in a future Dapr version.
|
||||
|
|
|
|||
|
|
@ -40,11 +40,11 @@ $ dapr run --enable-api-logging -- node myapp.js
|
|||
ℹ️ Starting Dapr with id order-processor on port 56730
|
||||
✅ You are up and running! Both Dapr and your app logs will appear here.
|
||||
.....
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="POST /v1.0/state/mystate" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
== APP == INFO:root:Saving Order: {'orderId': '483'}
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="GET /v1.0/state/mystate/key123" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
== APP == INFO:root:Getting Order: {'orderId': '483'}
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="DELETE /v1.0/state/mystate" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
== APP == INFO:root:Deleted Order: {'orderId': '483'}
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="PUT /v1.0/metadata/cliPID" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
```
|
||||
|
|
@ -68,7 +68,7 @@ See the kubernetes API logs by executing the below command.
|
|||
kubectl logs <pod_name> daprd -n <name_space>
|
||||
```
|
||||
|
||||
The example below shows `info` level API logging in Kubernetes (with [URL obfuscation](#obfuscate-urls-in-http-api-logging) enabled).
|
||||
|
||||
```bash
|
||||
time="2022-03-16T18:32:02.487041454Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
|
|
@ -98,6 +98,22 @@ logging:
|
|||
enabled: true
|
||||
```
|
||||
|
||||
### Obfuscate URLs in HTTP API logging
|
||||
|
||||
By default, logs for API calls in the HTTP endpoints include the full URL being invoked (for example, `POST /v1.0/invoke/directory/method/user-123`), which could contain Personal Identifiable Information (PII).
|
||||
|
||||
To reduce the risk of PII being accidentally included in API logs (when enabled), Dapr can instead log the abstract route being invoked (for example, `POST /v1.0/invoke/{id}/method/{method:*}`). This can help ensuring compliance with privacy regulations such as GDPR.
|
||||
|
||||
To enable obfuscation of URLs in Dapr's HTTP API logs, set `logging.apiLogging.obfuscateURLs` to `true`. For example:
|
||||
|
||||
```yaml
|
||||
logging:
|
||||
apiLogging:
|
||||
obfuscateURLs: true
|
||||
```
|
||||
|
||||
Logs emitted by the Dapr gRPC APIs are not impacted by this configuration option, as they only include the name of the method invoked and no arguments.
|
||||
|
||||
### Omit health checks from API logging
|
||||
|
||||
When API logging is enabled, all calls to the Dapr API server are logged, including those to health check endpoints (e.g. `/v1.0/healthz`). Depending on your environment, this may generate multiple log lines per minute and could create unwanted noise.
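Following the same pattern as the obfuscation setting above, these calls can be omitted with the `omitHealthChecks` property; a minimal sketch:

```yaml
logging:
  apiLogging:
    omitHealthChecks: true
```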
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ type: docs
|
|||
title: "Configuration API reference"
|
||||
linkTitle: "Configuration API"
|
||||
description: "Detailed documentation on the configuration API"
|
||||
weight: 650
|
||||
weight: 700
|
||||
---
|
||||
|
||||
## Get Configuration
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ type: docs
|
|||
title: "Distributed Lock API reference"
|
||||
linkTitle: "Distributed Lock API"
|
||||
description: "Detailed documentation on the distributed lock API"
|
||||
weight: 660
|
||||
weight: 800
|
||||
---
|
||||
|
||||
## Lock
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ type: docs
|
|||
title: "Error codes returned by APIs"
|
||||
linkTitle: "Error codes"
|
||||
description: "Detailed reference of the Dapr API error codes"
|
||||
weight: 1000
|
||||
weight: 1200
|
||||
---
|
||||
|
||||
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, e.g.
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ type: docs
|
|||
title: "Health API reference"
|
||||
linkTitle: "Health API"
|
||||
description: "Detailed documentation on the health API"
|
||||
weight: 700
|
||||
weight: 1000
|
||||
---
|
||||
|
||||
Dapr provides health checking probes that can be used as readiness or liveness probes for Dapr.
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ type: docs
|
|||
title: "Metadata API reference"
|
||||
linkTitle: "Metadata API"
|
||||
description: "Detailed documentation on the Metadata API"
|
||||
weight: 800
|
||||
weight: 1100
|
||||
---
|
||||
|
||||
Dapr has a metadata API that returns information about the sidecar allowing runtime discoverability. The metadata endpoint returns a list of the components loaded, the activated actors (if present) and attributes with information attached.
|
||||
|
|
|
|||
|
|
@ -64,6 +64,90 @@ Parameter | Description
|
|||
|
||||
> Additional metadata parameters are available based on each pubsub component.
|
||||
|
||||
## Publish multiple messages to a given topic
|
||||
|
||||
This endpoint lets you publish multiple messages to consumers who are listening on a `topic`.
|
||||
|
||||
### HTTP Request
|
||||
|
||||
```
|
||||
POST http://localhost:<daprPort>/v1.0-alpha1/publish/bulk/<pubsubname>/<topic>[?<metadata>]
|
||||
```
|
||||
|
||||
The request body should contain a JSON array of entries with:
|
||||
- Unique entry IDs
|
||||
- The event to publish
|
||||
- The content type of the event
|
||||
|
||||
If the content type for an event is not `application/cloudevents+json`, it is auto-wrapped as a CloudEvent (unless `metadata.rawPayload` is set to `true`).
|
||||
|
||||
Example:
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:3500/v1.0-alpha1/publish/bulk/pubsubName/deathStarStatus \
|
||||
-H 'Content-Type: application/json' \
|
||||
-d '[
|
||||
{
|
||||
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
|
||||
"event": "first text message",
|
||||
"contentType": "text/plain"
|
||||
},
|
||||
{
|
||||
"entryId": "b1f40bd6-4af2-11ed-b878-0242ac120002",
|
||||
"event": {
|
||||
"message": "second JSON message"
|
||||
},
|
||||
"contentType": "application/json"
|
||||
}
|
||||
]'
|
||||
```
|
||||
|
||||
### Headers
|
||||
|
||||
The `Content-Type` header should always be set to `application/json` since the request body is a JSON array.
|
||||
|
||||
### URL Parameters
|
||||
|
||||
|**Parameter**|**Description**|
|
||||
|--|--|
|
||||
|`daprPort`|The Dapr port|
|
||||
|`pubsubname`|The name of pub/sub component|
|
||||
|`topic`|The name of the topic|
|
||||
|`metadata`|Query parameters for [metadata]({{< ref "pubsub_api.md#metadata" >}})|
|
||||
|
||||
### Metadata
|
||||
|
||||
Metadata can be sent via query parameters in the request's URL. It must be prefixed with `metadata.`, as shown in the table below.
|
||||
|
||||
|**Parameter**|**Description**|
|
||||
|--|--|
|
||||
|`metadata.rawPayload`|Boolean to determine if Dapr should publish the messages without wrapping them as CloudEvent.|
|
||||
|`metadata.maxBulkPubBytes`|Maximum bytes to publish in a bulk publish request.|
|
||||
|
||||
|
||||
#### HTTP Response
|
||||
|
||||
|**HTTP Status**|**Description**|
|
||||
|--|--|
|
||||
|204|All messages delivered|
|
||||
|400|Pub/sub does not exist|
|
||||
|403|Forbidden by access controls|
|
||||
|500|At least one message failed to be delivered|
|
||||
|
||||
In case of a 500 status code, the response body will contain a JSON object containing a list of entries that failed to be delivered. For example from our request above, if the entry with event `"first text message"` failed to be delivered, the response would contain its entry ID and an error message from the underlying pub/sub component.
|
||||
|
||||
```json
|
||||
{
|
||||
"failedEntries": [
|
||||
{
|
||||
"entryId": "ae6bf7c6-4af2-11ed-b878-0242ac120002",
|
||||
"error": "some error message"
|
||||
}
|
||||
],
|
||||
"errorCode": "ERR_PUBSUB_PUBLISH_MESSAGE"
|
||||
}
|
||||
```
|
||||
|
||||
## Optional Application (User Code) Routes
|
||||
|
||||
### Provide a route for Dapr to discover topic subscriptions
|
||||
|
|
@ -161,6 +245,46 @@ HTTP Status | Description
|
|||
404 | error is logged and message is dropped
|
||||
other | warning is logged and message to be retried
|
||||
|
||||
## Subscribe multiple messages from a given topic
|
||||
|
||||
This allows you to subscribe to multiple messages from a broker when listening to a `topic`.
|
||||
|
||||
In order to receive messages in a bulk manner for a topic subscription, the application:
|
||||
|
||||
- Needs to opt for `bulkSubscribe` while sending list of topics to be subscribed to
|
||||
- Optionally, can configure `maxMessagesCount` and/or `maxAwaitDurationMs`
|
||||
Refer to the [Send and receive messages in bulk]({{< ref pubsub-bulk.md >}}) guide for more details on how to opt in.
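As a hypothetical sketch only (the exact subscription schema is covered in that guide), a declarative subscription opting into bulk delivery might look like:

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-sub
spec:
  pubsubname: order-pub-sub
  topic: orders
  routes:
    default: /checkout
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 100
    maxAwaitDurationMs: 40
```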
|
||||
|
||||
#### Expected HTTP Response for Bulk Subscribe
|
||||
|
||||
An HTTP 2xx response denotes that entries (individual messages) inside this bulk message have been processed by the application and Dapr will now check each EntryId status.
|
||||
A JSON-encoded payload body with the processing status against each entry needs to be sent:
|
||||
|
||||
```json
|
||||
{
|
||||
"statuses": {
|
||||
"entryId": "<entryId>",
|
||||
"status": "<status>"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
> Note: If an EntryId status is not found by Dapr in a response received from the application, that entry's status is considered `RETRY`.
|
||||
|
||||
Status | Description
|
||||
--------- | -----------
|
||||
`SUCCESS` | Message is processed successfully
|
||||
`RETRY` | Message to be retried by Dapr
|
||||
`DROP` | Warning is logged and message is dropped
|
||||
|
||||
The HTTP response might be different from HTTP 2xx. The following are Dapr's behavior in different HTTP statuses:
|
||||
|
||||
HTTP Status | Description
|
||||
--------- | -----------
|
||||
2xx | message is processed as per status in payload.
|
||||
404 | error is logged and all messages are dropped
|
||||
other | warning is logged and all messages to be retried
|
||||
|
||||
## Message envelope
|
||||
|
||||
Dapr pub/sub adheres to version 1.0 of CloudEvents.
|
||||
|
|
|
|||
|
|
@ -0,0 +1,120 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Workflow API reference"
|
||||
linkTitle: "Workflow API"
|
||||
description: "Detailed documentation on the workflow API"
|
||||
weight: 900
|
||||
---
|
||||
## Component format
|
||||
|
||||
A Dapr `workflow.yaml` component file has the following structure:
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: workflow.<TYPE>
|
||||
version: v1.0-alpha1
|
||||
metadata:
|
||||
- name: <NAME>
|
||||
value: <VALUE>
|
||||
```
|
||||
| Setting | Description |
|
||||
| ------- | ----------- |
|
||||
| `metadata.name` | The name of the workflow component. |
|
||||
| `spec/metadata` | Additional metadata parameters specified by workflow component |
|
||||
|
||||
|
||||
|
||||
## Supported workflow methods
|
||||
|
||||
### POST start workflow request
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
|
||||
```
|
||||
### POST terminate workflow request
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
|
||||
```
|
||||
### GET workflow request
|
||||
```bash
|
||||
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
|
||||
```
|
||||
|
||||
### URL parameters
|
||||
|
||||
Parameter | Description
|
||||
--------- | -----------
|
||||
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
|
||||
`workflowName` | Identify the workflow type
|
||||
`instanceId` | Unique value created for each run of a specific workflow
|
||||
|
||||
|
||||
### Headers
|
||||
|
||||
As part of the start HTTP request, the caller can optionally include one or more `dapr-workflow-metadata` HTTP request headers. The format of the header value is a list of `{key}={value}` values, similar to the format for HTTP cookie request headers. These key/value pairs are saved in the workflow instance’s metadata and can be made available for search (in cases where the workflow implementation supports this kind of search).
|
||||
|
||||
|
||||
## HTTP responses
|
||||
|
||||
### Response codes
|
||||
|
||||
Code | Description
|
||||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
|
||||
### Examples of response body for each method
|
||||
|
||||
#### POST start workflow response body
|
||||
|
||||
```bash
|
||||
"WFInfo": {
|
||||
"instance_id": "SampleWorkflow"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
#### POST terminate workflow response body
|
||||
|
||||
```bash
|
||||
HTTP/1.1 202 Accepted
|
||||
Server: fasthttp
|
||||
Date: Thu, 12 Jan 2023 21:31:16 GMT
|
||||
Content-Type: application/json
|
||||
Content-Length: 139
|
||||
Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
|
||||
Connection: close
|
||||
```
|
||||
|
||||
|
||||
### GET workflow response body
|
||||
|
||||
```bash
|
||||
HTTP/1.1 202 Accepted
|
||||
Server: fasthttp
|
||||
Date: Thu, 12 Jan 2023 21:31:16 GMT
|
||||
Content-Type: application/json
|
||||
Content-Length: 139
|
||||
Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
|
||||
Connection: close
|
||||
|
||||
{
|
||||
"WFInfo": {
|
||||
"instance_id": "SampleWorkflow"
|
||||
},
|
||||
"start_time": "2023-01-12T21:31:13Z",
|
||||
"metadata": {
|
||||
"status": "Running",
|
||||
"task_queue": "WorkflowSampleQueue"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [Workflow API overview]({{< ref workflow-overview.md >}})
|
||||
- [Route user to workflow patterns ](todo)
|
||||
|
|
@ -23,18 +23,19 @@ dapr run [flags] [command]
|
|||
|
||||
| Name | Environment Variable | Default | Description |
|
||||
| ------------------------------ | -------------------- | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
|
||||
| `--app-id`, `-a` | `APP_ID` | | The id for your application, used for service discovery. Cannot contain dots. |
|
||||
| `--app-max-concurrency` | | `unlimited` | The concurrency level of the application; default is unlimited |
|
||||
| `--app-port`, `-p` | `APP_PORT` | | The port your application is listening on |
|
||||
| `--app-protocol`, `-P` | | `http` | The protocol Dapr uses to talk to the application. Valid values are: `http` or `grpc` |
|
||||
| `--app-ssl` | | `false` | Enable https when Dapr invokes the application |
|
||||
| `--resources-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | The path for components directory |
|
||||
| `--runtime-path` | | | Dapr runtime install path |
|
||||
| `--config`, `-c` | | Linux/Mac: `$HOME/.dapr/config.yaml` <br/>Windows: `%USERPROFILE%\.dapr\config.yaml` | Dapr configuration file |
|
||||
| `--dapr-grpc-port` | `DAPR_GRPC_PORT` | `50001` | The gRPC port for Dapr to listen on |
|
||||
| `--dapr-http-port` | `DAPR_HTTP_PORT` | `3500` | The HTTP port for Dapr to listen on |
|
||||
| `--enable-profiling` | | `false` | Enable "pprof" profiling via an HTTP endpoint |
|
||||
| `--help`, `-h` | | | Print the help message |
|
||||
| `--run-file`, `-f` | | Linux/MacOS: `$HOME/.dapr/dapr.yaml` | Run multiple applications at once using a Multi-App Run template file. Currently in [alpha]({{< ref "support-preview-features.md" >}}) and only available on Linux/MacOS |
|
||||
| `--image` | | | Use a custom Docker image. Format is `repository/image` for Docker Hub, or `example.com/repository/image` for a custom registry. |
|
||||
| `--log-level` | | `info` | The log verbosity. Valid values are: `debug`, `info`, `warn`, `error`, `fatal`, or `panic` |
|
||||
| `--enable-api-logging` | | `false` | Enable the logging of all API calls from application to Dapr |
|
||||
|
|
@ -46,8 +47,9 @@ dapr run [flags] [command]
|
|||
| `--app-health-probe-timeout` | | | Timeout for app health probes in milliseconds |
|
||||
| `--app-health-threshold` | | | Number of consecutive failures for the app to be considered unhealthy |
|
||||
| `--unix-domain-socket`, `-u` | | | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. |
|
||||
| `--dapr-http-max-request-size` | | `4` | Max size of the request body in MB. |
|
||||
| `--dapr-http-read-buffer-size` | | `4` | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. The default 4 KB |
|
||||
| `--components-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | **Deprecated** in favor of `--resources-path` |
|
||||
|
||||
### Examples
|
||||
|
||||
|
|
|
|||
|
|
@ -21,10 +21,11 @@ dapr stop [flags]
|
|||
|
||||
### Flags
|
||||
|
||||
| Name | Environment Variable | Default | Description |
|
||||
| -------------------- | -------------------- | ------- | -------------------------------- |
|
||||
| `--app-id`, `-a` | `APP_ID` | | The application id to be stopped |
|
||||
| `--help`, `-h` | | | Print this help message |
|
||||
| `--run-file`, `-f`  | | | Stop running multiple applications at once using a Multi-App Run template file. Currently in [alpha]({{< ref "support-preview-features.md" >}}) and only available on Linux/MacOS |
|
||||
|
||||
### Examples
|
||||
|
||||
|
|
|
|||
|
|
@ -9,9 +9,9 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To setup an Azure Event Grid binding create a component of type `bindings.azure.eventgrid`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
|
||||
See [this](https://docs.microsoft.com/azure/event-grid/) for the Azure Event Grid documentation.
|
||||
|
||||
```yml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -22,29 +22,30 @@ spec:
|
|||
type: bindings.azure.eventgrid
|
||||
version: v1
|
||||
metadata:
|
||||
# Required Input Binding Metadata
|
||||
- name: azureTenantId
|
||||
value: "[AzureTenantId]"
|
||||
- name: azureSubscriptionId
|
||||
value: "[AzureSubscriptionId]"
|
||||
- name: azureClientId
|
||||
value: "[ClientId]"
|
||||
- name: azureClientSecret
|
||||
value: "[ClientSecret]"
|
||||
- name: subscriberEndpoint
|
||||
value: "[SubscriberEndpoint]"
|
||||
- name: handshakePort
|
||||
# Make sure to pass this as a string, with quotes around the value
|
||||
value: "[HandshakePort]"
|
||||
- name: scope
|
||||
value: "[Scope]"
|
||||
# Optional Input Binding Metadata
|
||||
- name: eventSubscriptionName
|
||||
value: "[EventSubscriptionName]"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -55,57 +56,99 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|------------|-----|---------|
|
||||
| `accessKey` | Y | Output | The Access Key to be used for publishing an Event Grid Event to a custom topic | `"accessKey"` |
|
||||
| `topicEndpoint` | Y | Output | The topic endpoint in which this output binding should publish events | `"topic-endpoint"` |
|
||||
| `azureTenantId` | Y | Input | The Azure tenant ID of the Event Grid resource | `"tenantID"` |
|
||||
| `azureSubscriptionId` | Y | Input | The Azure subscription ID of the Event Grid resource | `"subscriptionId"` |
|
||||
| `azureClientId` | Y | Input | The client ID that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | `"clientId"` |
|
||||
| `azureClientSecret` | Y | Input | The client secret that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | `"clientSecret"` |
|
||||
| `subscriberEndpoint` | Y | Input | The HTTPS endpoint of the webhook Event Grid sends events (formatted as Cloud Events) to. If you're not re-writing URLs on ingress, it should be in the form of: `"https://[YOUR HOSTNAME]/<path>"`<br/>If testing on your local machine, you can use something like [ngrok](https://ngrok.com) to create a public endpoint. | `"https://[YOUR HOSTNAME]/<path>"` |
|
||||
| `handshakePort` | Y | Input | The container port that the input binding listens on when receiving events on the webhook | `"9000"` |
|
||||
| `scope` | Y | Input | The identifier of the resource to which the event subscription needs to be created or updated. See the [scope section](#scope) for more details | `"/subscriptions/{subscriptionId}/"` |
|
||||
| `eventSubscriptionName` | N | Input | The name of the event subscription. Event subscription names must be between 3 and 64 characters long and should use alphanumeric letters only | `"name"` |
|
||||
|
||||
### Scope
|
||||
|
||||
Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, a resource group, a top-level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
|
||||
|
||||
- `/subscriptions/{subscriptionId}/` for a subscription
|
||||
- `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}` for a resource group
|
||||
- `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}` for a resource
|
||||
- `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}` for an Event Grid topic
|
||||
|
||||
> Values in braces {} should be replaced with actual values.
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports both **input and output** binding interfaces.
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `create`: publishes a message on the Event Grid topic
|
||||
|
||||
## Azure AD credentials
|
||||
|
||||
The Azure Event Grid binding requires an Azure AD application and service principal for two reasons:
|
||||
|
||||
- Creating an [event subscription](https://docs.microsoft.com/azure/event-grid/concepts#event-subscriptions) when Dapr is started (and updating it if the Dapr configuration changes)
|
||||
- Authenticating messages delivered by Event Hubs to your application.
|
||||
|
||||
Requirements:
|
||||
|
||||
- The [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) installed.
|
||||
- [PowerShell 7](https://learn.microsoft.com/powershell/scripting/install/installing-powershell) installed.
|
||||
- [Az module for PowerShell](https://learn.microsoft.com/powershell/azure/install-az-ps) for PowerShell installed:
|
||||
`Install-Module Az -Scope CurrentUser -Repository PSGallery -Force`
|
||||
- [Microsoft.Graph module for PowerShell](https://learn.microsoft.com/powershell/microsoftgraph/installation) for PowerShell installed:
|
||||
`Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force`
|
||||
|
||||
For the first purpose, you will need to [create an Azure Service Principal](https://learn.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal). After creating it, take note of the Azure AD application's **clientID** (a UUID), and run the following script with the Azure CLI:
|
||||
|
||||
```bash
|
||||
# Set the client ID of the app you created
|
||||
CLIENT_ID="..."
|
||||
# Scope of the resource, usually in the format:
|
||||
# `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}`
|
||||
SCOPE="..."
|
||||
|
||||
# First ensure that Azure Resource Manager provider is registered for Event Grid
|
||||
az provider register --namespace "Microsoft.EventGrid"
|
||||
az provider show --namespace "Microsoft.EventGrid" --query "registrationState"
|
||||
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
|
||||
az role assignment create --assignee "$CLIENT_ID" --role "EventGrid EventSubscription Contributor" --scopes "$SCOPE"
|
||||
```
|
||||
|
||||
For the second purpose, first download a script:
|
||||
|
||||
```sh
|
||||
curl -LO "https://raw.githubusercontent.com/dapr/components-contrib/master/.github/infrastructure/conformance/azure/setup-eventgrid-sp.ps1"
|
||||
```
|
||||
|
||||
Then, **using PowerShell** (`pwsh`), run:
|
||||
|
||||
```powershell
|
||||
# Set the client ID of the app you created
|
||||
$clientId = "..."
|
||||
|
||||
# Authenticate with the Microsoft Graph
|
||||
# You may need to add the -TenantId flag to the next command if needed
|
||||
Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
|
||||
./setup-eventgrid-sp.ps1 $clientId
|
||||
```
|
||||
|
||||
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Azure AD tenant (this is related to permissions on the Azure AD directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
|
||||
|
||||
### Testing locally
|
||||
|
||||
- Install [ngrok](https://ngrok.com/download)
|
||||
- Run locally using a custom port, for example `9000`, for handshakes
|
||||
|
||||
```bash
|
||||
# Using port 9000 as an example
|
||||
ngrok http --host-header=localhost 9000
|
||||
```
|
||||
|
||||
- Configure ngrok's HTTPS endpoint and the custom port in the input binding metadata
|
||||
- Run Dapr
|
||||
|
||||
```bash
|
||||
|
|
@ -115,19 +158,19 @@ dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
|
|||
|
||||
### Testing on Kubernetes
|
||||
|
||||
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
|
||||
|
||||
To get started, first create a `dapr-annotations.yaml` file for Dapr annotations:
|
||||
|
||||
```yaml
|
||||
controller:
|
||||
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"
    dapr.io/app-port: "80"
|
||||
```
|
||||
|
||||
Then install the NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations:
|
||||
|
||||
```bash
|
||||
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
|
||||
|
|
@ -137,12 +180,13 @@ helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yam
|
|||
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'
|
||||
```
|
||||
|
||||
If deploying to Azure Kubernetes Service, you can follow [the official Microsoft documentation for rest of the steps](https://docs.microsoft.com/azure/aks/ingress-tls):
|
||||
|
||||
- Add an A record to your DNS zone
|
||||
- Install cert-manager
|
||||
- Create a CA cluster issuer
|
||||
|
||||
The final step for enabling communication between Event Grid and Dapr is to define `http` and custom ports for your app's service and an `ingress` in Kubernetes. This example uses a .NET Core web API and the Dapr default ports, plus custom port 9000 for handshakes.
|
||||
|
||||
```yaml
|
||||
# dotnetwebapi.yaml
|
||||
|
|
@ -217,7 +261,7 @@ spec:
|
|||
imagePullPolicy: Always
|
||||
```
|
||||
|
||||
Deploy the binding and app (including ingress) to Kubernetes
|
||||
|
||||
```bash
|
||||
# Deploy Dapr components
|
||||
|
|
@ -226,7 +270,7 @@ kubectl apply -f eventgrid.yaml
|
|||
kubectl apply -f dotnetwebapi.yaml
|
||||
```
|
||||
|
||||
> **Note:** This manifest deploys everything to Kubernetes' default namespace.
|
||||
|
||||
#### Troubleshooting possible issues with Nginx controller
|
||||
|
||||
|
|
|
|||
|
|
@ -9,7 +9,7 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To setup an Azure Event Hubs binding, create a component of type `bindings.azure.eventhubs`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
|
||||
See [this](https://docs.microsoft.com/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub.
|
||||
|
||||
|
|
@ -22,18 +22,39 @@ spec:
|
|||
type: bindings.azure.eventhubs
|
||||
version: v1
|
||||
metadata:
|
||||
- name: connectionString # Azure EventHubs connection string
|
||||
value: "Endpoint=sb://****"
|
||||
- name: consumerGroup # EventHubs consumer group
|
||||
value: "group1"
|
||||
- name: storageAccountName # Azure Storage Account Name
|
||||
value: "accountName"
|
||||
- name: storageAccountKey # Azure Storage Account Key
|
||||
value: "accountKey"
|
||||
- name: storageContainerName # Azure Storage Container Name
|
||||
value: "containerName"
|
||||
- name: partitionID # (Optional) PartitionID to send and receive events
|
||||
value: 0
|
||||
# Hub name ("topic")
|
||||
- name: eventHub
|
||||
value: "mytopic"
|
||||
- name: consumerGroup
|
||||
value: "myapp"
|
||||
# Either connectionString or eventHubNamespace is required
|
||||
# Use connectionString when *not* using Azure AD
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
|
||||
# Use eventHubNamespace when using Azure AD
|
||||
- name: eventHubNamespace
|
||||
value: "namespace"
|
||||
- name: enableEntityManagement
|
||||
value: "false"
|
||||
# The following four properties are needed only if enableEntityManagement is set to true
|
||||
- name: resourceGroupName
|
||||
value: "test-rg"
|
||||
- name: subscriptionID
|
||||
value: "value of Azure subscription ID"
|
||||
- name: partitionCount
|
||||
value: "1"
|
||||
- name: messageRetentionInDays
|
||||
value: "3"
|
||||
# Checkpoint store attributes
|
||||
- name: storageAccountName
|
||||
value: "myeventhubstorage"
|
||||
- name: storageAccountKey
|
||||
value: "112233445566778899"
|
||||
- name: storageContainerName
|
||||
value: "myeventhubstoragecontainer"
|
||||
# Alternative to passing storageAccountKey
|
||||
- name: storageConnectionString
|
||||
value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -42,25 +63,31 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|------------|-----|---------|
|
||||
| connectionString | Y | Output | The [EventHubs connection string](https://docs.microsoft.com/azure/event-hubs/authorize-access-shared-access-signature). Note that this is the EventHub itself and not the EventHubs namespace. Make sure to use the child EventHub shared access policy connection string | `"Endpoint=sb://****"` |
|
||||
| consumerGroup | Y | Output | The name of an [EventHubs Consumer Group](https://docs.microsoft.com/azure/event-hubs/event-hubs-features#consumer-groups) to listen on | `"group1"` |
|
||||
| storageAccountName | Y | Output | The name of the account of the Azure Storage account to persist checkpoints data on | `"accountName"` |
|
||||
| storageAccountKey | Y* | Output | The account key for the Azure Storage account to persist checkpoints data on. ***Not required if using AAD authentication.** | `"accountKey"` |
|
||||
| storageContainerName | Y | Output | The name of the container in the Azure Storage account to persist checkpoints data on | `"containerName"` |
|
||||
| partitionID | N | Output | ID of the partition to send and receive events | `0` |
|
||||
| eventHub | N | Output | The name of the EventHubs hub. **Required if using AAD authentication.** | `eventHubsNamespace-hubName` |
|
||||
| eventHubNamespace | N | Output | The name of the EventHubs namespace. **Required if using AAD authentication.** | `eventHubsNamespace` |
|
||||
| `eventHub` | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Azure AD authentication or if the connection string doesn't contain an `EntityPath` value | `mytopic` |
|
||||
| `connectionString` | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Azure AD Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
|
||||
| `eventHubNamespace` | Y* | Input/Output | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Azure AD Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
|
||||
| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
|
||||
| `resourceGroupName` | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
|
||||
| `subscriptionID` | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
|
||||
| `partitionCount` | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
|
||||
| `messageRetentionInDays` | N | Input/Output | Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"90"`
|
||||
| `consumerGroup` | Y | Input | The name of the [Event Hubs Consumer Group](https://docs.microsoft.com/azure/event-hubs/event-hubs-features#consumer-groups) to listen on | `"group1"` |
|
||||
| `storageAccountName` | Y | Input | Storage account name to use for the checkpoint store. |`"myeventhubstorage"`
|
||||
| `storageAccountKey` | Y* | Input | Storage account key for the checkpoint store account.<br>* When using Azure AD, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
|
||||
| `storageConnectionString` | Y* | Input | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`
|
||||
| `storageContainerName` | Y | Input | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
The Azure Event Hubs pubsub component supports authentication using all Azure Active Directory mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
The Azure Event Hubs binding component supports authentication using all Azure Active Directory mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
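As an illustrative sketch only, a binding component using a service principal (client secret) might look like the following; the `azureTenantId`, `azureClientId`, and `azureClientSecret` values are placeholders, the component name is hypothetical, and `storageAccountKey` is omitted on the assumption that the same service principal can also access the checkpoint storage account.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs-binding        # hypothetical component name
spec:
  type: bindings.azure.eventhubs
  version: v1
  metadata:
    # Azure AD (service principal) authentication
    - name: azureTenantId
      value: "***"
    - name: azureClientId
      value: "***"
    - name: azureClientSecret
      value: "***"
    - name: eventHub
      value: "mytopic"
    - name: eventHubNamespace
      value: "namespace"
    - name: consumerGroup
      value: "myapp"
    # Checkpoint store, accessed with the same Azure AD identity
    - name: storageAccountName
      value: "myeventhubstorage"
    - name: storageContainerName
      value: "myeventhubstoragecontainer"
```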
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `create`
|
||||
- `create`: publishes a new message to Azure Event Hubs
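For example, assuming a component named `eventhubs-binding` (a placeholder name) and the default Dapr HTTP port of 3500, the `create` operation can be invoked through the bindings API:

```bash
curl -X POST http://localhost:3500/v1.0/bindings/eventhubs-binding \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "data": "Hello Event Hubs"
      }'
```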
|
||||
|
||||
## Input Binding to Azure IoT Hub Events
|
||||
|
||||
|
|
@ -79,7 +106,7 @@ The device-to-cloud events created by Azure IoT Hub devices will contain additio
|
|||
|
||||
For example, the headers of an HTTP `Read()` response would contain:
|
||||
|
||||
```nodejs
|
||||
```js
|
||||
{
|
||||
'user-agent': 'fasthttp',
|
||||
'host': '127.0.0.1:3000',
|
||||
|
|
|
|||
|
|
@ -20,6 +20,18 @@ spec:
|
|||
metadata:
|
||||
- name: url
|
||||
value: http://something.com
|
||||
- name: MTLSRootCA
|
||||
value: /Users/somepath/root.pem # OPTIONAL <path to root CA> or <pem encoded string>
|
||||
- name: MTLSClientCert
|
||||
value: /Users/somepath/client.pem # OPTIONAL <path to client cert> or <pem encoded string>
|
||||
- name: MTLSClientKey
|
||||
value: /Users/somepath/client.key # OPTIONAL <path to client key> or <pem encoded string>
|
||||
- name: securityToken # OPTIONAL <token to include as a header on HTTP requests>
|
||||
secretKeyRef:
|
||||
name: mysecret
|
||||
key: mytoken
|
||||
- name: securityTokenHeader
|
||||
value: "Authorization: Bearer" # OPTIONAL <header name for the security token>
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
|
@ -27,6 +39,11 @@ spec:
|
|||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|--------|--------|---------|
|
||||
| url | Y | Output |The base URL of the HTTP endpoint to invoke | `http://host:port/path`, `http://myservice:8000/customers`
|
||||
| MTLSRootCA | N | Output | Path to the root CA certificate or a PEM-encoded string |
|
||||
| MTLSClientCert | N | Output | Path to the client certificate or a PEM-encoded string |
|
||||
| MTLSClientKey | N | Output | Path to the client private key or a PEM-encoded string |
|
||||
| securityToken | N | Output |The value of a token to be added to an HTTP request as a header. Used together with `securityTokenHeader` |
|
||||
| securityTokenHeader | N | Output | The name of the HTTP request header that carries the `securityToken`. Used together with `securityToken` |
|
||||
|
||||
## Binding support
|
||||
|
||||
|
|
@ -292,6 +309,17 @@ curl -d '{ "operation": "get" }' \
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Using mTLS or enabling client TLS authentication along with HTTPS
|
||||
You can configure the HTTP binding to use mTLS or client TLS authentication along with HTTPS by providing the `MTLSRootCA`, `MTLSClientCert`, and `MTLSClientKey` metadata fields in the binding component.
|
||||
|
||||
These fields can be passed as a file path or as a PEM-encoded string.
|
||||
- If the file path is provided, the file is read and the contents are used.
|
||||
- If a PEM-encoded string is provided, the string is used as is.
|
||||
When these fields are configured, the Dapr sidecar uses the provided certificate to authenticate itself with the server during the TLS handshake process.
|
||||
|
||||
### When to use:
|
||||
Use this when the server that the HTTP binding communicates with requires mTLS or client TLS authentication.
|
||||
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
|||
|
|
@ -11,6 +11,9 @@ aliases:
|
|||
|
||||
To set up a Kafka binding, create a component of type `bindings.kafka`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration. For details on using `secretKeyRef`, see the guide on [how to reference secrets in components]({{< ref component-secrets.md >}}).
|
||||
|
||||
All component metadata field values can carry [templated metadata values]({{< ref "component-schema.md#templated-metadata-values" >}}), which are resolved on Dapr sidecar startup.
|
||||
For example, you can choose to use `{namespace}` as the `consumerGroup`, to enable using the same `appId` in different namespaces using the same topics as described in [this article]({{< ref "howto-namespace.md#with-namespace-consumer-groups">}}).
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
|
|
|
|||
|
|
@ -0,0 +1,118 @@
|
|||
---
|
||||
type: docs
|
||||
title: "KubeMQ binding spec"
|
||||
linkTitle: "KubeMQ"
|
||||
description: "Detailed documentation on the KubeMQ binding component"
|
||||
aliases:
|
||||
- "/operations/components/setup-bindings/supported-bindings/kubemq/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up a KubeMQ binding, create a component of type `bindings.kubemq`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: binding-topic
|
||||
spec:
|
||||
type: bindings.kubemq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: address
|
||||
value: localhost:50000
|
||||
- name: channel
|
||||
value: queue1
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|------------------------------------------------------------------------------------------------------------------------------|----------------------------------------|
|
||||
| address | Y | Address of the KubeMQ server | `"localhost:50000"` |
|
||||
| channel | Y | The Queue channel name | `queue1` |
|
||||
| authToken | N | Auth JWT token for connection. Check out [KubeMQ Authentication](https://docs.kubemq.io/learn/access-control/authentication) | `ew...` |
|
||||
| autoAcknowledged | N | Sets whether a received queue message is automatically acknowledged | `true` or `false` (default is `false`) |
|
||||
| pollMaxItems | N | Sets the number of messages to poll on every connection | `1` |
|
||||
| pollTimeoutSeconds | N | Sets the time in seconds for each poll interval | `3600` |
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports both **input and output** binding interfaces.
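As a rough sketch, assuming the component supports the standard bindings `create` operation, you can publish a message to the `binding-topic` component defined above through the Dapr sidecar (default HTTP port 3500):

```bash
curl -X POST http://localhost:3500/v1.0/bindings/binding-topic \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "data": "hello KubeMQ"
      }'
```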
|
||||
|
||||
|
||||
## Create a KubeMQ broker
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
1. Obtain a KubeMQ key by visiting [https://account.kubemq.io/login/register](https://account.kubemq.io/login/register) and registering for a key.
|
||||
2. Wait for an email confirmation with your key
|
||||
|
||||
You can run a KubeMQ broker with Docker:
|
||||
|
||||
```bash
|
||||
docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq
|
||||
```
|
||||
You can then interact with the server using the client port: `localhost:50000`
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
1. Obtain a KubeMQ key by visiting [https://account.kubemq.io/login/register](https://account.kubemq.io/login/register) and registering for a key.
|
||||
2. Wait for an email confirmation with your key
|
||||
|
||||
Then run the following kubectl commands:
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://deploy.kubemq.io/init
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://deploy.kubemq.io/key/<your-key>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Install KubeMQ CLI
|
||||
Go to [KubeMQ CLI](https://github.com/kubemq-io/kubemqctl/releases) and download the latest version of the CLI.
|
||||
|
||||
## Browse KubeMQ Dashboard
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
<!-- IGNORE_LINKS -->
|
||||
Open a browser and navigate to [http://localhost:8080](http://localhost:8080)
|
||||
<!-- END_IGNORE -->
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
With kubemqctl installed, run the following command:
|
||||
|
||||
```bash
|
||||
kubemqctl get dashboard
|
||||
```
|
||||
Or, with kubectl installed, run the port-forward command:
|
||||
|
||||
```bash
|
||||
kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## KubeMQ Documentation
|
||||
Visit [KubeMQ Documentation](https://docs.kubemq.io/) for more information.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Bindings building block]({{< ref bindings >}})
|
||||
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
|
||||
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
|
||||
- [Bindings API reference]({{< ref bindings_api.md >}})
|
||||
|
|
@ -21,18 +21,19 @@ spec:
|
|||
type: bindings.mqtt3
|
||||
version: v1
|
||||
metadata:
|
||||
- name: url
|
||||
value: "tcp://[username][:password]@host.domain[:port]"
|
||||
- name: topic
|
||||
value: "mytopic"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "true"
|
||||
- name: backOffMaxRetries
|
||||
value: "0"
|
||||
- name: url
|
||||
value: "tcp://[username][:password]@host.domain[:port]"
|
||||
- name: topic
|
||||
value: "mytopic"
|
||||
- name: consumerID
|
||||
value: "myapp"
|
||||
# Optional
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backOffMaxRetries
|
||||
value: "0"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -43,20 +44,19 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|---------|
|
||||
| url | Y | Input/Output | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
|
||||
| topic | Y | Input/Output | The topic to listen on or send events to. | `"mytopic"` |
|
||||
| consumerID | N | Input/Output | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | `"myMqttClientApp"`
|
||||
| qos | N | Input/Output | Indicates the Quality of Service Level (QoS) of the message. Defaults to `0`. |`1`
|
||||
| retain | N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| cleanSession | N | Input/Output | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"`. Defaults to `"true"`. | `"true"`, `"false"`
|
||||
| caCert | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientCert | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientKey | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
|
||||
| backOffMaxRetries | N | Input | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`
|
||||
| `url` | Y | Input/Output | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
|
||||
| `topic` | Y | Input/Output | The topic to listen on or send events to. | `"mytopic"` |
|
||||
| `consumerID` | Y | Input/Output | The client ID used to connect to the MQTT broker. | `"myMqttClientApp"`
|
||||
| `retain` | N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| `cleanSession` | N | Input/Output | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"`. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| `caCert` | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below
|
||||
| `clientCert` | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | See example below
|
||||
| `clientKey` | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | See example below
|
||||
| `backOffMaxRetries` | N | Input | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`
|
||||
|
||||
### Communication using TLS
|
||||
|
||||
To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
|
||||
To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -67,35 +67,41 @@ spec:
|
|||
type: bindings.mqtt3
|
||||
version: v1
|
||||
metadata:
|
||||
- name: url
|
||||
value: "ssl://host.domain[:port]"
|
||||
- name: topic
|
||||
value: "topic1"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
- name: caCert
|
||||
value: ${{ myLoadedCACert }}
|
||||
- name: clientCert
|
||||
value: ${{ myLoadedClientCert }}
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myMqttClientKey
|
||||
key: myMqttClientKey
|
||||
auth:
|
||||
secretStore: <SECRET_STORE_NAME>
|
||||
- name: url
|
||||
value: "ssl://host.domain[:port]"
|
||||
- name: topic
|
||||
value: "topic1"
|
||||
- name: consumerID
|
||||
value: "myapp"
|
||||
# TLS configuration
|
||||
- name: caCert
|
||||
value: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
- name: clientCert
|
||||
value: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myMqttClientKey
|
||||
key: myMqttClientKey
|
||||
# Optional
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
```
|
||||
|
||||
Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
|
||||
> Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
|
||||
|
||||
### Consuming a shared topic
|
||||
|
||||
When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each `dapr run` with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component's `consumerID` metadata with a `{uuid}` tag, which will give each instance a randomly generated `consumerID` value on start up. For example:
|
||||
When consuming a shared topic, each consumer must have a unique identifier. If you run multiple instances of an application, configure the component's `consumerID` metadata with a `{uuid}` tag, which gives each instance a randomly generated `consumerID` value on start up. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -113,12 +119,10 @@ spec:
|
|||
value: "tcp://admin:public@localhost:1883"
|
||||
- name: topic
|
||||
value: "topic1"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
value: "true"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
```
|
||||
|
|
@ -127,13 +131,15 @@ spec:
|
|||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
> In this case, the value of the consumer ID is random every time Dapr restarts, so you should set `cleanSession` to `true` as well.
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports both **input and output** binding interfaces.
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `create`
|
||||
- `create`: publishes a new message
|
||||
|
||||
## Set topic per-request
|
||||
|
||||
|
|
@ -149,6 +155,20 @@ You can override the topic in component metadata on a per-request basis:
|
|||
}
|
||||
```
|
||||
|
||||
## Set retain property per-request
|
||||
|
||||
You can override the retain property in component metadata on a per-request basis:
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "create",
|
||||
"metadata": {
|
||||
"retain": "true"
|
||||
},
|
||||
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
|
||||
}
|
||||
```
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
|
|
|
|||
|
|
@ -23,25 +23,39 @@ spec:
|
|||
version: v1
|
||||
metadata:
|
||||
- name: connectionString # Required when not using Azure Authentication.
|
||||
value: "Endpoint=sb://************"
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
- name: queueName
|
||||
value: queue1
|
||||
# - name: ttlInSeconds # Optional
|
||||
# value: 86400
|
||||
# - name: maxRetriableErrorsPerSec # Optional
|
||||
# - name: timeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: handlerTimeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: disableEntityManagement # Optional
|
||||
# value: "false"
|
||||
# - name: maxDeliveryCount # Optional
|
||||
# value: 3
|
||||
# - name: lockDurationInSec # Optional
|
||||
# value: 60
|
||||
# - name: lockRenewalInSec # Optional
|
||||
# value: 20
|
||||
# - name: maxActiveMessages # Optional
|
||||
# value: 10000
|
||||
# - name: maxConcurrentHandlers # Optional
|
||||
# value: 10
|
||||
# - name: defaultMessageTimeToLiveInSec # Optional
|
||||
# value: 10
|
||||
# - name: autoDeleteOnIdleInSec # Optional
|
||||
# value: 3600
|
||||
# - name: minConnectionRecoveryInSec # Optional
|
||||
# value: 2
|
||||
# - name: maxConnectionRecoveryInSec # Optional
|
||||
# value: 300
|
||||
# - name: maxActiveMessages # Optional
|
||||
# value: 1
|
||||
# - name: maxConcurrentHandlers # Optional
|
||||
# value: 1
|
||||
# - name: lockRenewalInSec # Optional
|
||||
# value: 20
|
||||
# - name: timeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: maxRetriableErrorsPerSec # Optional
|
||||
# value: 10
|
||||
# - name: publishMaxRetries # Optional
|
||||
# value: 5
|
||||
# - name: publishInitialRetryIntervalInMs # Optional
|
||||
# value: 500
|
||||
```
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
|
|
@ -52,16 +66,26 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|-----------------|----------|---------|
|
||||
| `connectionString` | Y | Input/Output | The Service Bus connection string. Required unless using Azure AD authentication. | `"Endpoint=sb://************"` |
|
||||
| `namespaceName`| N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
|
||||
| `queueName` | Y | Input/Output | The Service Bus queue name. Queue names are case-insensitive and will always be forced to lowercase. | `"queuename"` |
|
||||
| `ttlInSeconds` | N | Output | Parameter to set the default message [time to live](https://docs.microsoft.com/azure/service-bus-messaging/message-expiration). If this parameter is omitted, messages will expire after 14 days. See [also](#specifying-a-ttl-per-message) | `86400` |
|
||||
| `maxRetriableErrorsPerSec` | N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10` |
|
||||
| `timeoutInSec` | N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. *Note that this option impacts network calls and it's unrelated to the TTL applies to messages*. Default: `60` | `60` |
|
||||
| `namespaceName`| N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
|
||||
| `disableEntityManagement` | N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `lockDurationInSec` | N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
|
||||
| `autoDeleteOnIdleInSec` | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
|
||||
| `defaultMessageTimeToLiveInSec` | N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | `10`
|
||||
| `maxDeliveryCount` | N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
|
||||
| `minConnectionRecoveryInSec` | N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
|
||||
| `maxConnectionRecoveryInSec` | N | Input/Output | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
|
||||
| `maxActiveMessages` | N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1` | `1`
|
||||
| `handlerTimeoutInSec`| N | Input | Timeout for invoking the app's handler. Default: `0` (no timeout) | `30`
|
||||
| `minConnectionRecoveryInSec` | N | Input | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5` |
|
||||
| `maxConnectionRecoveryInSec` | N | Input | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the binding waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600` |
|
||||
| `maxActiveMessages` | N |Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1` | `1`
|
||||
| `maxConcurrentHandlers` | N |Defines the maximum number of concurrent message handlers. Default: `1`. | `1`
|
||||
| `lockRenewalInSec` | N |Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `timeoutInSec` | N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. *Note that this option impacts network calls and is unrelated to the TTL applied to messages*. Default: `60` | `60` |
|
||||
| `lockRenewalInSec` | N | Input | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N | Input | Defines the maximum number of messages being processed or buffered at once. This should be at least as large as the maximum number of concurrent handlers. Default: `1` | `2000`
|
||||
| `maxConcurrentHandlers` | N | Input | Defines the maximum number of concurrent message handlers; set to `0` for unlimited. Default: `1` | `10`
|
||||
| `maxRetriableErrorsPerSec` | N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10`
|
||||
| `publishMaxRetries` | N | Output | The maximum number of retries when Azure Service Bus responds with "too busy" in order to throttle messages. Default: `5` | `5`
|
||||
| `publishInitialRetryIntervalInMs` | N | Output | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
|
||||
|
|
@ -100,15 +124,13 @@ This component supports both **input and output** binding interfaces.
|
|||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `create`
|
||||
- `create`: publishes a message to the specified queue
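For example, assuming a component named `servicebus-queue` (a placeholder name) bound to `queue1`, a message can be published through the bindings API on the default Dapr HTTP port:

```bash
curl -X POST http://localhost:3500/v1.0/bindings/servicebus-queue \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "data": "Hi from Dapr"
      }'
```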
|
||||
|
||||
## Specifying a TTL per message
|
||||
|
||||
Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
|
||||
Time to live can be defined on a per-queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at the queue level.
|
||||
|
||||
To set time to live at message level use the `metadata` section in the request body during the binding invocation.
|
||||
|
||||
The field name is `ttlInSeconds`.
|
||||
To set the time to live at the message level, use the `metadata` section in the request body during the binding invocation: the field name is `ttlInSeconds`.
|
||||
|
||||
{{< tabs "Linux">}}
|
||||
|
||||
|
|
|
|||
|
|
@ -31,8 +31,12 @@ spec:
|
|||
# value: "60"
|
||||
# - name: decodeBase64
|
||||
# value: "false"
|
||||
# - name: encodeBase64
|
||||
# value: "false"
|
||||
# - name: endpoint
|
||||
# value: "http://127.0.0.1:10001"
|
||||
# - name: visibilityTimeout
|
||||
# value: "30s"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -47,8 +51,10 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| `accountKey` | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Azure AD authentication. | `"access-key"` |
|
||||
| `queueName` | Y | Input/Output | The name of the Azure Storage queue | `"myqueue"` |
|
||||
| `ttlInSeconds` | N | Output | Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See [also](#specifying-a-ttl-per-message) | `"60"` |
|
||||
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
|
||||
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Storage Queues. (In case of saving a file with binary content). Defaults to `false` | `true`, `false` |
|
||||
| `encodeBase64` | N | Output | If enabled, base64-encodes the data payload before uploading to Azure Storage Queues. Default: `false`. | `true`, `false` |
|
||||
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10001"` or `"https://accountName.queue.example.com"` |
|
||||
| `visibilityTimeout` | N | Input | Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds. | `"100s"` |
|
||||
|
||||
### Azure Active Directory (Azure AD) authentication
|
||||
|
||||
|
|
|
|||
|
|
@ -7,6 +7,10 @@ aliases:
|
|||
- "/operations/components/setup-bindings/supported-bindings/twitter/"
|
||||
---
|
||||
|
||||
{{% alert title="Deprecation notice" color="warning" %}}
|
||||
The Twitter binding component has been deprecated and will be removed in a future release. See [this GitHub issue](https://github.com/dapr/components-contrib/issues/2503) for details.
|
||||
{{% /alert %}}
|
||||
|
||||
## Component format
|
||||
|
||||
To set up a Twitter binding, create a component of type `bindings.twitter`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Configuration store component specs"
|
||||
linkTitle: "Configuration stores"
|
||||
weight: 4500
|
||||
weight: 5000
|
||||
description: The supported configuration stores that interface with Dapr
|
||||
aliases:
|
||||
- "/operations/components/setup-configuration-store/supported-configuration-stores/"
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Lock component specs"
|
||||
linkTitle: "Locks"
|
||||
weight: 4500
|
||||
weight: 6000
|
||||
description: The supported locks that interface with Dapr
|
||||
no_list: true
|
||||
---
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Middleware component specs"
|
||||
linkTitle: "Middleware"
|
||||
weight: 6000
|
||||
weight: 9000
|
||||
description: List of all the supported middleware components that can be injected in Dapr's processing pipeline.
|
||||
no_list: true
|
||||
aliases:
|
||||
|
|
|
|||
|
|
@ -7,7 +7,7 @@ aliases:
|
|||
- /developing-applications/middleware/supported-middleware/middleware-opa/
|
||||
---
|
||||
|
||||
The Open Policy Agent (OPA) [HTTP middleware]({{< ref middleware.md >}}) applys [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
|
||||
The Open Policy Agent (OPA) [HTTP middleware]({{< ref middleware.md >}}) applies [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
|
||||
|
||||
## Component format
|
||||
|
||||
|
|
@ -30,6 +30,11 @@ spec:
|
|||
- name: defaultStatus
|
||||
value: 403
|
||||
|
||||
# `readBody` controls whether the middleware reads the entire request body in-memory and makes it
|
||||
# available for policy decisions.
|
||||
- name: readBody
|
||||
value: "false"
|
||||
|
||||
# `rego` is the Open Policy Agent policy to evaluate. Required.
|
||||
# The policy package must be http and the policy must set data.http.allow
|
||||
- name: rego
|
||||
|
|
@ -66,15 +71,16 @@ spec:
|
|||
}
|
||||
```
|
||||
|
||||
You can prototype and experiment with policies using the [official opa playground](https://play.openpolicyagent.org). For example, [you can find the example policy above here](https://play.openpolicyagent.org/p/oRIDSo6OwE).
|
||||
You can prototype and experiment with policies using the [official OPA playground](https://play.openpolicyagent.org). For example, [you can find the example policy above here](https://play.openpolicyagent.org/p/oRIDSo6OwE).
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Details | Example |
|
||||
|--------|---------|---------|
|
||||
| rego | The Rego policy language | See above |
|
||||
| defaultStatus | The status code to return for denied responses | `"https://accounts.google.com"`, `"https://login.salesforce.com"`
|
||||
| includedHeaders | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
|
||||
| `rego` | The Rego policy language | See above |
|
||||
| `defaultStatus` | The status code to return for denied responses | `"403"`
|
||||
| `readBody` | If set to `true` (the default value), the body of each request is read fully in-memory and can be used to make policy decisions. If your policy doesn't depend on inspecting the request body, consider disabling this (setting to `false`) for significant performance improvements. | `"false"`
|
||||
| `includedHeaders` | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
|
||||
|
||||
## Dapr configuration
|
||||
|
||||
|
|
@ -193,6 +199,7 @@ allow = { "allow": true, "additional_headers": { "X-JWT-Payload": payload } } {
|
|||
```
|
||||
|
||||
### Result structure
|
||||
|
||||
```go
|
||||
type Result bool
|
||||
// or
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Name resolution provider component specs"
|
||||
linkTitle: "Name resolution"
|
||||
weight: 5000
|
||||
weight: 8000
|
||||
description: The supported name resolution providers that interface with Dapr service invocation
|
||||
no_list: true
|
||||
---
|
||||
|
|
|
|||
|
|
@ -11,6 +11,9 @@ aliases:
|
|||
|
||||
To set up Apache Kafka pubsub, create a component of type `pubsub.kafka`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration. For details on using `secretKeyRef`, see the guide on [how to reference secrets in components]({{< ref component-secrets.md >}}).
|
||||
|
||||
All component metadata field values can carry [templated metadata values]({{< ref "component-schema.md#templated-metadata-values" >}}), which are resolved on Dapr sidecar startup.
|
||||
For example, you can choose to use `{namespace}` as the `consumerGroup` to enable using the same `appId` in different namespaces using the same topics as described in [this article]({{< ref "howto-namespace.md#with-namespace-consumer-groups">}}).
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
|
|
@ -23,7 +26,7 @@ spec:
|
|||
- name: brokers # Required. Kafka broker connection setting
|
||||
value: "dapr-kafka.myapp.svc.cluster.local:9092"
|
||||
- name: consumerGroup # Optional. Used for input bindings.
|
||||
value: "group1"
|
||||
value: "{namespace}"
|
||||
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
|
||||
value: "my-dapr-app-id"
|
||||
- name: authType # Required.
|
||||
|
|
@ -108,7 +111,7 @@ spec:
|
|||
value: 200ms
|
||||
- name: version # Optional.
|
||||
value: 0.10.2.0
|
||||
- name: disableTls
|
||||
- name: disableTls
|
||||
value: "true"
|
||||
```
|
||||
|
||||
|
|
@ -198,13 +201,13 @@ spec:
|
|||
|
||||
#### OAuth2 or OpenID Connect
|
||||
|
||||
Setting `authType` to `oidc` enables SASL authentication via the **OAUTHBEARER** mechanism. This supports specifying a bearer token from an external OAuth2 or [OIDC](https://en.wikipedia.org/wiki/OpenID) identity provider. Currently, only the **client_credentials** grant is supported.
|
||||
Setting `authType` to `oidc` enables SASL authentication via the **OAUTHBEARER** mechanism. This supports specifying a bearer token from an external OAuth2 or [OIDC](https://en.wikipedia.org/wiki/OpenID) identity provider. Currently, only the **client_credentials** grant is supported.
|
||||
|
||||
Configure `oidcTokenEndpoint` to the full URL for the identity provider access token endpoint.
|
||||
Configure `oidcTokenEndpoint` to the full URL for the identity provider access token endpoint.
|
||||
|
||||
Set `oidcClientID` and `oidcClientSecret` to the client credentials provisioned in the identity provider.
|
||||
Set `oidcClientID` and `oidcClientSecret` to the client credentials provisioned in the identity provider.
|
||||
|
||||
If `caCert` is specified in the component configuration, the certificate is appended to the system CA trust for verifying the identity provider certificate. Similarly, if `skipVerify` is specified in the component configuration, verification will also be skipped when accessing the identity provider.
|
||||
If `caCert` is specified in the component configuration, the certificate is appended to the system CA trust for verifying the identity provider certificate. Similarly, if `skipVerify` is specified in the component configuration, verification will also be skipped when accessing the identity provider.
|
||||
|
||||
By default, the only scope requested for the token is `openid`; it is **highly** recommended that additional scopes be specified via `oidcScopes` in a comma-separated list and validated by the Kafka broker. If additional scopes are not used to narrow the validity of the access token,
|
||||
a compromised Kafka broker could replay the token to access other services as the Dapr clientID.
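As an illustrative sketch, the relevant entries under the component's `metadata:` section could look like the following; the token endpoint, client ID, secret reference, and extra scope are placeholders to replace with values from your identity provider:

```yaml
  - name: authType
    value: "oidc"
  - name: oidcTokenEndpoint
    value: "https://identity.example.com/v1.0/token"  # placeholder token endpoint
  - name: oidcClientID
    value: "dapr-kafka"                               # placeholder client ID
  - name: oidcClientSecret                            # reference a secret instead of a literal value
    secretKeyRef:
      name: kafka-secrets                             # placeholder secret name
      key: oidcClientSecret
  - name: oidcScopes
    value: "openid,kafka-prod"                        # narrow the token's validity beyond "openid"
```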
|
||||
|
|
@ -293,6 +296,19 @@ auth:
|
|||
secretStore: <SECRET_STORE_NAME>
|
||||
```
|
||||
|
||||
## Sending and receiving multiple messages
|
||||
|
||||
The Apache Kafka component supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.
|
||||
|
||||
### Configuring bulk subscribe
|
||||
|
||||
When subscribing to a topic, you can configure `bulkSubscribe` options. Refer to [Subscription methods]({{< ref subscription-methods >}}) for more details. Learn more about [the bulk subscribe API]({{< ref pubsub-bulk.md >}}).
|
||||
|
||||
| Configuration | Default |
|
||||
|----------|---------|
|
||||
| `maxBulkAwaitDurationMs` | `10000` (10s) |
|
||||
| `maxBulkSubCount` | `80` |
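The following is a rough, non-authoritative sketch of a declarative subscription with bulk subscribe enabled; verify the exact field names against the bulk subscribe API reference linked above, and treat the subscription name, pub/sub component name, topic, and route as placeholders:

```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: kafka-bulk-subscription   # placeholder subscription name
spec:
  pubsubname: kafka-pubsub        # placeholder pub/sub component name
  topic: orders                   # placeholder topic
  routes:
    default: /orders              # placeholder app endpoint
  bulkSubscribe:
    enabled: true
    maxMessagesCount: 80          # compare with the maxBulkSubCount default above
    maxAwaitDurationMs: 10000     # compare with the maxBulkAwaitDurationMs default above
```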
|
||||
|
||||
## Per-call metadata fields
|
||||
|
||||
### Partition Key
|
||||
|
|
|
|||
|
|
@ -105,14 +105,21 @@ Using SQS FIFO (`fifo` metadata field set to `"true"`) per AWS specifications pr
|
|||
|
||||
Specifying `fifoMessageGroupID` limits the number of concurrent consumers of the FIFO queue used to only one but guarantees global ordering of messages published by the app's Dapr sidecars. See [this AWS blog post](https://aws.amazon.com/blogs/compute/solving-complex-ordering-challenges-with-amazon-sqs-fifo-queues/) to better understand the topic of Message Group IDs and FIFO queues.
|
||||
|
||||
To avoid losing the order of messages delivered to consumers, the FIFO configuration for the SQS Component requires the `concurrencyMode` metadata field set to `"single"`.
|
||||
|
||||
#### Default parallel `concurrencyMode`
|
||||
|
||||
Since v1.8.0, the component supports the `"parallel"` `concurrencyMode` as its default mode. In prior versions, the component's default behavior was to call the subscriber with a single message at a time and wait for its response.
|
||||
|
||||
#### SQS dead-letter Queues
|
||||
|
||||
When configuring the PubSub component with SQS dead-letter queues, the metadata fields `messageReceiveLimit` and `sqsDeadLettersQueueName` must both be set to a value. For `messageReceiveLimit`, the value must be greater than `0`, and `sqsDeadLettersQueueName` must not be an empty string.
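As an illustrative sketch, the FIFO and dead-letter settings discussed above could be added under the component's `metadata:` section like this; the group ID and dead-letters queue name are placeholders:

```yaml
  - name: fifo
    value: "true"
  - name: fifoMessageGroupID        # a single group ID guarantees global ordering
    value: "app1-group"             # placeholder group ID
  - name: concurrencyMode           # must be "single" for FIFO queues
    value: "single"
  - name: messageReceiveLimit       # must be greater than 0 when using a dead-letter queue
    value: "10"
  - name: sqsDeadLettersQueueName   # must be non-empty when messageReceiveLimit is set
    value: "myapp-dlq"              # placeholder dead-letters queue name
```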
|
||||
|
||||
{{% alert title="Important" color="warning" %}}
|
||||
When running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes) node/pod already attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec.
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
## Create an SNS/SQS instance
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes" "AWS" >}}
|
||||
|
|
|
|||
|
|
@ -8,7 +8,8 @@ aliases:
|
|||
---
|
||||
|
||||
## Component format
|
||||
To setup Azure Event Hubs pubsub create a component of type `pubsub.azure.eventhubs`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
To set up an Azure Event Hubs pub/sub, create a component of type `pubsub.azure.eventhubs`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pub/sub configuration.
|
||||
Apart from the configuration metadata fields shown below, Azure Event Hubs also supports [Azure Authentication]({{< ref "authenticating-azure.md" >}}) mechanisms.
|
||||
|
||||
```yaml
|
||||
|
|
@ -20,29 +21,34 @@ spec:
|
|||
type: pubsub.azure.eventhubs
|
||||
version: v1
|
||||
metadata:
|
||||
- name: connectionString # Either connectionString or eventHubNamespace. Should not be used when
|
||||
# Azure Authentication mechanism is used.
|
||||
value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
|
||||
- name: eventHubNamespace # Either connectionString or eventHubNamespace. Should be used when
|
||||
# Azure Authentication mechanism is used.
|
||||
value: "namespace"
|
||||
- name: enableEntityManagement
|
||||
value: "false"
|
||||
## The following four properties are needed only if enableEntityManagement is set to true
|
||||
- name: resourceGroupName
|
||||
value: "test-rg"
|
||||
- name: subscriptionID
|
||||
value: "value of Azure subscription ID"
|
||||
- name: partitionCount
|
||||
value: "1"
|
||||
- name: messageRetentionInDays
|
||||
## Subscriber attributes
|
||||
- name: storageAccountName
|
||||
value: "myeventhubstorage"
|
||||
- name: storageAccountKey
|
||||
value: "112233445566778899"
|
||||
- name: storageContainerName
|
||||
value: "myeventhubstoragecontainer"
|
||||
# Either connectionString or eventHubNamespace is required
|
||||
# Use connectionString when *not* using Azure AD
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"
|
||||
# Use eventHubNamespace when using Azure AD
|
||||
- name: eventHubNamespace
|
||||
value: "namespace"
|
||||
- name: enableEntityManagement
|
||||
value: "false"
|
||||
# The following four properties are needed only if enableEntityManagement is set to true
|
||||
- name: resourceGroupName
|
||||
value: "test-rg"
|
||||
- name: subscriptionID
|
||||
value: "value of Azure subscription ID"
|
||||
- name: partitionCount
|
||||
value: "1"
|
||||
- name: messageRetentionInDays
|
||||
value: "3"
|
||||
# Checkpoint store attributes
|
||||
- name: storageAccountName
|
||||
value: "myeventhubstorage"
|
||||
- name: storageAccountKey
|
||||
value: "112233445566778899"
|
||||
- name: storageContainerName
|
||||
value: "myeventhubstoragecontainer"
|
||||
# Alternative to passing storageAccountKey
|
||||
- name: storageConnectionString
|
||||
value: "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<account-key>"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -53,21 +59,24 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| connectionString | Y* | Connection-string for the Event Hub or the Event Hub namespace. *Mutally exclusive with `eventHubNamespace` field. *Not to be used when [Azure Authentication]({{< ref "authenticating-azure.md" >}}) is used | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
|
||||
| eventHubNamespace | N* | The Event Hub Namespace name. *Mutally exclusive with `connectionString` field. *To be used when [Azure Authentication]({{< ref "authenticating-azure.md" >}}) is used | `"namespace"`
|
||||
| storageAccountName | Y | Storage account name to use for the EventProcessorHost |`"myeventhubstorage"`
|
||||
| storageAccountKey | Y* | Storage account key to use for the EventProcessorHost. Can be `secretKeyRef` to use a secret reference. *Omit if using [Azure Authentication]({{< ref "authenticating-azure.md" >}}) and AAD authentication to the storage account is preferred. | `"112233445566778899"`
|
||||
| storageContainerName | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
|
||||
| enableEntityManagement | N | Boolean value to allow management of EventHub namespace. Default: `false` | `"true", "false"`
|
||||
| resourceGroupName | N | Name of the resource group the event hub namespace is a part of. Needed when entity management is enabled | `"test-rg"`
|
||||
| subscriptionID | N | Azure subscription ID value. Needed when entity management is enabled | `"azure subscription id"`
|
||||
| partitionCount | N | Number of partitions for the new event hub. Only used when entity management is enabled. Default: `"1"` | `"2"`
|
||||
| messageRetentionInDays | N | Number of days to retain messages for in the newly created event hub. Used only when entity management is enabled. Default: `"1"` | `"90"`
|
||||
| `connectionString` | Y* | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Azure AD Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
|
||||
| `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Azure AD Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
|
||||
| `storageAccountName` | Y | Storage account name to use for the checkpoint store. |`"myeventhubstorage"`
|
||||
| `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Azure AD, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
|
||||
| `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey=<account-key>"`
|
||||
| `storageContainerName` | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
|
||||
| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
|
||||
| `resourceGroupName` | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
|
||||
| `subscriptionID` | N | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
|
||||
| `partitionCount` | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
|
||||
| `messageRetentionInDays` | N | Number of days to retain messages for in the newly created Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"90"`
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
The Azure Event Hubs pubsub component supports authentication using all Azure Active Directory mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
The Azure Event Hubs pub/sub component supports authentication using all Azure Active Directory mechanisms. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
#### Example Configuration
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
|
|
@ -77,32 +86,31 @@ spec:
|
|||
type: pubsub.azure.eventhubs
|
||||
version: v1
|
||||
metadata:
|
||||
# Azure Authentication Used
|
||||
- name: azureTenantId
|
||||
value: "***"
|
||||
- name: azureClientId
|
||||
value: "***"
|
||||
- name: azureClientSecret
|
||||
value: "***"
|
||||
- name: eventHubNamespace
|
||||
value: "namespace"
|
||||
- name: enableEntityManagement
|
||||
value: "false"
|
||||
## The following four properties are needed only if enableEntityManagement is set to true
|
||||
- name: resourceGroupName
|
||||
value: "test-rg"
|
||||
- name: subscriptionID
|
||||
value: "value of Azure subscription ID"
|
||||
- name: partitionCount
|
||||
value: "1"
|
||||
- name: messageRetentionInDays
|
||||
## Subscriber attributes
|
||||
- name: storageAccountName
|
||||
value: "myeventhubstorage"
|
||||
- name: storageAccountKey
|
||||
value: "112233445566778899"
|
||||
- name: storageContainerName
|
||||
value: "myeventhubstoragecontainer"
|
||||
# Azure Authentication Used
|
||||
- name: azureTenantId
|
||||
value: "***"
|
||||
- name: azureClientId
|
||||
value: "***"
|
||||
- name: azureClientSecret
|
||||
value: "***"
|
||||
- name: eventHubNamespace
|
||||
value: "namespace"
|
||||
- name: enableEntityManagement
|
||||
value: "false"
|
||||
# The following four properties are needed only if enableEntityManagement is set to true
|
||||
- name: resourceGroupName
|
||||
value: "test-rg"
|
||||
- name: subscriptionID
|
||||
value: "value of Azure subscription ID"
|
||||
- name: partitionCount
|
||||
value: "1"
|
||||
- name: messageRetentionInDays
|
||||
# Checkpoint store attributes
|
||||
# In this case, we're using Azure AD to access the storage account too
|
||||
- name: storageAccountName
|
||||
value: "myeventhubstorage"
|
||||
- name: storageContainerName
|
||||
value: "myeventhubstoragecontainer"
|
||||
```
|
||||
|
||||
## Sending multiple messages
|
||||
|
|
@ -115,27 +123,27 @@ Azure Event Hubs supports sending multiple messages in a single operation. To se
|
|||
|
||||
## Create an Azure Event Hub
|
||||
|
||||
Follow the instructions [here](https://docs.microsoft.com/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.
|
||||
Since this implementation uses the Event Processor Host, you will also need an [Azure Storage Account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal). Follow the instructions [here](https://docs.microsoft.com/azure/storage/common/storage-account-keys-manage) to manage the storage account access keys.
|
||||
Follow the instructions on the [documentation](https://docs.microsoft.com/azure/event-hubs/event-hubs-create) to set up Azure Event Hubs.
|
||||
|
||||
See [here](https://docs.microsoft.com/azure/event-hubs/authorize-access-shared-access-signature) on how to get the Event Hubs connection string. Note this is not the Event Hubs namespace.
|
||||
Because this component uses Azure Storage as the checkpoint store, you will also need an [Azure Storage Account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal). Follow the instructions in the [documentation](https://docs.microsoft.com/azure/storage/common/storage-account-keys-manage) to manage the storage account access keys.
|
||||
|
||||
See the [documentation](https://docs.microsoft.com/azure/event-hubs/authorize-access-shared-access-signature) on how to get the Event Hubs connection string (note this is not for the Event Hubs namespace).
|
||||
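If you prefer the command line to the portal, the same resources can be created with the Azure CLI. This is only a rough sketch with placeholder resource names (`my-rg`, `my-eventhub-namespace`, `my-eventhub`, `myeventhubstorage`); adjust the names, location, and SKU to your environment:

```bash
# Create the Event Hubs namespace and an event hub (placeholder names)
az eventhubs namespace create --resource-group my-rg --name my-eventhub-namespace --location eastus
az eventhubs eventhub create --resource-group my-rg --namespace-name my-eventhub-namespace --name my-eventhub

# Create the storage account and container used as the checkpoint store
az storage account create --resource-group my-rg --name myeventhubstorage --location eastus --sku Standard_LRS
az storage container create --account-name myeventhubstorage --name myeventhubstoragecontainer
```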
|
||||
### Create consumer groups for each subscriber
|
||||
|
||||
For every Dapr app that wants to subscribe to events, create an Event Hubs consumer group with the name of the `dapr id`.
|
||||
For example, a Dapr app running on Kubernetes with `dapr.io/app-id: "myapp"` will need an Event Hubs consumer group named `myapp`.
|
||||
For every Dapr app that wants to subscribe to events, create an Event Hubs consumer group with the name of the Dapr app ID. For example, a Dapr app running on Kubernetes with `dapr.io/app-id: "myapp"` will need an Event Hubs consumer group named `myapp`.
|
||||
|
||||
Note: Dapr passes the name of the Consumer group to the EventHub and so this is not supplied in the metadata.
|
||||
Note: Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.
|
||||
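For example, assuming the placeholder resource names used above, a consumer group for a Dapr app with app ID `myapp` could be created with the Azure CLI:

```bash
# The consumer group name must match the Dapr app ID of the subscribing app
az eventhubs eventhub consumer-group create \
  --resource-group my-rg \
  --namespace-name my-eventhub-namespace \
  --eventhub-name my-eventhub \
  --name myapp
```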
|
||||
## Entity Management
|
||||
|
||||
When entity management is enabled in configuration, as long as the application has the right role and permissions to manipulate the Event Hub namespace, creation of Event Hubs and consumer groups can be done on the fly.
|
||||
When entity management is enabled in the metadata, as long as the application has the right role and permissions to manipulate the Event Hub namespace, Dapr can automatically create the Event Hub and consumer group for you.
|
||||
|
||||
The Event Hub name is the `topic` field in the incoming request to publish or subscribe to, while the consumer group name is the name of the `dapr app` which subscribes to a given Event Hub. For example, a Dapr app running on Kubernetes with name `dapr.io/app-id: "myapp"` requires an Event Hubs consumer group named `myapp`.
|
||||
The Event Hub name is the `topic` field in the incoming request to publish or subscribe to, while the consumer group name is the name of the Dapr app which subscribes to a given Event Hub. For example, a Dapr app running on Kubernetes with name `dapr.io/app-id: "myapp"` requires an Event Hubs consumer group named `myapp`.
|
||||
|
||||
Entity management is only possible when using [Azure Authentication]({{< ref "authenticating-azure.md" >}}) mechanisms and not via `connectionString`.
|
||||
Entity management is only possible when using [Azure AD Authentication]({{< ref "authenticating-azure.md" >}}) and not using a connection string.
|
||||
|
||||
Note: Dapr passes the name of the Consumer group to the EventHub and this is not supplied in the metadata.
|
||||
> Dapr passes the name of the consumer group to the Event Hub, so this is not supplied in the metadata.
|
||||
|
||||
## Subscribing to Azure IoT Hub Events
|
||||
|
||||
|
|
@ -154,7 +162,7 @@ The device-to-cloud events created by Azure IoT Hub devices will contain additio
|
|||
|
||||
For example, the headers of a delivered HTTP subscription message would contain:
|
||||
|
||||
```nodejs
|
||||
```js
|
||||
{
|
||||
'user-agent': 'fasthttp',
|
||||
'host': '127.0.0.1:3000',
|
||||
|
|
@ -174,6 +182,7 @@ For example, the headers of a delivered HTTP subscription message would contain:
|
|||
```
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
- [Pub/Sub building block]({{< ref pubsub >}})
|
||||
|
|
|
|||
|
|
@ -1,14 +1,18 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Azure Service Bus"
|
||||
linkTitle: "Azure Service Bus"
|
||||
description: "Detailed documentation on the Azure Service Bus pubsub component"
|
||||
title: "Azure Service Bus Queues"
|
||||
linkTitle: "Azure Service Bus Queues"
|
||||
description: "Detailed documentation on the Azure Service Bus Queues pubsub component"
|
||||
aliases:
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus/"
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus-queues/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
To setup Azure Service Bus pubsub create a component of type `pubsub.azure.servicebus`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
To set up Azure Service Bus Queues pubsub, create a component of type `pubsub.azure.servicebus.queues`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
> This component uses queues on Azure Service Bus; see the official documentation for the differences between [topics and queues](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-queues-topics-subscriptions).
|
||||
> For using topics, see the [Azure Service Bus Topics pubsub component]({{< ref "setup-azure-servicebus-topics" >}}).
|
||||
|
||||
### Connection String Authentication
|
||||
|
||||
|
|
@ -18,13 +22,12 @@ kind: Component
|
|||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus
|
||||
type: pubsub.azure.servicebus.queues
|
||||
version: v1
|
||||
metadata:
|
||||
- name: connectionString # Required when not using Azure Authentication.
|
||||
# Required when not using Azure AD Authentication
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
# - name: consumerID # Optional: defaults to the app's own ID
|
||||
# value: "{identifier}"
|
||||
# - name: timeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: handlerTimeoutInSec # Optional
|
||||
|
|
@ -53,12 +56,10 @@ spec:
|
|||
# value: 10
|
||||
# - name: publishMaxRetries # Optional
|
||||
# value: 5
|
||||
# - name: publishInitialRetryInternalInMs # Optional
|
||||
# - name: publishInitialRetryIntervalInMs # Optional
|
||||
# value: 500
|
||||
```
|
||||
|
||||
> __NOTE:__ The above settings are shared across all topics that use this component.
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
|
@ -67,28 +68,27 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| `connectionString` | Y | Shared access policy connection-string for the Service Bus. Required unless using Azure AD authentication. | "`Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}`"
|
||||
| `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Azure AD authentication. | See example above
|
||||
| `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
|
||||
| `consumerID` | N | Consumer ID (a.k.a. consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; i.e., a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. |
|
||||
| `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30`
|
||||
| `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30`
|
||||
| `disableEntityManagement` | N | When set to true, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `maxDeliveryCount` | N |Defines the number of attempts the server will make to deliver a message. Default set by server| `10`
|
||||
| `lockDurationInSec` | N |Defines the length in seconds that a message will be locked for before expiring. Default set by server | `30`
|
||||
| `lockRenewalInSec` | N |Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N |Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `10000` | `2000`
|
||||
| `maxConcurrentHandlers` | N |Defines the maximum number of concurrent message handlers. | `10`
|
||||
| `defaultMessageTimeToLiveInSec` | N |Default message time to live. | `10`
|
||||
| `autoDeleteOnIdleInSec` | N |Time in seconds to wait before auto deleting idle subscriptions. | `3600`
|
||||
| `lockRenewalInSec` | N | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1000` | `2000`
|
||||
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
|
||||
| `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
|
||||
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
|
||||
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
|
||||
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
|
||||
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
|
||||
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
|
||||
| `maxRetriableErrorsPerSec` | N | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10`
|
||||
| `publishMaxRetries` | N | The maximum number of retries when Azure Service Bus responds with "too busy" in order to throttle messages. Default: `5` | `5`
|
||||
| `publishInitialRetryInternalInMs` | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttle messages. Defaults: `500` | `500`
|
||||
| `publishInitialRetryIntervalInMs` | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
|
||||
The Azure Service Bus pubsub component supports authentication using all Azure Active Directory mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
The Azure Service Bus Queues pubsub component supports authentication using all Azure Active Directory mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
#### Example Configuration
|
||||
|
||||
|
|
@ -98,7 +98,7 @@ kind: Component
|
|||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus
|
||||
type: pubsub.azure.servicebus.queues
|
||||
version: v1
|
||||
metadata:
|
||||
- name: namespaceName
|
||||
|
|
@ -132,7 +132,7 @@ To set Azure Service Bus metadata when sending a message, set the query paramete
|
|||
- `metadata.ScheduledEnqueueTimeUtc`
|
||||
- `metadata.ReplyToSessionId`
|
||||
|
||||
> **NOTE:** The `metadata.MessageId` property does not set the `id` property of the cloud event and should be treated in isolation.
|
||||
> **Note:** The `metadata.MessageId` property does not set the `id` property of the cloud event returned by Dapr and should be treated in isolation.
|
||||
|
||||
### Receiving a message with metadata
|
||||
|
||||
|
|
@ -147,7 +147,7 @@ In addition to the [settable metadata listed above](#sending-a-message-with-meta
|
|||
|
||||
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
|
||||
|
||||
> Note that all times are populated by the server and are not adjusted for clock skews.
|
||||
> Note: all times are populated by the server and are not adjusted for clock skews.
|
||||
|
||||
## Sending and receiving multiple messages
|
||||
|
||||
|
|
@ -163,15 +163,15 @@ To set the metadata for bulk publish operation, set the query parameters on the
|
|||
|
||||
### Configuring bulk subscribe
|
||||
|
||||
When subscribing to a topic, you can configure `bulkSubscribe` options. Refer to [Subscription methods]({{< ref subscription-methods >}}) for more details. (TODO: Add link to bulk subscribe docs)
|
||||
When subscribing to a topic, you can configure `bulkSubscribe` options. Refer to [Subscribing messages in bulk]({{< ref "pubsub-bulk#subscribing-messages-in-bulk" >}}) for more details.
|
||||
|
||||
| Configuration | Default |
|
||||
|---------------|---------|
|
||||
| `maxMessagesCount` | `100` |
|
||||
|
||||
## Create an Azure Service Bus
|
||||
## Create an Azure Service Bus broker for queues
|
||||
|
||||
Follow the instructions [here](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
|
||||
Follow the instructions [here](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-portal) on setting up Azure Service Bus Queues.
|
||||
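Alternatively, as a rough sketch with placeholder names (`my-rg`, `my-servicebus-namespace`, `orders`), the namespace and a queue can be created with the Azure CLI; the queue name corresponds to the topic your Dapr app publishes or subscribes to:

```bash
az servicebus namespace create --resource-group my-rg --name my-servicebus-namespace --location eastus
az servicebus queue create --resource-group my-rg --namespace-name my-servicebus-namespace --name orders
```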
|
||||
## Related links
|
||||
|
||||
|
|
@ -0,0 +1,166 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Azure Service Bus Topics"
|
||||
linkTitle: "Azure Service Bus Topics"
|
||||
description: "Detailed documentation on the Azure Service Bus Topics pubsub component"
|
||||
aliases:
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus-topics/"
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up Azure Service Bus Topics pubsub, create a component of type `pubsub.azure.servicebus.topics`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
> This component uses topics on Azure Service Bus; see the official documentation for the differences between [topics and queues](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-queues-topics-subscriptions).
|
||||
> For using queues, see the [Azure Service Bus Queues pubsub component]({{< ref "setup-azure-servicebus-queues" >}}).
|
||||
|
||||
### Connection String Authentication
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus.topics
|
||||
version: v1
|
||||
metadata:
|
||||
# Required when not using Azure AD Authentication
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
# - name: consumerID # Optional: defaults to the app's own ID
|
||||
# value: "{identifier}"
|
||||
# - name: timeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: handlerTimeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: disableEntityManagement # Optional
|
||||
# value: "false"
|
||||
# - name: maxDeliveryCount # Optional
|
||||
# value: 3
|
||||
# - name: lockDurationInSec # Optional
|
||||
# value: 60
|
||||
# - name: lockRenewalInSec # Optional
|
||||
# value: 20
|
||||
# - name: maxActiveMessages # Optional
|
||||
# value: 10000
|
||||
# - name: maxConcurrentHandlers # Optional
|
||||
# value: 10
|
||||
# - name: defaultMessageTimeToLiveInSec # Optional
|
||||
# value: 10
|
||||
# - name: autoDeleteOnIdleInSec # Optional
|
||||
# value: 3600
|
||||
# - name: minConnectionRecoveryInSec # Optional
|
||||
# value: 2
|
||||
# - name: maxConnectionRecoveryInSec # Optional
|
||||
# value: 300
|
||||
# - name: maxRetriableErrorsPerSec # Optional
|
||||
# value: 10
|
||||
# - name: publishMaxRetries # Optional
|
||||
# value: 5
|
||||
# - name: publishInitialRetryIntervalInMs # Optional
|
||||
# value: 500
|
||||
```
|
||||
|
||||
> __NOTE:__ The above settings are shared across all topics that use this component.
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Azure AD authentication. | See example above
|
||||
| `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
|
||||
| `consumerID` | N | Consumer ID (a.k.a. consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; i.e., a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. |
|
||||
| `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30`
|
||||
| `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30`
|
||||
| `lockRenewalInSec` | N | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1000` | `2000`
|
||||
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
|
||||
| `disableEntityManagement` | N | When set to true, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
|
||||
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
|
||||
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
|
||||
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
|
||||
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
|
||||
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
|
||||
| `maxRetriableErrorsPerSec` | N | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10`
|
||||
| `publishMaxRetries` | N | The maximum number of retries when Azure Service Bus responds with "too busy" in order to throttle messages. Default: `5` | `5`
|
||||
| `publishInitialRetryIntervalInMs` | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
|
||||
The Azure Service Bus Topics pubsub component supports authentication using all Azure Active Directory mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
#### Example Configuration
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus.topics
|
||||
version: v1
|
||||
metadata:
|
||||
- name: namespaceName
|
||||
# Required when using Azure Authentication.
|
||||
# Must be a fully-qualified domain name
|
||||
value: "servicebusnamespace.servicebus.windows.net"
|
||||
- name: azureTenantId
|
||||
value: "***"
|
||||
- name: azureClientId
|
||||
value: "***"
|
||||
- name: azureClientSecret
|
||||
value: "***"
|
||||
```
|
||||
|
||||
## Message metadata
|
||||
|
||||
Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message.
|
||||
|
||||
### Sending a message with metadata
|
||||
|
||||
To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented [here](https://docs.dapr.io/reference/api/pubsub_api/#metadata).
|
||||
|
||||
- `metadata.MessageId`
|
||||
- `metadata.CorrelationId`
|
||||
- `metadata.SessionId`
|
||||
- `metadata.Label`
|
||||
- `metadata.ReplyTo`
|
||||
- `metadata.PartitionKey`
|
||||
- `metadata.To`
|
||||
- `metadata.ContentType`
|
||||
- `metadata.ScheduledEnqueueTimeUtc`
|
||||
- `metadata.ReplyToSessionId`
|
||||
|
||||
> **Note:** The `metadata.MessageId` property does not set the `id` property of the cloud event returned by Dapr and should be treated in isolation.
|
||||
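For example, here is a minimal sketch of publishing through the Dapr HTTP API with some of these metadata query parameters set; the pubsub component name (`servicebus-pubsub`), topic (`orders`), and sidecar HTTP port (`3500`) are placeholders:

```bash
curl -X POST \
  "http://localhost:3500/v1.0/publish/servicebus-pubsub/orders?metadata.MessageId=order-123&metadata.Label=neworder" \
  -H "Content-Type: application/json" \
  -d '{"orderId": "123", "status": "created"}'
```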
|
||||
### Receiving a message with metadata
|
||||
|
||||
When Dapr calls your application, it will attach Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata.
|
||||
In addition to the [settable metadata listed above](#sending-a-message-with-metadata), you can also access the following read-only message metadata.
|
||||
|
||||
- `metadata.DeliveryCount`
|
||||
- `metadata.LockedUntilUtc`
|
||||
- `metadata.LockToken`
|
||||
- `metadata.EnqueuedTimeUtc`
|
||||
- `metadata.SequenceNumber`
|
||||
|
||||
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
|
||||
|
||||
> Note: all times are populated by the server and are not adjusted for clock skews.
|
||||
|
||||
## Create an Azure Service Bus broker for topics
|
||||
|
||||
Follow the instructions [here](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Pub/Sub building block]({{< ref pubsub >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
|
|
@ -0,0 +1,111 @@
|
|||
---
|
||||
type: docs
|
||||
title: "KubeMQ"
|
||||
linkTitle: "KubeMQ"
|
||||
description: "Detailed documentation on the KubeMQ pubsub component"
|
||||
aliases:
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-kubemq/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up KubeMQ pub/sub, create a component of type `pubsub.kubemq`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pub/sub configuration.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: kubemq-pubsub
|
||||
spec:
|
||||
type: pubsub.kubemq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: address
|
||||
value: localhost:50000
|
||||
- name: store
|
||||
value: false
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|-------------------|:--------:|-----------------------------------------------------------------------------------------------------------------------------|----------------------------------------|
|
||||
| address | Y | Address of the KubeMQ server | `"localhost:50000"` |
|
||||
| store | N | Type of pub/sub: `true` for persisted (EventsStore), `false` for in-memory (Events) | `true` or `false` (default is `false`) |
|
||||
| clientID | N | Client ID name used for the connection | `sub-client-12345` |
|
||||
| authToken | N | Auth JWT token for the connection. Check out [KubeMQ Authentication](https://docs.kubemq.io/learn/access-control/authentication) | `ew...` |
|
||||
| group | N | Subscriber group for load balancing | `g1` |
|
||||
| disableReDelivery | N | Set whether a message should be re-delivered if the application returns an error | `true` or `false` (default is `false`) |
|
||||
|
||||
## Create a KubeMQ broker
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
1. Obtain KubeMQ Key by visiting [https://account.kubemq.io/login/register](https://account.kubemq.io/login/register) and register for a key.
|
||||
2. Wait for an email confirmation with your key.
|
||||
|
||||
You can run a KubeMQ broker with Docker:
|
||||
|
||||
```bash
|
||||
docker run -d -p 8080:8080 -p 50000:50000 -p 9090:9090 -e KUBEMQ_TOKEN=<your-key> kubemq/kubemq
|
||||
```
|
||||
You can then interact with the server using the client port: `localhost:50000`
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
1. Obtain KubeMQ Key by visiting [https://account.kubemq.io/login/register](https://account.kubemq.io/login/register) and register for a key.
|
||||
2. Wait for an email confirmation with your key.
|
||||
|
||||
Then run the following kubectl commands:
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://deploy.kubemq.io/init
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://deploy.kubemq.io/key/<your-key>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Install KubeMQ CLI
|
||||
Go to [KubeMQ CLI](https://github.com/kubemq-io/kubemqctl/releases) and download the latest version of the CLI.
|
||||
|
||||
## Browse KubeMQ Dashboard
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
<!-- IGNORE_LINKS -->
|
||||
Open a browser and navigate to [http://localhost:8080](http://localhost:8080)
|
||||
<!-- END_IGNORE -->
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
With KubeMQCTL installed, run the following command:
|
||||
|
||||
```bash
|
||||
kubemqctl get dashboard
|
||||
```
|
||||
Or, with kubectl installed, run port-forward command:
|
||||
|
||||
```bash
|
||||
kubectl port-forward svc/kubemq-cluster-api -n kubemq 8080:8080
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## KubeMQ Documentation
|
||||
Visit [KubeMQ Documentation](https://docs.kubemq.io/) for more information.
|
||||
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
- [Pub/sub building block]({{< ref pubsub >}})
|
||||
|
|
@ -21,16 +21,15 @@ spec:
|
|||
type: pubsub.mqtt3
|
||||
version: v1
|
||||
metadata:
|
||||
- name: url
|
||||
value: "tcp://[username][:password]@host.domain[:port]"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backOffMaxRetries
|
||||
value: "0"
|
||||
- name: url
|
||||
value: "tcp://[username][:password]@host.domain[:port]"
|
||||
# Optional
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: qos
|
||||
value: "1"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -41,18 +40,18 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| url | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
|
||||
| consumerID | N | The client ID used to connect to the MQTT broker for the consumer connection. Defaults to the Dapr app ID.<br>Note: if `producerID` is not set, `-consumer` is appended to this value for the consumer connection | `"myMqttClientApp"`
|
||||
| producerID | N | The client ID used to connect to the MQTT broker for the producer connection. Defaults to `{consumerID}-producer`. | `"myMqttProducerApp"`
|
||||
| qos | N | Indicates the Quality of Service Level (QoS) of the message ([more info](https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/)). Defaults to `1`. |`0`, `1`, `2`
|
||||
| retain | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| cleanSession | N | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"` ([more info](http://www.steves-internet-guide.com/mqtt-clean-sessions-example/)). Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
|
||||
| `url` | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
|
||||
| `consumerID` | N | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | `"myMqttClientApp"`
|
||||
| `retain` | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| `cleanSession` | N | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"` ([more info](http://www.steves-internet-guide.com/mqtt-clean-sessions-example/)). Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| `caCert` | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below
|
||||
| `clientCert` | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | See example below
|
||||
| `clientKey` | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | See example below
|
||||
| `qos` | N | Indicates the Quality of Service Level (QoS) of the message ([more info](https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/)). Defaults to `1`. |`0`, `1`, `2`
|
||||
|
||||
### Communication using TLS
|
||||
|
||||
To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
|
||||
To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -63,26 +62,30 @@ spec:
|
|||
type: pubsub.mqtt3
|
||||
version: v1
|
||||
metadata:
|
||||
- name: url
|
||||
value: "ssl://host.domain[:port]"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
- name: caCert
|
||||
value: ${{ myLoadedCACert }}
|
||||
- name: clientCert
|
||||
value: ${{ myLoadedClientCert }}
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myMqttClientKey
|
||||
key: myMqttClientKey
|
||||
auth:
|
||||
secretStore: <SECRET_STORE_NAME>
|
||||
- name: url
|
||||
value: "ssl://host.domain[:port]"
|
||||
# TLS configuration
|
||||
- name: caCert
|
||||
value: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
- name: clientCert
|
||||
value: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myMqttClientKey
|
||||
key: myMqttClientKey
|
||||
# Optional
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: qos
|
||||
value: 1
|
||||
```
|
||||
|
||||
Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
|
||||
|
|
@ -102,36 +105,34 @@ spec:
|
|||
metadata:
|
||||
- name: consumerID
|
||||
value: "{uuid}"
|
||||
- name: cleanSession
|
||||
value: "true"
|
||||
- name: url
|
||||
value: "tcp://admin:public@localhost:1883"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "true"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Note that in the case, the value of the consumer ID is random every time Dapr restarts, so we are setting `cleanSession` to true as well.
|
||||
Note that in this case, the value of the consumer ID is random every time Dapr restarts, so you should set `cleanSession` to `true` as well.
|
||||
|
||||
## Create an MQTT3 broker
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
You can run a MQTT broker [locally using Docker](https://hub.docker.com/_/eclipse-mosquitto):
|
||||
You can run an MQTT broker like emqx [locally using Docker](https://hub.docker.com/_/emqx):
|
||||
|
||||
```bash
|
||||
docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6
|
||||
docker run -d -p 1883:1883 --name mqtt emqx:latest
|
||||
```
|
||||
|
||||
You can then interact with the server using the client port: `mqtt://localhost:1883`
|
||||
You can then interact with the server using the client port: `tcp://localhost:1883`
|
||||
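As a quick sanity check of the broker (assuming the Mosquitto client tools are installed; the topic name is arbitrary):

```bash
# Publish a test message to the local broker
mosquitto_pub -h localhost -p 1883 -t daprtest -m "hello"
```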
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
|
@ -156,15 +157,12 @@ spec:
|
|||
spec:
|
||||
containers:
|
||||
- name: mqtt
|
||||
image: eclipse-mosquitto:1.6
|
||||
image: emqx:latest
|
||||
imagePullPolicy: IfNotPresent
|
||||
ports:
|
||||
- name: default
|
||||
containerPort: 1883
|
||||
protocol: TCP
|
||||
- name: websocket
|
||||
containerPort: 9001
|
||||
protocol: TCP
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
|
@ -181,10 +179,6 @@ spec:
|
|||
targetPort: default
|
||||
name: default
|
||||
protocol: TCP
|
||||
- port: 9001
|
||||
targetPort: websocket
|
||||
name: websocket
|
||||
protocol: TCP
|
||||
```
|
||||
|
||||
You can then interact with the server using the client port: `tcp://mqtt-broker.default.svc.cluster.local:1883`
|
||||
|
|
|
|||
|
|
@ -38,6 +38,28 @@ spec:
|
|||
value: "-1"
|
||||
- name: disableBatching
|
||||
value: "false"
|
||||
- name: <topic-name>.jsonschema # sets a json schema validation for the configured topic
|
||||
value: |
|
||||
{
|
||||
"type": "record",
|
||||
"name": "Example",
|
||||
"namespace": "test",
|
||||
"fields": [
|
||||
{"name": "ID","type": "int"},
|
||||
{"name": "Name","type": "string"}
|
||||
]
|
||||
}
|
||||
- name: <topic-name>.avroschema # sets an avro schema validation for the configured topic
|
||||
value: |
|
||||
{
|
||||
"type": "record",
|
||||
"name": "Example",
|
||||
"namespace": "test",
|
||||
"fields": [
|
||||
{"name": "ID","type": "int"},
|
||||
{"name": "Name","type": "string"}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
|
@ -62,6 +84,8 @@ spec:
|
|||
| batchingMaxPublishDelay | N | batchingMaxPublishDelay set the time period within which the messages sent will be batched,if batch messages are enabled. If set to a non zero value, messages will be queued until this time interval or batchingMaxMessages (see below) or batchingMaxSize (see below). There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that is processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default: `"10ms"` | `"10ms"`, `"10"`|
|
||||
| batchingMaxMessages | N | batchingMaxMessages set the maximum number of messages permitted in a batch.If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxSize (see below) has been reached or the batch interval has elapsed. Default: `"1000"` | `"1000"`|
|
||||
| batchingMaxSize | N | batchingMaxSize sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxMessages (see above) has been reached or the batch interval has elapsed. Default: `"128KB"` | `"131072"`|
|
||||
| <topic-name>.jsonschema | N | Enforces JSON schema validation for the configured topic. |
|
||||
| <topic-name>.avroschema | N | Enforces Avro schema validation for the configured topic. |
|
||||
|
||||
### Delay queue
|
||||
|
||||
|
|
|
|||
|
|
@ -73,6 +73,65 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| maxLen | N | The maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1000"` |
|
||||
| maxLenBytes | N | Maximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1048576"` |
|
||||
| exchangeKind | N | Exchange kind of the rabbitmq exchange. Defaults to `"fanout"`. | `"fanout"`,`"topic"` |
|
||||
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
|
||||
|
||||
|
||||
## Communication using TLS
|
||||
|
||||
To configure communication using TLS, ensure that the RabbitMQ nodes have TLS enabled and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: rabbitmq-pubsub
|
||||
spec:
|
||||
type: pubsub.rabbitmq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: host
|
||||
value: "amqps://localhost:5671"
|
||||
- name: consumerID
|
||||
value: myapp
|
||||
- name: durable
|
||||
value: false
|
||||
- name: deletedWhenUnused
|
||||
value: false
|
||||
- name: autoAck
|
||||
value: false
|
||||
- name: deliveryMode
|
||||
value: 0
|
||||
- name: requeueInFailure
|
||||
value: false
|
||||
- name: prefetchCount
|
||||
value: 0
|
||||
- name: reconnectWait
|
||||
value: 0
|
||||
- name: concurrencyMode
|
||||
value: parallel
|
||||
- name: publisherConfirm
|
||||
value: false
|
||||
- name: enableDeadLetter # Optional enable dead Letter or not
|
||||
value: true
|
||||
- name: maxLen # Optional max message count in a queue
|
||||
value: 3000
|
||||
- name: maxLenBytes # Optional maximum length in bytes of a queue.
|
||||
value: 10485760
|
||||
- name: exchangeKind
|
||||
value: fanout
|
||||
- name: caCert
|
||||
value: ${{ myLoadedCACert }}
|
||||
- name: clientCert
|
||||
value: ${{ myLoadedClientCert }}
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myRabbitMQClientKey
|
||||
key: myRabbitMQClientKey
|
||||
```
|
||||
|
||||
Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
|
||||
|
||||
### Enabling message delivery retries
|
||||
|
||||
|
|
|
|||
|
|
@ -43,6 +43,8 @@ spec:
|
|||
value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Required.
|
||||
- name: entity_kind
|
||||
value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
|
||||
- name: noindex
|
||||
value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -63,6 +65,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| auth_provider_x509_cert_url | Y | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
|
||||
| client_x509_cert_url | Y | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
|
||||
| entity_kind | N | The entity name in Firestore. Defaults to `"DaprState"` | `"DaprState"`
|
||||
| noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to `"false"` | `"true"`
|
||||
|
||||
## Setup GCP Firestore
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,86 @@
|
|||
---
|
||||
type: docs
|
||||
title: "SQLite"
|
||||
linkTitle: "SQLite"
|
||||
description: Detailed information on the SQLite state store component
|
||||
aliases:
|
||||
- "/operations/components/setup-state-store/supported-state-stores/setup-sqlite/"
|
||||
---
|
||||
|
||||
This component allows using SQLite 3 as a state store for Dapr.
|
||||
|
||||
> The component is currently compiled with SQLite version 3.40.1.
|
||||
|
||||
## Create a Dapr component
|
||||
|
||||
Create a file called `sqlite.yaml`, paste the following, and replace the `<CONNECTION STRING>` value with your connection string, which is the path to a file on disk.
|
||||
|
||||
If you want to also configure SQLite to store actors, add the `actorStateStore` option as in the example below.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: state.sqlite
|
||||
version: v1
|
||||
metadata:
|
||||
# Connection string
|
||||
- name: connectionString
|
||||
value: "data.db"
|
||||
# Timeout for database operations, in seconds (optional)
|
||||
#- name: timeoutInSeconds
|
||||
# value: 20
|
||||
# Name of the table where to store the state (optional)
|
||||
#- name: tableName
|
||||
# value: "state"
|
||||
# Cleanup interval in seconds, to remove expired rows (optional)
|
||||
#- name: cleanupIntervalInSeconds
|
||||
# value: 3600
|
||||
# Uncomment this if you wish to use SQLite as a state store for actors (optional)
|
||||
#- name: actorStateStore
|
||||
# value: "true"
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| `connectionString` | Y | The connection string for the SQLite database. See below for more details. | `"path/to/data.db"`, `"file::memory:?cache=shared"`
|
||||
| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20` | `30`
|
||||
| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. | `"state"`
|
||||
| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
|
||||
| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
|
||||
|
||||
The **`connectionString`** parameter configures how to open the SQLite database.
|
||||
|
||||
- Normally, this is the path to a file on disk, relative to the current working directory, or absolute. For example: `"data.db"` (relative to the working directory) or `"/mnt/data/mydata.db"`.
|
||||
- The path is interpreted by the SQLite library, so it's possible to pass additional options to the SQLite driver using "URI options" if the path begins with `file:`. For example: `"file:path/to/data.db?mode=ro"` opens the database at path `path/to/data.db` in read-only mode. [Refer to the SQLite documentation for all supported URI options](https://www.sqlite.org/uri.html).
|
||||
- The special case `":memory:"` launches the component backed by an in-memory SQLite database. This database is not persisted on disk, not shared across multiple Dapr instances, and all data is lost when the Dapr sidecar is stopped. When using an in-memory database, you should always set the `?cache=shared` URI option: `"file::memory:?cache=shared"`
|
||||
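As a quick check (a sketch assuming the default `data.db` path in the working directory, the default `state` table name, and the `sqlite3` CLI installed), you can inspect the database that Dapr creates:

```bash
# List the tables and show the schema of the state table
sqlite3 data.db ".tables"
sqlite3 data.db ".schema state"
```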
|
||||
## Advanced
|
||||
|
||||
### TTLs and cleanups
|
||||
|
||||
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate when the data should be considered "expired".
|
||||
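For example, here is a minimal sketch of saving a record with a 2-minute TTL through the Dapr state API; the state store component name (`sqlite-store`) and sidecar HTTP port (`3500`) are placeholders:

```bash
curl -X POST http://localhost:3500/v1.0/state/sqlite-store \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "order1",
          "value": "pending",
          "metadata": { "ttlInSeconds": "120" }
        }
      ]'
```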
|
||||
Because SQLite doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
|
||||
|
||||
The `cleanupIntervalInSeconds` metadata property sets the expired records deletion interval, which defaults to 3600 seconds (that is, 1 hour).
|
||||
|
||||
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value, for example `300` (300 seconds, or 5 minutes).
|
||||
- If you do not plan to use TTLs with Dapr and the SQLite state store, you should consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database.
|
||||
|
||||
The `expiration_time` column in the state table, where the expiration date for records is stored, **does not have an index by default**, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is `state` (the default), you can use this query:
|
||||
|
||||
```sql
|
||||
CREATE INDEX idx_expiration_time
|
||||
ON state (expiration_time);
|
||||
```
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
|
||||
- [State management building block]({{< ref state-management >}})
|
||||
|
|
@ -96,9 +96,9 @@
|
|||
output: true
|
||||
- component: Redis
|
||||
link: redis
|
||||
state: Beta
|
||||
state: Stable
|
||||
version: v1
|
||||
since: "1.7"
|
||||
since: "1.9"
|
||||
features:
|
||||
input: false
|
||||
output: true
|
||||
|
|
@ -142,3 +142,11 @@
|
|||
features:
|
||||
input: false
|
||||
output: true
|
||||
- component: KubeMQ
|
||||
link: kubemq
|
||||
state: Beta
|
||||
version: v1
|
||||
since: "1.10"
|
||||
features:
|
||||
input: true
|
||||
output: true
|
||||
|
|
|
|||
|
|
@ -1,5 +1,8 @@
|
|||
- component: AWS SNS/SQS
|
||||
link: setup-aws-snssqs
|
||||
state: Beta
|
||||
state: Stable
|
||||
version: v1
|
||||
since: "1.6"
|
||||
features:
|
||||
bulkPublish: false
|
||||
bulkSubscribe: false
|
||||
|
|
|
|||
|
|
@ -3,8 +3,22 @@
|
|||
state: Stable
|
||||
version: v1
|
||||
since: "1.8"
|
||||
- component: Azure Service Bus
|
||||
link: setup-azure-servicebus
|
||||
features:
|
||||
bulkPublish: true
|
||||
bulkSubscribe: false
|
||||
- component: Azure Service Bus Topics
|
||||
link: setup-azure-servicebus-topics
|
||||
state: Stable
|
||||
version: v1
|
||||
since: "1.0"
|
||||
features:
|
||||
bulkPublish: true
|
||||
bulkSubscribe: true
|
||||
- component: Azure Service Bus Queues
|
||||
link: setup-azure-servicebus-queues
|
||||
state: Beta
|
||||
version: v1
|
||||
since: "1.10"
|
||||
features:
|
||||
bulkPublish: true
|
||||
bulkSubscribe: true
|
||||
|
|
|
|||
|
|
@ -3,3 +3,6 @@
|
|||
state: Alpha
|
||||
version: v1
|
||||
since: "1.0"
|
||||
features:
|
||||
bulkPublish: false
|
||||
bulkSubscribe: false
|
||||
|
|
|
|||
|
|
@ -3,53 +3,94 @@
state: Deprecated
version: v1
since: "1.9"
- component: In Memory
features:
bulkPublish: false
bulkSubscribe: false
- component: In-memory
link: setup-inmemory
state: Beta
version: v1
since: "1.7"
features:
bulkPublish: false
bulkSubscribe: false
- component: Apache Kafka
link: setup-apache-kafka
state: Stable
version: v1
since: "1.5"
features:
bulkPublish: true
bulkSubscribe: true
- component: Redis Streams
link: setup-redis-pubsub
state: Stable
version: v1
since: "1.0"
features:
bulkPublish: false
bulkSubscribe: false
- component: JetStream
link: setup-jetstream
state: Alpha
version: v1
since: "1.4"
- component: Pulsar
link: setup-pulsar
state: Beta
version: v1
since: "1.7"
since: "1.10"
features:
bulkPublish: false
bulkSubscribe: false
- component: Pulsar
link: setup-pulsar
state: Stable
version: v1
since: "1.10"
features:
bulkPublish: false
bulkSubscribe: false
- component: MQTT3
link: setup-mqtt3
state: Stable
version: v1
since: "1.7"
features:
bulkPublish: false
bulkSubscribe: false
- component: NATS Streaming
link: setup-nats-streaming
state: Beta
version: v1
since: "1.0"
features:
bulkPublish: false
bulkSubscribe: false
- component: RabbitMQ
link: setup-rabbitmq
state: Stable
version: v1
since: "1.7"
features:
bulkPublish: false
bulkSubscribe: false
- component: RocketMQ
link: setup-rocketmq
state: Alpha
version: v1
since: "1.8"
features:
bulkPublish: false
bulkSubscribe: false
- component: Solace-AMQP
link: setup-solace-amqp
state: Alpha
state: Beta
version: v1
since: "1.10"
features:
bulkPublish: false
bulkSubscribe: false
- component: KubeMQ
link: setup-kubemq
state: Beta
version: v1
since: "1.10"
features:
bulkPublish: false
bulkSubscribe: false
@ -33,9 +33,9 @@
query: false
- component: Azure Table Storage
link: setup-azure-tablestorage
state: Beta
state: Stable
version: v1
since: "1.7"
since: "1.9"
features:
crud: true
transactions: false
@ -22,9 +22,9 @@
query: false
- component: CockroachDB
link: setup-cockroachdb
state: Beta
state: Stable
version: v1
since: "1.7"
since: "1.10"
features:
crud: true
transactions: true
@ -64,7 +64,7 @@
etag: false
ttl: false
query: false
- component: In Memory
- component: In-memory
link: setup-inmemory
state: Developer-only
version: v1
@ -110,9 +110,9 @@
query: true
- component: MySQL
link: setup-mysql
state: Beta
state: Stable
version: v1
since: "1.7"
since: "1.10"
features:
crud: true
transactions: true
@ -163,6 +163,17 @@
etag: false
ttl: false
query: false
- component: SQLite
link: setup-sqlite
state: Beta
version: v1
since: "1.10"
features:
crud: true
transactions: true
etag: true
ttl: true
query: false
- component: Zookeeper
link: setup-zookeeper
state: Alpha
@ -10,6 +10,8 @@
<table width="100%">
<tr>
<th>Component</th>
<th>Bulk Publish</th>
<th>Bulk Subscribe</th>
<th>Status</th>
<th>Component version</th>
<th>Since runtime version</th>
@ -18,6 +20,20 @@
<tr>
<td><a href="/reference/components-reference/supported-pubsub/{{ .link }}/">{{ .component }}</a>
</td>
<td align="center">
{{ if .features.bulkPublish }}
<span role="img" aria-label="Bulk publish supported">✅</span>
{{else}}
<img src="/images/emptybox.png" alt="Bulk publish not supported" aria-label="Bulk publish not supported">
{{ end }}
</td>
<td align="center">
{{ if .features.bulkSubscribe }}
<span role="img" aria-label="Bulk subscribe supported">✅</span>
{{else}}
<img src="/images/emptybox.png" alt="Bulk subscribe not supported" aria-label="Bulk subscribe not supported">
{{ end }}
</td>
<td>{{ .state }}</td>
<td>{{ .version }}</td>
<td>{{ .since }}</td>
Binary image changes: 4 existing images replaced with smaller versions and 12 new images added.