Merge branch 'v1.4' into add_js_sdk_to_docs

This commit is contained in:
Ori Zohar 2021-09-21 09:22:06 -07:00 committed by GitHub
commit acee8b57fb
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
92 changed files with 2310 additions and 618 deletions

View File

@ -3,11 +3,11 @@ name: Azure Static Web App Root
on:
push:
branches:
- v1.3
- v1.4
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.3
- v1.4
jobs:
build_and_deploy_job:

View File

@ -1,13 +1,13 @@
name: Azure Static Web App v1.3
name: Azure Static Web App v1.4
on:
push:
branches:
- v1.3
- v1.4
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.3
- v1.4
jobs:
build_and_deploy_job:
@ -27,7 +27,7 @@ jobs:
HUGO_ENV: production
HUGO_VERSION: "0.74.3"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_3 }}
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_4 }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
@ -48,6 +48,6 @@ jobs:
id: closepullrequest
uses: Azure/static-web-apps-deploy@v0.0.1-preview
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_3 }}
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_4 }}
skip_deploy_on_missing_secrets: true
action: "close"

View File

@ -14,8 +14,8 @@ The following branches are currently maintained:
| Branch | Website | Description |
|--------|---------|-------------|
| [v1.3](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here.
| [v1.4](https://github.com/dapr/docs/tree/v1.4) (pre-release) | https://v1-4.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.4+ go here.
| [v1.4](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here.
| [v1.5](https://github.com/dapr/docs/tree/v1.5) (pre-release) | https://v1-5.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.5+ go here.
For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/contributing-docs/#branch-guidance) document.

View File

@ -157,20 +157,23 @@ offlineSearch = false
github_repo = "https://github.com/dapr/docs"
github_project_repo = "https://github.com/dapr/dapr"
github_subdir = "daprdocs"
github_branch = "v1.3"
github_branch = "v1.4"
# Versioning
version_menu = "v1.3 (latest)"
version = "v1.3"
version_menu = "v1.4 (latest)"
version = "v1.4"
archived_version = false
url_latest_version = "https://docs.dapr.io"
[[params.versions]]
version = "v1.4 (preview)"
url = "https://v1-4.docs.dapr.io"
version = "v1.5 (preview)"
url = "https://v1-5.docs.dapr.io"
[[params.versions]]
version = "v1.3 (latest)"
version = "v1.4 (latest)"
url = "#"
[[params.versions]]
version = "v1.3"
url = "https://v1-3.docs.dapr.io"
[[params.versions]]
version = "v1.2"
url = "https://v1-2.docs.dapr.io"

View File

@ -1,6 +1,6 @@
---
type: docs
title: "Dapr Concepts"
title: "Dapr concepts"
linkTitle: "Concepts"
weight: 10
description: "Learn about Dapr including its main features and capabilities"

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Overview of the Dapr services"
linkTitle: "Dapr services"
weight: 800
description: "Learn about the services that make up the Dapr runtime"
---

View File

@ -0,0 +1,12 @@
---
type: docs
title: "Dapr operator service overview"
linkTitle: "Operator"
description: "Overview of the Dapr operator process"
---
When running Dapr in [Kubernetes mode]({{< ref kubernetes >}}), a pod running the Dapr operator service manages [Dapr component]({{< ref components >}}) updates and provides Kubernetes service endpoints for Dapr.
## Running the operator service
The operator service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
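As a sketch, either of the following installs the control plane, including the operator (the Helm release name and namespace below are illustrative):
```bash
# Install the Dapr control plane (including the operator) with the CLI
dapr init -k

# Or via Helm
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
helm install dapr dapr/dapr --namespace dapr-system --create-namespace
```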

View File

@ -0,0 +1,16 @@
---
type: docs
title: "Dapr placement service overview"
linkTitle: "Placement"
description: "Overview of the Dapr placement process"
---
The Dapr placement service is used to calculate and distribute distributed hash tables for the location of [Dapr actors]({{< ref actors >}}) running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}). This hash table maps actor IDs to pods or processes so a Dapr application can communicate with the actor. Anytime a Dapr application activates a Dapr actor, the placement service updates the hash tables with the latest actor locations.
## Self-hosted mode
The placement service Docker container is started automatically as part of [`dapr init`]({{< ref self-hosted-with-docker.md >}}). It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
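For example, in slim-init mode the placement binary can be started directly (paths assume the default `dapr init --slim` install location):
```bash
# Linux/macOS: run the placement service manually
$HOME/.dapr/bin/placement

# Windows:
# %USERPROFILE%\.dapr\bin\placement.exe
```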
## Kubernetes mode
The placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).

View File

@ -0,0 +1,26 @@
---
type: docs
title: "Dapr sentry service overview"
linkTitle: "Sentry"
description: "Overview of the Dapr sentry process"
---
The Dapr sentry service manages mTLS between services and acts as a certificate authority. It generates mTLS certificates and distributes them to any running sidecars. This allows sidecars to communicate with encrypted, mTLS traffic. For more information read the [sidecar-to-sidecar communication overview]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}).
## Self-hosted mode
The sentry service Docker container is started automatically as part of [`dapr init`]({{< ref self-hosted-with-docker.md >}}). It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
<img src="/images/security-mTLS-sentry-selfhosted.png" width=1000>
## Kubernetes mode
The sentry service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
<img src="/images/security-mTLS-sentry-kubernetes.png" width=1000>
## Further reading
- [Security overview]({{< ref security-concept.md >}})
- [Self-hosted mode]({{< ref self-hosted-with-docker.md >}})
- [Kubernetes mode]({{< ref kubernetes >}})

View File

@ -0,0 +1,12 @@
---
type: docs
title: "Dapr sidecar injector overview"
linkTitle: "Sidecar injector"
description: "Overview of the Dapr sidecar injector process"
---
When running Dapr in [Kubernetes mode]({{< ref kubernetes >}}), a pod is created running the dapr-sidecar-injector service, which looks for pods initialized with the [Dapr annotations]({{< ref arguments-annotations-overview.md >}}), and then creates another container in that pod for the [daprd service]({{< ref sidecar >}}).
## Running the sidecar injector
The sidecar injector service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).

View File

@ -0,0 +1,57 @@
---
type: docs
title: "Dapr sidecar (daprd) overview"
linkTitle: "Sidecar"
weight: 100
description: "Overview of the Dapr sidecar process"
---
Dapr uses a [sidecar pattern]({{< ref "overview.md#sidecar-architecture" >}}), meaning the Dapr APIs are run and exposed on a separate process (i.e. the Dapr sidecar) running alongside your application. The Dapr sidecar process is named `daprd` and is launched in different ways depending on the hosting environment.
<img src="/images/overview-sidecar-model.png" width=700>
## Self-hosted with `dapr run`
When Dapr is installed in [self-hosted mode]({{<ref self-hosted>}}), the `daprd` binary is downloaded and placed under the user home directory (`$HOME/.dapr/bin` for Linux/MacOS or `%USERPROFILE%\.dapr\bin\` for Windows). In self-hosted mode, running the Dapr CLI [`run` command]({{< ref dapr-run.md >}}) launches the `daprd` executable together with the provided application executable. This is the recommended way of running the Dapr sidecar when working locally in scenarios such as development and testing. The various arguments the CLI exposes to configure the sidecar can be found in the [Dapr run command reference]({{<ref dapr-run>}}).
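For example, a minimal invocation looks like the following (the app ID, port, and application command are placeholders):
```bash
# Launches daprd plus the application process in one step
dapr run --app-id myapp --app-port 5000 python app.py
```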
## Kubernetes with `dapr-sidecar-injector`
On [Kubernetes]({{< ref kubernetes.md >}}), the Dapr control plane includes the [dapr-sidecar-injector service]({{< ref kubernetes-overview.md >}}), which watches for new pods with the `dapr.io/enabled` annotation and injects a container with the `daprd` process within the pod. In this case, sidecar arguments can be passed through annotations as outlined in the **Kubernetes annotations** column in [this table]({{<ref arguments-annotations-overview>}}).
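As a sketch, a deployment's pod template opts in with annotations like these (the app ID and port are placeholders):
```yaml
# In the pod template metadata of a Kubernetes Deployment
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "myapp"
  dapr.io/app-port: "5000"
```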
## Running the sidecar directly
In most cases you do not need to run `daprd` explicitly, as the sidecar is either launched by the CLI (self-hosted mode) or by the dapr-sidecar-injector service (Kubernetes). For advanced use cases (debugging, scripted deployments, etc.) the `daprd` process can be launched directly.
For a detailed list of all available arguments run `daprd --help` or see this [table]({{< ref arguments-annotations-overview.md >}}) which outlines how the `daprd` arguments relate to the CLI arguments and Kubernetes annotations.
### Examples
1. Start a sidecar alongside an application by specifying its unique ID. Note `--app-id` is a required field:
```bash
daprd --app-id myapp
```
2. Specify the port your application is listening on
```bash
daprd --app-id myapp --app-port 5000
```
3. If you are using several custom components and want to specify the location of the component definition files, use the `--components-path` argument:
```bash
daprd --app-id myapp --components-path <PATH-TO-COMPONENTS-FILES>
```
4. Enable collection of Prometheus metrics while running your app
```bash
daprd --app-id myapp --enable-metrics
```
5. Listen to IPv4 and IPv6 loopback only
```bash
daprd --app-id myapp --dapr-listen-addresses '127.0.0.1,[::1]'
```

View File

@ -35,6 +35,8 @@ Watch these recordings from the Dapr community calls showing presentations on ru
- General overview and a demo of [Dapr and Linkerd](https://youtu.be/xxU68ewRmz8?t=142)
- Demo of running [Dapr and Istio](https://youtu.be/ngIDOQApx8g?t=335)
Also, learn more about [running Dapr with Open Service Mesh (OSM)]({{<ref open-service-mesh>}}).
## When to choose using Dapr, a service mesh, or both
Should you be using Dapr, a service mesh, or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub, and you are considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and that a service mesh is not required.

View File

@ -2,7 +2,7 @@
type: docs
title: "Dapr terminology and definitions"
linkTitle: "Terminology"
weight: 800
weight: 900
description: Definitions for common terms and acronyms in the Dapr documentation
---

View File

@ -10,10 +10,6 @@ aliases:
[GitHub Codespaces](https://github.com/features/codespaces) are the easiest way to get up and running for contributing to a Dapr repo. In as little as a single click, you can have an environment with all of the prerequisites ready to go in your browser.
{{% alert title="Private Beta" color="warning" %}}
GitHub Codespaces is currently in a private beta. Sign up [here](https://github.com/features/codespaces/signup).
{{% /alert %}}
## Features
- **Click and Run**: Get a dedicated and sandboxed environment with all of the required frameworks and packages ready to go.

View File

@ -274,7 +274,7 @@ Will result in the following output:
{{< code-snippet file="contributing-1.py" lang="python" marker="#SAMPLE" >}}
Use the `replace-key-[token]` and `replace-value-[token]` parameters to limit the embedded snippet to a portion of the sample file. This is useful when you want to abbreviate a portion of the code sample. Multiple replacements are supported with multiple values of `token`.
The shortcode below and code sample:

View File

@ -67,7 +67,7 @@ func configHandler(w http.ResponseWriter, r *http.Request) {
```
### Handling reentrant requests
The key to a reentrant request is the `Dapr-Reentrancy-Id` header. The value of this header is used to match requests to their call chain and allow them to bypass the actor's lock.
The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below is a snippet of code from an actor handling this in Go:

View File

@ -35,7 +35,7 @@ Every actor is defined as an instance of an actor type, identical to the way an
## Actor lifetime
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actors runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr Actors runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
Invocation of actor methods and reminders resets the idle time, e.g. a reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive; if fired for an inactive actor, the actor is activated first. Actor timers do not reset the idle time, so a timer firing will not keep the actor active. Timers only fire while the actor is active.
@ -77,11 +77,13 @@ POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<met
You can provide any data for the actor method in the request body, and the response is returned in the response body, which contains the data from the actor call.
Another, and perhaps more convenient, way of interacting with actors is via SDKs. Dapr currently supports actor SDKs in [.NET]({{< ref "dotnet-actors" >}}), [Java]({{< ref "java#actors" >}}), and [Python]({{< ref "python-actor" >}}).
Refer to [Dapr Actor Features]({{< ref howto-actors.md >}}) for more details.
### Concurrency
The Dapr Actors runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
The Dapr actor runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
@ -94,9 +96,9 @@ As an enhancement to the base actors in dapr, reentrancy can now be enabled as a
### Turn-based access
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr Actors runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr actor runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
The Dapr actor runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.

View File

@ -6,11 +6,11 @@ weight: 20
description: Learn more about the actor pattern
---
The Dapr actors runtime provides support for [virtual actors]({{< ref actors-overview.md >}}) through following capabilities:
The Dapr actor runtime provides support for [virtual actors]({{< ref actors-overview.md >}}) through the following capabilities:
## Actor method invocation
You can interact with Dapr to invoke the actor method by calling HTTP/gRPC endpoint
You can interact with Dapr to invoke the actor method by calling the HTTP/gRPC endpoint.
```html
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
@ -20,38 +20,75 @@ You can provide any data for the actor method in the request body and the respon
Refer to the [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
Alternatively, you can use the Dapr SDK in [.NET]({{< ref "dotnet-actors" >}}), [Java]({{< ref "java#actors" >}}), or [Python]({{< ref "python-actor" >}}).
## Actor state management
Actors can save state reliably using the state management capability.
You can interact with Dapr through HTTP/gRPC endpoints for state management.
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}). Only a single state store component can be used as the state store for all actors.
## Actor timers and reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.
The functionality of timers and reminders is very similar. The main difference is that the Dapr actor runtime does not retain any information about timers after deactivation, while the information about reminders is persisted using the Dapr actor state provider.
This distinction allows users to trade off between lightweight but stateless timers and more resource-demanding but stateful reminders.
The scheduling configuration of timers and reminders is identical, as summarized below:
---
`dueTime` is an optional parameter that sets the time at which, or the time interval after which, the callback is invoked for the first time. If `dueTime` is omitted, the callback is invoked immediately after timer/reminder registration.
Supported formats:
- RFC3339 date format, e.g. `2020-10-02T15:00:00Z`
- time.Duration format, e.g. `2h30m`
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`
---
`period` is an optional parameter that sets the time interval between two consecutive callback invocations. When specified in the `ISO 8601-1 duration` format, you can also configure the number of repetitions in order to limit the total number of callback invocations.
If `period` is omitted, the callback will be invoked only once.
Supported formats:
- time.Duration format, e.g. `2h30m`
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`, `R5/PT1M30S`
---
`ttl` is an optional parameter that sets the time at which, or the time interval after which, the timer/reminder expires and is deleted. If `ttl` is omitted, no restrictions are applied.
Supported formats:
* RFC3339 date format, e.g. `2020-10-02T15:00:00Z`
* time.Duration format, e.g. `2h30m`
* [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format. Example: `PT2H30M`
---
The actor runtime validates the correctness of the scheduling configuration and returns an error on invalid input.
When you specify both the number of repetitions in `period` as well as `ttl`, the timer/reminder will be stopped when either condition is met.
### Actor timers
You can register a callback on an actor to be executed based on a timer.
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees.This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.
The Dapr actor runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actor runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
You can create a timer for an actor by calling the HTTP/gRPC request to Dapr.
You can create a timer for an actor by sending the HTTP/gRPC request to Dapr as shown below, or via the Dapr SDK.
```md
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
The timer `duetime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.
**Examples**
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
The timer parameters are specified in the request body.
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
"dueTime":"0h0m9s0ms",
@ -59,11 +96,27 @@ The following request body configures a timer with a `dueTime` of 9 seconds and
}
```
The following request body configures a timer with a `dueTime` 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.
The following request body configures a timer with a `period` of 3 seconds (in ISO 8601 duration format). It also limits the number of invocations to 10. This means it will fire 10 times: first, immediately after registration, then every 3 seconds after that.
```json
{
"dueTime":"0h0m0s0ms",
"period":"0h0m3s0ms"
"period":"R10/PT3S",
}
```
The following request body configures a timer with a `period` of 3 seconds (in ISO 8601 duration format) and a `ttl` of 20 seconds. This means it fires immediately after registration, then every 3 seconds after that for the duration of 20 seconds.
```json
{
"period":"PT3S",
"ttl":"20s"
}
```
The following request body configures a timer with a `dueTime` of 10 seconds, a `period` of 3 seconds, and a `ttl` of 10 seconds. It also limits the number of invocations to 4. This means it will first fire after 10 seconds, then every 3 seconds after that for the duration of 10 seconds, but no more than 4 times in total.
```json
{
"dueTime":"10s",
"period":"R4/PT3S",
"ttl":"10s"
}
```
@ -77,69 +130,15 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted or the number in invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using Dapr actor state provider.
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by calling the Http/gRPC request to Dapr.
You can create a persistent reminder for an actor by sending the HTTP/gRPC request to Dapr as shown below, or via the Dapr SDK.
```md
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
The reminder `duetime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.
The following request body configures a reminder with a `dueTime` 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
"dueTime":"0h0m9s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a reminder with a `dueTime` 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.
```json
{
"dueTime":"0h0m0s0ms",
"period":"0h0m3s0ms"
}
```
The following request body configures a reminder with a `dueTime` 15 seconds and a `period` of empty string. This means it will first fire after 15 seconds, then never fire again.
```json
{
"dueTime":"0h0m15s0ms",
"period":""
}
```
[ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) can also be used to specify `period`. The following request body configures a reminder with a `dueTime` 0 seconds an `period` of 15 seconds.
```json
{
"dueTime":"0h0m0s0ms",
"period":"P0Y0M0W0DT0H0M15S"
}
```
The designators for zero are optional and the above `period` can be simplified to `PT15S`.
ISO 8601 specifies multiple recurrence formats but only the duration format is currently supported.
#### Reminders with repetitions
When configured with ISO 8601 durations, the `period` column also allows to specify number of times a reminder can run. The following request body will create a reminder that will execute for 5 number of times with a period of 15 seconds.
```json
{
"dueTime":"0h0m0s0ms",
"period":"R5/PT15S"
}
```
The number of repetitions i.e. the number of times the reminder is run should be a positive number.
**Example**
Watch this [video](https://www.youtube.com/watch?v=B_vkXqptpXY&t=1002s) for more information on using ISO 861 for Reminders
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/B_vkXqptpXY?start=1003" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
The request structure for reminders is identical to that of timers. Please refer to the [actor timers examples]({{< ref "#actor-timers" >}}).
#### Retrieve actor reminder
@ -161,7 +160,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.
## Actor runtime configuration
You can configure the Dapr Actors runtime configuration to modify the default runtime behavior.
You can configure the Dapr actor runtime to modify the default runtime behavior.
### Configuration parameters
- `actorIdleTimeout` - The timeout before deactivating an idle actor. Checks for timeouts occur every `actorScanInterval` interval. **Default: 60 minutes**
@ -200,7 +199,7 @@ public void ConfigureServices(IServiceCollection services)
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
options.ActorScanInterval = TimeSpan.FromSeconds(30);
@ -312,7 +311,7 @@ public void ConfigureServices(IServiceCollection services)
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
options.ActorScanInterval = TimeSpan.FromSeconds(30);

View File

@ -91,6 +91,20 @@ In order to tell Dapr that the event wasn't processed correctly in your applicat
res.status(500).send()
```
### Specifying a custom route
By default, incoming events will be sent to an HTTP endpoint that corresponds to the name of the input binding.
You can override this by setting the following metadata property:
```yaml
name: mybinding
spec:
type: bindings.rabbitmq
metadata:
- name: route
value: /onevent
```
### Event delivery guarantees
Event delivery guarantees are controlled by the binding implementation. Depending on the binding implementation, the event delivery can be exactly once or at least once.

View File

@ -416,6 +416,10 @@ app.post('/dsstatus', (req, res) => {
{{< /tabs >}}
{{% alert title="Note on message redelivery" color="primary" %}}
Some pubsub components (e.g. Redis) will redeliver a message if a response is not sent back within a specified time window. Make sure to configure metadata such as `processingTimeout` to customize this behavior. For more information refer to the respective [component references]({{< ref supported-pubsub >}}).
{{% /alert %}}
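For example, for a Redis pub/sub component the timeout could be tuned in the component's `metadata` section (the value shown is illustrative; check the component reference for defaults):
```yaml
metadata:
- name: processingTimeout
  value: "30s"
```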
## (Optional) Step 5: Publishing a topic with code
{{< tabs Node PHP>}}
@ -485,6 +489,7 @@ Read about content types [here](#content-types), and about the [Cloud Events mes
## Next steps
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
- Learn about [PubSub routing]({{< ref howto-route-messages >}})
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})

View File

@ -0,0 +1,250 @@
---
type: docs
title: "How-To: Route messages to different event handlers"
linkTitle: "How-To: Route events"
weight: 2100
description: "Learn how to route messages from a topic to different event handlers based on CloudEvent fields"
---
{{% alert title="Preview feature" color="warning" %}}
Pub/Sub message routing is currently in [preview]({{< ref preview-features.md >}}).
{{% /alert %}}
## Introduction
[Content-based routing](https://www.enterpriseintegrationpatterns.com/ContentBasedRouter.html) is a messaging pattern that utilizes a DSL instead of imperative application code. PubSub routing is an implementation of this pattern that allows developers to use expressions to route [CloudEvents](https://cloudevents.io) based on their contents to different URIs/paths and event handlers in your application. If no route matches, then an optional default route is used. This becomes useful as your application expands to support multiple event versions or special cases. Routing can be implemented with code; however, keeping routing rules external from the application can improve portability.
This feature is available to both the declarative and programmatic subscription approaches.
## Enable message routing
This is a preview feature. To enable it, add the `PubSub.Routing` feature entry to your application configuration like so:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: pubsubroutingconfig
spec:
features:
- name: PubSub.Routing
enabled: true
```
Learn more about enabling [preview features]({{<ref preview-features>}}).
## Declarative subscription
For declarative subscriptions, you must use `dapr.io/v2alpha1` as the `apiVersion`. Here is an example of `subscriptions.yaml` using routing.
```yaml
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: myevent-subscription
spec:
pubsubname: pubsub
topic: deathStarStatus
routes:
rules:
- match: event.type == "rebels.attacking.v3"
path: /dsstatus.v3
- match: event.type == "rebels.attacking.v2"
path: /dsstatus.v2
default: /dsstatus
scopes:
- app1
- app2
```
## Programmatic subscription
Alternatively, the programmatic approach varies slightly in that the `routes` structure is returned instead of `route`. The JSON structure matches the declarative YAML.
{{< tabs Python Node Go PHP>}}
{{% codetab %}}
```python
import flask
from flask import request, jsonify
from flask_cors import CORS
import json
import sys
app = flask.Flask(__name__)
CORS(app)
@app.route('/dapr/subscribe', methods=['GET'])
def subscribe():
subscriptions = [
{
'pubsubname': 'pubsub',
'topic': 'deathStarStatus',
'routes': {
'rules': [
{
'match': 'event.type == "rebels.attacking.v3"',
'path': '/dsstatus.v3'
},
{
'match': 'event.type == "rebels.attacking.v2"',
'path': '/dsstatus.v2'
},
],
'default': '/dsstatus'
}
}]
return jsonify(subscriptions)
@app.route('/dsstatus', methods=['POST'])
def ds_subscriber():
print(request.json, flush=True)
return json.dumps({'success':True}), 200, {'ContentType':'application/json'}
app.run()
```
{{% /codetab %}}
{{% codetab %}}
```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
const port = 3000
app.get('/dapr/subscribe', (req, res) => {
res.json([
{
pubsubname: "pubsub",
topic: "deathStarStatus",
routes: {
rules: [
{
match: 'event.type == "rebels.attacking.v3"',
path: '/dsstatus.v3'
},
{
match: 'event.type == "rebels.attacking.v2"',
path: '/dsstatus.v2'
},
],
default: '/dsstatus'
}
}
]);
})
app.post('/dsstatus', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
app.listen(port, () => console.log(`consumer app listening on port ${port}!`))
```
{{% /codetab %}}
{{% codetab %}}
```golang
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"github.com/gorilla/mux"
)
const appPort = 3000
type subscription struct {
PubsubName string `json:"pubsubname"`
Topic string `json:"topic"`
Metadata map[string]string `json:"metadata,omitempty"`
Routes routes `json:"routes"`
}
type routes struct {
Rules []rule `json:"rules,omitempty"`
Default string `json:"default,omitempty"`
}
type rule struct {
Match string `json:"match"`
Path string `json:"path"`
}
// This handles /dapr/subscribe
func configureSubscribeHandler(w http.ResponseWriter, _ *http.Request) {
t := []subscription{
{
PubsubName: "pubsub",
Topic: "deathStarStatus",
Routes: routes{
Rules: []rule{
{
Match: `event.type == "rebels.attacking.v3"`,
Path: "/dsstatus.v3",
},
{
Match: `event.type == "rebels.attacking.v2"`,
Path: "/dsstatus.v2",
},
},
Default: "/dsstatus",
},
},
}
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(t)
}
func main() {
router := mux.NewRouter().StrictSlash(true)
router.HandleFunc("/dapr/subscribe", configureSubscribeHandler).Methods("GET")
log.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", appPort), router))
}
```
{{% /codetab %}}
{{% codetab %}}
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create(configure: fn(\DI\ContainerBuilder $builder) => $builder->addDefinitions(['dapr.subscriptions' => [
new \Dapr\PubSub\Subscription(pubsubname: 'pubsub', topic: 'deathStarStatus', routes: [
    'rules' => [
        ['match' => 'event.type == "rebels.attacking.v3"', 'path' => '/dsstatus.v3'],
        ['match' => 'event.type == "rebels.attacking.v2"', 'path' => '/dsstatus.v2'],
    ],
    'default' => '/dsstatus',
]),
]]));
$app->post('/dsstatus', function(
#[\Dapr\Attributes\FromBody]
\Dapr\PubSub\CloudEvent $cloudEvent,
\Psr\Log\LoggerInterface $logger
) {
$logger->alert('Received event: {event}', ['event' => $cloudEvent]);
return ['status' => 'SUCCESS'];
}
);
$app->start();
```
{{% /codetab %}}
{{< /tabs >}}
In these examples, depending on the type of the event (`event.type`), the application will be called on `/dsstatus.v3`, `/dsstatus.v2` or `/dsstatus`. The expressions are written as [Common Expression Language (CEL)](https://opensource.google/projects/cel) where `event` represents the cloud event. Any of the attributes from the [CloudEvents core specification](https://github.com/cloudevents/spec/blob/v1.0.1/spec.md#required-attributes) can be referenced in the expression. One caveat is that it is only possible to access the attributes inside `event.data` if it is nested JSON.
## Next steps
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- List of [pub/sub components]({{< ref setup-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})

View File

@ -93,7 +93,7 @@ Similarly, if two different applications (different app-IDs) subscribe to the sa
### Topic scoping
By default, all topics backing the Dapr pub/sub component (e.g. Kafka, Redis Stream, RabbitMQ) are available to every application configured with that component. To limit which application can publish or subscribe to topics, Dapr provides topic scoping. This enables to you say which topics an application is allowed to published and which topics an application is allowed to subscribed to. For more information read [publish/subscribe topic scoping]({{< ref pubsub-scopes.md >}}).
By default, all topics backing the Dapr pub/sub component (e.g. Kafka, Redis Stream, RabbitMQ) are available to every application configured with that component. To limit which application can publish or subscribe to topics, Dapr provides topic scoping. This enables you to say which topics an application is allowed to publish to and which topics it is allowed to subscribe to. For more information read [publish/subscribe topic scoping]({{< ref pubsub-scopes.md >}}).
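As a sketch, scopes are declared in the pub/sub component's `metadata` section (app IDs and topic names are placeholders):
```yaml
metadata:
- name: publishingScopes
  value: "app1=topic1;app2=topic1,topic2"
- name: subscriptionScopes
  value: "app2=topic1"
```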
### Message Time-to-Live (TTL)
Dapr can set a timeout on a per-message basis, meaning that if the message is not read from the pub/sub component, the message is discarded. This prevents the build-up of messages that are never read. A message that has been in the queue for longer than the configured TTL is said to be dead. For more information read [publish/subscribe message time-to-live]({{< ref pubsub-message-ttl.md >}}).
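For example, a TTL can be requested per message at publish time via a metadata query parameter (the pub/sub name, topic, and payload are placeholders):
```bash
# Publish a message that is discarded if not consumed within 120 seconds
curl -X POST "http://localhost:3500/v1.0/publish/pubsub/mytopic?metadata.ttlInSeconds=120" \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'
```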

View File

@ -3,7 +3,7 @@ type: docs
title: "Pub/Sub without CloudEvents"
linkTitle: "Pub/Sub without CloudEvents"
weight: 7000
description: "Use Pub/Sub without CloudEvents."
description: "Use Pub/Sub without CloudEvents."
---
## Introduction

View File

@ -21,7 +21,7 @@ Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to
## Scenario 1: Deny access to all secrets for a secret store
In this example all secret access is denied to an application running on a Kubernetes cluster which has a configured [Kubernetes secret store]({{<ref kubernetes-secret-store>}}) named `mycustomsecretstore`. In the case of Kubernetes, aside from the user defined custom store, the default store named `kubernetes` is also addressed to ensure all secrets are denied access (See [here]({{<ref "kubernetes-secret-store.md#default-kubernetes-secret-store-component">}}) to learn more about the Kubernetes default secret store).
To add this configuration follow the steps below:

View File

@ -18,13 +18,13 @@ Dapr allows you to assign a global, unique ID for your app. This ID encapsulates
In self-hosted mode, set the `--app-id` flag:
```bash
dapr run --app-id cart --app-port 5000 python app.py
dapr run --app-id cart --dapr-http-port 3500 --app-port 5000 python app.py
```
If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection:
```bash
dapr run --app-id cart --app-port 5000 --app-ssl python app.py
dapr run --app-id cart --dapr-http-port 3500 --app-port 5000 --app-ssl python app.py
```
{{% /codetab %}}
@ -111,6 +111,32 @@ curl http://localhost:3500/v1.0/invoke/cart/method/add -X DELETE
```
Dapr puts any payload returned by the called service in the HTTP response's body.
### Additional URL formats
To avoid changing URL paths as much as possible, Dapr provides the following ways to call the service invocation API:
1. Change the address in the URL to `localhost:<dapr-http-port>`.
2. Add a `dapr-app-id` header to specify the ID of the target service, or alternatively pass the ID via HTTP Basic Auth: `http://dapr-app-id:<service-id>@localhost:3500/path`.
For example, the following command
```bash
curl http://localhost:3500/v1.0/invoke/cart/method/add -X POST
```
is equivalent to:
```bash
curl -H 'dapr-app-id: cart' 'http://localhost:3500/add' -X POST
```
or:
```bash
curl 'http://dapr-app-id:cart@localhost:3500/add' -X POST
```
{{% /codetab %}}
{{% codetab %}}

View File

@ -5,9 +5,12 @@ linkTitle: "How-To: Invoke with gRPC"
description: "Call between services using service invocation"
weight: 3000
---
{{% alert title="Preview feature" color="warning" %}}
gRPC proxying is currently in [preview]({{< ref preview-features.md >}}).
{{% /alert %}}
This article describes how to use Dapr to connect services using gRPC.
By using Dapr's gRPC proxying capability, you can use your existing proto based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr Service Invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
By using Dapr's gRPC proxying capability, you can use your existing proto based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr service invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
1. Mutual authentication
2. Tracing
@ -170,7 +173,7 @@ var call = client.SayHello(new HelloRequest { Name = "Darth Nihilus" }, metadata
{{% codetab %}}
```python
metadata = (('dapr-app-id', 'server'))
metadata = (('dapr-app-id', 'server'),)
response = stub.SayHello(request={ name: 'Darth Revan' }, metadata=metadata)
```
{{% /codetab %}}
@ -305,3 +308,10 @@ For more information on tracing and logs see the [observability]({{< ref observa
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
* [gRPC proxying community call video](https://youtu.be/B_vkXqptpXY?t=70)
## Community call demo
Watch this [video](https://youtu.be/B_vkXqptpXY?t=69) on how to use Dapr's gRPC proxying capability:
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/B_vkXqptpXY?start=69" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

View File

@ -83,7 +83,7 @@ Connection establishment via gRPC to the target sidecar has a timeout of 5 secon
### Pluggable service discovery
Dapr can run on a variety of [hosting platforms]({{< ref hosting >}}). To enable service discovery and service invocation, Dapr uses pluggable [name resolution components]({{< ref supported-name-resolution >}}). For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. Self-hosted machines can use the mDNS name resolution component. The Consul name resolution component can be used in any hosting environment including Kubernetes or self-hosted.
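The name resolution component is selected in the application's Dapr Configuration; a minimal sketch (`mdns` is the self-hosted default):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "mdns"
```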
### Round robin load balancing with mDNS
@ -105,7 +105,7 @@ The API for service invocation can be found in the [service invocation API refer
### gRPC proxying
Dapr allows users to keep their own proto services and work natively with gRPC. This means that you can use service invocation to call your existing gRPC apps without having to include any Dapr SDKs or include custom gRPC services. For more information, see the [how-to tutorial for Dapr and gRPC]({{< ref howto-invoke-services-grpc.md >}}).
## Example

View File

@ -0,0 +1,92 @@
---
type: docs
title: "How-To: Encrypt application state"
linkTitle: "How-To: Encrypt state"
weight: 450
description: "Automatically encrypt state and manage key rotations"
---
{{% alert title="Preview feature" color="warning" %}}
State store encryption is currently in [preview]({{< ref preview-features.md >}}).
{{% /alert %}}
## Introduction
Application state often needs to be encrypted at rest to provide stronger security in enterprise workloads or regulated environments. Dapr offers automatic client-side encryption based on [AES256](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard).
In addition to automatic encryption, Dapr supports primary and secondary encryption keys to make it easier for developers and ops teams to enable a key rotation strategy.
This feature is supported by all Dapr state stores.
The encryption keys are fetched from a secret, and cannot be supplied as plaintext values in the `metadata` section.
## Enabling automatic encryption
1. Enable the state encryption preview feature using a standard [Dapr Configuration]({{< ref configuration-overview.md >}}):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: stateconfig
spec:
features:
- name: State.Encryption
enabled: true
```
2. Add the following `metadata` section to any Dapr supported state store:
```yaml
metadata:
- name: primaryEncryptionKey
secretKeyRef:
name: mysecret
key: mykey # key is optional.
```
For example, this is the full YAML of a Redis encrypted state store
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.redis
version: v1
metadata:
- name: redisHost
value: localhost:6379
- name: redisPassword
value: ""
- name: primaryEncryptionKey
secretKeyRef:
name: mysecret
key: mykey
```
You now have a Dapr state store that's configured to fetch the encryption key from a secret named `mysecret`, containing the actual encryption key in a key named `mykey`.
The actual encryption key *must* be an AES256 encryption key. Dapr will error and exit if the encryption key is invalid.
*Note that the secret store does not have to support keys*
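For instance, a suitable key can be generated with a tool like `openssl` and stored in your secret store (the hex encoding shown is an assumption; verify the expected key format for your Dapr version):
```bash
# Generate a random 256-bit (32-byte) key, hex-encoded
openssl rand -hex 32
```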
## Key rotation
To support key rotation, Dapr provides a way to specify a secondary encryption key:
```yaml
metadata:
- name: primaryEncryptionKey
secretKeyRef:
name: mysecret
key: mykey
- name: secondaryEncryptionKey
secretKeyRef:
name: mysecret2
key: mykey2
```
When Dapr starts, it will fetch the secrets containing the encryption keys listed in the `metadata` section. Dapr knows which state item has been encrypted with which key automatically, as it appends the `secretKeyRef.name` field to the end of the actual state key.
To rotate a key, simply change the `primaryEncryptionKey` to point to a secret containing your new key, and move the old primary encryption key to the `secondaryEncryptionKey`. New data will be encrypted using the new key, and old data that's retrieved will be decrypted using the secondary key. Any updates to data items encrypted using the old key will be re-encrypted using the new key.

View File

@ -50,6 +50,12 @@ For stores that don't natively support ETags, it's expected that the correspondi
Read the [API reference]({{< ref state_api.md >}}) to learn how to set concurrency options.
### Automatic encryption
Dapr supports automatic client-side encryption of application state with support for key rotations. This is a preview feature and it is supported on all Dapr state stores.
For more info, read the [How-To: Encrypt application state]({{< ref howto-encrypt-state.md >}}) section.
### Consistency
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
@ -104,6 +110,7 @@ The API for state management can be found in the [state management API reference
* [How-To: Save and get state]({{< ref howto-get-save-state.md >}})
* [How-To: Build a stateful service]({{< ref howto-stateful-service.md >}})
* [How-To: Share state between applications]({{< ref howto-share-state.md >}})
* [How-To: Encrypt application state]({{< ref howto-encrypt-state.md >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* List of [state store components]({{< ref supported-state-stores.md >}})
* Read the [state management API reference]({{< ref state_api.md >}})

View File

@ -21,7 +21,7 @@ When a TTL is not specified the default behavior of the state store is retained.
## Persisting state (ignoring an existing TTL)
To explictly persist a state (ignoring any TTLs set for the key), specify a `ttlInSeconds` value of `-1`.
To explicitly persist a state (ignoring any TTLs set for the key), specify a `ttlInSeconds` value of `-1`.
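As a sketch, a request against the state API might look like this (store name, key, and value are placeholders):
```bash
# Save state that ignores any store-level TTL for this key
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "mykey", "value": "myvalue", "metadata": { "ttlInSeconds": "-1" } }]'
```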
## Supported components

View File

@ -6,7 +6,7 @@ weight: 300
description: "Debug Dapr apps locally which still connected to your Kubernetes cluster"
---
Bridge to Kubernetes allows you to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. This type of debugging is often called *local tunnel debugging*.
{{< button text="Learn more about Bridge to Kubernetes" link="https://aka.ms/bridge-vscode-dapr" >}}

View File

@ -91,5 +91,5 @@ All done. Now you can point to port 40000 and start a remote debug session to da
- [Overview of Dapr on Kubernetes]({{< ref kubernetes-overview >}})
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
- [Debug Dapr services on Kubernetes]({{< ref debug-dapr-services >}})
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)

View File

@ -56,7 +56,7 @@ In the case of the hello world quickstart, two applications are launched, each w
"type": "python",
"request": "launch",
"name": "Pythonapp with Dapr",
"program": "${workspaceFolder}/app.py",
"program": "${workspaceFolder}/app.py",
"console": "integratedTerminal",
"preLaunchTask": "daprd-debug-python",
"postDebugTask": "daprd-down-python"
@ -69,7 +69,7 @@ Each configuration requires a `request`, `type` and `name`. These parameters hel
- `type` defines the language used. Depending on the language, it might require an extension found in the marketplace, such as the [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python).
- `name` is a unique name for the configuration. This is used for compound configurations when calling multiple configurations in your project.
- `${workspaceFolder}` is a VS Code variable reference. This is the path to the workspace opened in VS Code.
- The `preLaunchTask` and `postDebugTask` parameters refer to the program configurations run before and after launching the application. See step 2 on how to configure these.
For more information on VSCode debugging parameters see [VS Code launch attributes](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes).
@ -153,7 +153,7 @@ Below are the supported parameters for VS Code tasks. These parameters are equiv
| `appPort` | This parameter tells Dapr which port your application is listening on | Yes | `"appPort": 4000`
| `appProtocol` | Tells Dapr which protocol your application is using. Valid options are http and grpc. Default is http | No | `"appProtocol": "http"`
| `appSsl` | Sets the URI scheme of the app to https and attempts an SSL connection | No | `"appSsl": true`
| `args` | Sets a list of arguments to pass on to the Dapr app | No | `"args": []`
| `componentsPath` | Path for components directory. If empty, components will not be loaded. | No | `"componentsPath": "./components"`
| `config` | Tells Dapr which Configuration CRD to use | No | `"config": "./config"`
| `controlPlaneAddress` | Address for a Dapr control plane | No | `"controlPlaneAddress": "http://localhost:1366/"`

View File

@ -0,0 +1,407 @@
---
type: docs
title: "Authenticating to Azure"
linkTitle: "Authenticating to Azure"
description: "How to authenticate Azure components using Azure AD and/or Managed Identities"
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity/"
- "/reference/components-reference/supported-secret-stores/azure-keyvault-managed-identity/"
---
## Common Azure authentication layer
Certain Azure components for Dapr offer support for the *common Azure authentication layer*, which enables applications to access data stored in Azure resources by authenticating with Azure AD. Thanks to this, administrators can leverage all the benefits of fine-tuned permissions with RBAC (Role-Based Access Control), and applications running on certain Azure services such as Azure VMs, Azure Kubernetes Service, or many Azure platform services can leverage [Managed Service Identities (MSI)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview).
Some Azure components offer alternative authentication methods, such as systems based on "master keys" or "shared keys". Whenever possible, it is recommended that you authenticate your Dapr components using Azure AD for increased security and ease of management, as well as for the ability to leverage MSI if your app is running on supported Azure services.
> Currently, only a subset of Azure components for Dapr offer support for this authentication method. Over time, support will be expanded to all other Azure components for Dapr. You can track the progress of the work, component-by-component, on [this issue](https://github.com/dapr/components-contrib/issues/1103).
### About authentication with Azure AD
Azure AD is Azure's identity and access management (IAM) solution, which is used to authenticate and authorize users and services.
Azure AD is built on top of open standards such as OAuth 2.0, which allows services (applications) to obtain access tokens to make requests to Azure services, including Azure Storage, Azure Key Vault, Cosmos DB, etc. In Azure terminology, an application is also called a "Service Principal".
Many of the services listed above also support authentication using other systems, such as "master keys" or "shared keys". Although those are always valid methods to authenticate your application (and Dapr continues to support them, as explained in each component's reference page), using Azure AD when possible offers various benefits, including:
- The ability to leverage Managed Service Identities, which allow your application to authenticate with Azure AD, and obtain an access token to make requests to Azure services, without the need to use any credential. When your application is running on a supported Azure service (including, but not limited to, Azure VMs, Azure Kubernetes Service, Azure Web Apps, etc), an identity for your application can be assigned at the infrastructure level.
This way, your code does not have to deal with credentials of any kind, removing the challenge of safely managing credentials, allowing greater separation of concerns between development and operations teams, reducing the number of people with access to credentials, and simplifying operational aspects, especially when multiple environments are used.
- Using RBAC (Role-Based Access Control) with supported services (such as Azure Storage and Cosmos DB), permissions given to an application can be fine-tuned, for example restricting access to a subset of data or making access read-only.
- Better auditing for access.
- Ability to authenticate using certificates (optional).
## Credentials metadata fields
To authenticate with Azure AD, you will need to add the following credentials as values in the metadata for your Dapr component (read the next section for how to create them). There are multiple options depending on the way you have chosen to pass the credentials to your Dapr service.
**Authenticating using client credentials:**
| Field | Required | Details | Example |
|---------------------|----------|--------------------------------------|----------------------------------------------|
| `azureTenantId` | Y | ID of the Azure AD tenant | `"cd4b2887-304c-47e1-b4d5-65447fdd542b"` |
| `azureClientId` | Y | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
| `azureClientSecret` | Y | Client secret (application password) | `"Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E"` |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
**Authenticating using a PFX certificate:**
| Field | Required | Details | Example |
|--------|--------|--------|--------|
| `azureTenantId` | Y | ID of the Azure AD tenant | `"cd4b2887-304c-47e1-b4d5-65447fdd542b"` |
| `azureClientId` | Y | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
| `azureCertificate` | One of `azureCertificate` and `azureCertificateFile` | Certificate and private key (in PFX/PKCS#12 format) | `"-----BEGIN PRIVATE KEY-----\n MIIEvgI... \n -----END PRIVATE KEY----- \n -----BEGIN CERTIFICATE----- \n MIICoTC... \n -----END CERTIFICATE-----` |
| `azureCertificateFile` | One of `azureCertificate` and `azureCertificateFile` | Path to the PFX/PKCS#12 file containing the certificate and private key | `"/path/to/file.pem"` |
| `azureCertificatePassword` | N | Password for the certificate if encrypted | `"password"` |
When running on Kubernetes, you can also use references to Kubernetes secrets for any or all of the values above.
**Authenticating with Managed Service Identities (MSI):**
| Field | Required | Details | Example |
|-----------------|----------|----------------------------|------------------------------------------|
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` |
When using MSI, you are not required to specify any value, although you may optionally pass `azureClientId` if needed.
### Aliases
For backwards-compatibility reasons, the following values in the metadata are supported as aliases, although their use is discouraged.
| Metadata key | Aliases (supported but deprecated) |
|----------------------------|------------------------------------|
| `azureTenantId` | `spnTenantId`, `tenantId` |
| `azureClientId` | `spnClientId`, `clientId` |
| `azureClientSecret` | `spnClientSecret`, `clientSecret` |
| `azureCertificate` | `spnCertificate` |
| `azureCertificateFile` | `spnCertificateFile` |
| `azureCertificatePassword` | `spnCertificatePassword` |
## Generating a new Azure AD application and Service Principal
To start, create a new Azure AD application, which will also be used as Service Principal.
Prerequisites:
- [Azure Subscription](https://azure.microsoft.com/free/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- OpenSSL (included by default on all Linux and macOS systems, as well as on WSL)
- The scripts below are optimized for a bash or zsh shell
> If you haven't already, start by logging in to Azure using the Azure CLI:
>
> ```sh
> # Log in to Azure
> az login
> # Set your default subscription
> az account set -s [your subscription id]
> ```
### Creating an Azure AD application
First, create the Azure AD application with:
```sh
# Friendly name for the application / Service Principal
APP_NAME="dapr-application"
# Create the app
APP_ID=$(az ad app create \
--display-name "${APP_NAME}" \
--available-to-other-tenants false \
--oauth2-allow-implicit-flow false \
| jq -r .appId)
```
{{< tabs "Client secret" "Certificate">}}
{{% codetab %}}
To create a **client secret**, run the command below. This generates a random 40-character password based on the base64 charset and makes it valid for 2 years, after which it will need to be rotated:
```sh
az ad app credential reset \
--id "${APP_ID}" \
--years 2 \
--password $(openssl rand -base64 30)
```
The output of the command above will be similar to this:
```json
{
  "appId": "c7dd251f-811f-4ba2-a905-acd4d3f8f08b",
  "name": "c7dd251f-811f-4ba2-a905-acd4d3f8f08b",
  "password": "Ecy3XG7zVZK3/vl/a2NSB+a1zXLa8RnMum/IgD0E",
  "tenant": "cd4b2887-304c-47e1-b4d5-65447fdd542b"
}
```
Take note of the values above, which you'll need to use in your Dapr components' metadata, to allow Dapr to authenticate with Azure:
- `appId` is the value for `azureClientId`
- `password` is the value for `azureClientSecret` (this was randomly generated)
- `tenant` is the value for `azureTenantId`
{{% /codetab %}}
{{% codetab %}}
If you'd rather use a **PFX (PKCS#12) certificate**, run this command which will create a self-signed certificate:
```sh
az ad app credential reset \
--id "${APP_ID}" \
--create-cert
```
> Note: self-signed certificates are recommended for development only. For production, you should use certificates signed by a CA and imported with the `--cert` flag.
The output of the command above should look like:
```json
{
  "appId": "c7dd251f-811f-4ba2-a905-acd4d3f8f08b",
  "fileWithCertAndPrivateKey": "/Users/alessandro/tmpgtdgibk4.pem",
  "name": "c7dd251f-811f-4ba2-a905-acd4d3f8f08b",
  "password": null,
  "tenant": "cd4b2887-304c-47e1-b4d5-65447fdd542b"
}
```
Take note of the values above, which you'll need to use in your Dapr components' metadata:
- `appId` is the value for `azureClientId`
- `tenant` is the value for `azureTenantId`
- The self-signed PFX certificate and private key are written in the file at the path specified in `fileWithCertAndPrivateKey`.
Use the contents of that file as `azureCertificate` (or write it to a file on the server and use `azureCertificateFile`)
> While the generated file has the `.pem` extension, it contains a certificate and private key encoded as PFX (PKCS#12).
{{% /codetab %}}
{{< /tabs >}}
### Creating a Service Principal
Once you have created an Azure AD application, create a Service Principal for that application, which will allow you to grant it access to Azure resources. Run:
```sh
SERVICE_PRINCIPAL_ID=$(az ad sp create \
--id "${APP_ID}" \
| jq -r .objectId)
echo "Service Principal ID: ${SERVICE_PRINCIPAL_ID}"
```
The output will be similar to:
```text
Service Principal ID: 1d0ccf05-5427-4b5e-8eb4-005ac5f9f163
```
Note that the value above is the ID of the **Service Principal**, which is different from the ID of the application in Azure AD (the client ID)! The former is defined within an Azure tenant and is used to grant access to Azure resources to an application. The client ID instead is used by your application to authenticate. To sum things up:
- You'll use the client ID in Dapr manifests to configure authentication with Azure services
- You'll use the Service Principal ID to grant permissions to an application to access Azure resources
Keep in mind that the Service Principal that was just created does not have access to any Azure resource by default. Access will need to be granted to each resource as needed, as documented in the docs for the components.
> Note: this step is different from the [official documentation](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) as the short-hand commands included there create a Service Principal that has broad read-write access to all Azure resources in your subscription.
> Not only would that grant the Service Principal more access than you are likely to need, but it also applies only to the Azure management plane (Azure Resource Manager, or ARM), which is irrelevant for Dapr anyway (all Azure components are designed to interact with the data plane of various services, not ARM).
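As an illustration (the role and scope below are assumptions for this sketch, not part of the setup above), a narrowly-scoped grant could look like:

```bash
# Grant the Service Principal read-only access to blobs in one storage account.
# Role name and scope are illustrative; pick the narrowest role your component needs.
az role assignment create \
  --assignee "${SERVICE_PRINCIPAL_ID}" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```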
### Example usage in a Dapr component
In this example, you will set up an Azure Key Vault secret store component that uses Azure AD to authenticate.
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the details from the above setup process:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureClientSecret
    value: "[your_client_secret]"
```
If you want to use a **certificate** saved on the local disk, instead, use:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureCertificateFile
    value: "[pfx_certificate_file_fully_qualified_local_path]"
```
{{% /codetab %}}
{{% codetab %}}
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file.
To use a **client secret**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
```
- `[your_client_secret]` is the application's client secret as generated above
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
2. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureClientSecret
    secretKeyRef:
      name: "[your_k8s_secret_name]"
      key: "[your_k8s_secret_key]"
auth:
  secretStore: kubernetes
```
3. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
To use a **certificate**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
```
- `[pfx_certificate_file_fully_qualified_local_path]` is the path to the PFX file you obtained earlier
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
2. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault
  namespace: default
spec:
  type: secretstores.azure.keyvault
  version: v1
  metadata:
  - name: vaultName
    value: "[your_keyvault_name]"
  - name: azureTenantId
    value: "[your_tenant_id]"
  - name: azureClientId
    value: "[your_client_id]"
  - name: azureCertificate
    secretKeyRef:
      name: "[your_k8s_secret_name]"
      key: "[your_k8s_secret_key]"
auth:
  secretStore: kubernetes
```
3. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## Using Managed Service Identities
Using MSI, authentication happens automatically by virtue of your application running on top of an Azure service that has an assigned identity. For example, when you create an Azure VM or an Azure Kubernetes Service cluster and choose to enable a managed identity for that, an Azure AD application is created for you and automatically assigned to the service. Your Dapr services can then leverage that identity to authenticate with Azure AD, transparently and without you having to specify any credential.
To get started with managed identities, first you need to assign an identity to a new or existing Azure resource. The instructions depend on the service in use. Below are links to the official documentation:
- [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/azure/aks/use-managed-identity)
- [Azure App Service](https://docs.microsoft.com/azure/app-service/overview-managed-identity) (including Azure Web Apps and Azure Functions)
- [Azure Virtual Machines (VM)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm)
- [Azure Virtual Machines Scale Sets (VMSS)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss)
- [Azure Container Instance (ACI)](https://docs.microsoft.com/azure/container-instances/container-instances-managed-identity)
Other Azure application services may offer support for MSI; please check the documentation for those services to understand how to configure them.
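For example, a system-assigned identity can be enabled on an existing AKS cluster with a single command (shown as an illustration; see the AKS link above for the full steps):

```bash
# Enable a system-assigned managed identity on an existing AKS cluster.
az aks update --resource-group <rg> --name <cluster-name> --enable-managed-identity
```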
After assigning a managed identity to your Azure resource, you will have credentials such as:
```json
{
  "principalId": "<object-id>",
  "tenantId": "<tenant-id>",
  "type": "SystemAssigned",
  "userAssignedIdentities": null
}
```
From the list above, take note of **`principalId`**, which is the ID of the Service Principal that was created. You'll need it to grant your Service Principal access to Azure resources.
## Support for other Azure environments
By default, Dapr components are configured to interact with Azure resources in the "public cloud". If your application is deployed to another cloud, such as Azure China, Azure Government, or Azure Germany, you can enable that for supported components by setting the `azureEnvironment` metadata property to one of the supported values:
- Azure public cloud (default): `"AZUREPUBLICCLOUD"`
- Azure China: `"AZURECHINACLOUD"`
- Azure Government: `"AZUREUSGOVERNMENTCLOUD"`
- Azure Germany: `"AZUREGERMANCLOUD"`
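As an illustrative sketch, reusing the `azurekeyvault.yaml` example from earlier, targeting Azure Government is one extra metadata entry:

```bash
# Append the azureEnvironment entry to the component's metadata section.
cat >> azurekeyvault.yaml <<'EOF'
  - name: azureEnvironment
    value: "AZUREUSGOVERNMENTCLOUD"
EOF
```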
## References
- [Azure AD app credential: Azure CLI reference](https://docs.microsoft.com/cli/azure/ad/app/credential)
- [Azure Managed Service Identity (MSI) overview](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview)
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})

View File

@ -0,0 +1,37 @@
---
type: docs
weight: 10000
title: "Use the Dapr CLI in a GitHub Actions workflow"
linkTitle: "GitHub Actions"
description: "Learn how to add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments."
---
Dapr can be integrated with GitHub Actions via the [Dapr tool installer](https://github.com/marketplace/actions/dapr-tool-installer) available in the GitHub Marketplace. This installer adds the Dapr CLI to your workflow, allowing you to deploy, manage, and upgrade Dapr across your environments.
## Overview
The `dapr/setup-dapr` action will install the specified version of the Dapr CLI on macOS, Linux and Windows runners. Once installed, you can run any [Dapr CLI command]({{< ref cli >}}) to manage your Dapr environments.
## Example
```yaml
- name: Install Dapr
  uses: dapr/setup-dapr@v1
  with:
    version: '{{% dapr-latest-version long="true" %}}'

- name: Initialize Dapr
  shell: pwsh
  run: |
    # Get the credentials to K8s to use with dapr init
    az aks get-credentials --resource-group ${{ env.RG_NAME }} --name "${{ steps.azure-deployment.outputs.aksName }}"

    # Initialize Dapr
    # Group the Dapr init logs so these lines can be collapsed.
    Write-Output "::group::Initialize Dapr"
    dapr init --kubernetes --wait --runtime-version ${{ env.DAPR_VERSION }}
    Write-Output "::endgroup::"

    dapr status --kubernetes
  working-directory: ./twitter-sentiment-processor/demos/demo3
```

View File

@ -80,28 +80,28 @@ Prerequisites:
1. Set up the environment variables containing the Azure Storage Account credentials:
{{< tabs Windows "macOS/Linux" >}}
{{% codetab %}}
```bash
set STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
set STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}
{{% codetab %}}
```bash
export STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
export STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}
{{< /tabs >}}
1. Move to the workflows directory and run the sample runtime:
```bash
cd src/Dapr.Workflows
dapr run --app-id workflows --protocol grpc --port 3500 --app-port 50003 -- dotnet run --workflows-path ../../samples
```
@ -109,8 +109,8 @@ Prerequisites:
```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1
{"value":"Hello from Logic App workflow running with Dapr!"}
{"value":"Hello from Logic App workflow running with Dapr!"}
```
### Kubernetes
@ -153,7 +153,7 @@ Prerequisites:
```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1
{"value":"Hello from Logic App workflow running with Dapr!"}
```
@ -186,35 +186,35 @@ Prerequisites:
1. Next, apply the Dapr component:
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
Place the binding yaml file above in a `components` directory at the root of your application.
{{% /codetab %}}
{{% codetab %}}
```bash
kubectl apply -f my_binding.yaml
```
{{% /codetab %}}
{{< /tabs >}}
1. Once an event is sent to the bindings component, check the logs of Dapr Workflows to see the output.
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
In standalone mode, the output will be printed to the local terminal.
{{% /codetab %}}
{{% codetab %}}
On Kubernetes, run the following command:
```bash
kubectl logs -l app=dapr-workflows-host -c host
```
{{% /codetab %}}
{{< /tabs >}}
## Example

View File

@ -125,6 +125,9 @@ spec:
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (e.g. Azure Redis Cache)
# - name: enableTLS
# value: true
```
This example uses the Kubernetes secret that was created when setting up a cluster with the above instructions.
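If you want to confirm the secret is present (assuming it is named `redis` with the key `redis-password`, as referenced above), a quick check is:

```bash
# Verify the Kubernetes secret referenced by the component exists and decode it.
kubectl get secret redis -o jsonpath='{.data.redis-password}' | base64 --decode
```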
@ -153,6 +156,9 @@ spec:
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (e.g. Azure Redis Cache)
# - name: enableTLS
# value: true
```
This example uses the Kubernetes secret that was created when setting up a cluster with the above instructions.
@ -179,6 +185,9 @@ spec:
value: <HOST>
- name: redisPassword
value: <PASSWORD>
# uncomment below for connecting to redis cache instances over TLS (e.g. Azure Redis Cache)
# - name: enableTLS
# value: true
```
```yaml
@ -195,6 +204,9 @@ spec:
value: <HOST>
- name: redisPassword
value: <PASSWORD>
# uncomment below for connecting to redis cache instances over TLS (e.g. Azure Redis Cache)
# - name: enableTLS
# value: true
```
## Apply the configuration

View File

@ -7,7 +7,7 @@ aliases:
- /getting-started/install-dapr/
---
Now that you have the [Dapr CLI installed]({{<ref install-dapr-cli.md>}}), it's time to initialize Dapr on your local machine using the CLI.
Dapr runs as a sidecar alongside your application, and in self-hosted mode this means it is a process on your local machine. Therefore, initializing Dapr includes fetching the Dapr sidecar binaries and installing them locally.
@ -29,11 +29,11 @@ This recommended development environment requires [Docker](https://docs.docker.c
{{% codetab %}}
If you run your Docker commands with sudo, or the install path is `/usr/local/bin` (default install path), you will need to use `sudo` below.
{{% /codetab %}}
{{% codetab %}}
Make sure that you run Command Prompt as administrator (right click, run as administrator)
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Run the init CLI command
@ -52,8 +52,8 @@ dapr --version
Output should look like this:
```
CLI version: {{% dapr-latest-version long="true" %}}
Runtime version: {{% dapr-latest-version long="true" %}}
```
### Step 4: Verify containers are running

View File

@ -17,11 +17,12 @@ The [Dapr Quickstarts](https://github.com/dapr/quickstarts/tree/v1.0.0) are a co
| Quickstart | Description |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Hello World](https://github.com/dapr/quickstarts/tree/v1.4.0/hello-world) | Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
| [Hello Kubernetes](https://github.com/dapr/quickstarts/tree/v1.4.0/hello-kubernetes) | Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/v1.4.0/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
| [Pub/Sub](https://github.com/dapr/quickstarts/tree/v1.4.0/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
| [Bindings](https://github.com/dapr/quickstarts/tree/v1.4.0/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
| [Middleware](https://github.com/dapr/quickstarts/tree/v1.4.0/middleware) | Demonstrates use of Dapr middleware to enable OAuth 2.0 authorization. |
| [Observability](https://github.com/dapr/quickstarts/tree/v1.4.0/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
| [Secret Store](https://github.com/dapr/quickstarts/tree/v1.4.0/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |

View File

@ -20,7 +20,7 @@ Go to [this]({{< ref "howto-secrets.md" >}}) link to see all the secret stores s
## Referencing secrets
While you have the option to use plain text secrets (like MyPassword), as shown in the yaml below for the `value` of `redisPassword`, this is not recommended for production:
```yml
apiVersion: dapr.io/v1alpha1
@ -38,7 +38,9 @@ spec:
value: MyPassword
```
Instead, create the secret in your secret store and reference it in the component definition. There are two cases for this, shown below: the "Secret contains an embedded key" case and the "Secret is a string" case.
The "Secret contains an embedded key" case applies when there is a key embedded within the secret, i.e. the secret is **not** an entire connection string. This is shown in the following component definition yaml.
```yml
apiVersion: dapr.io/v1alpha1
@ -62,7 +64,30 @@ auth:
`SECRET_STORE_NAME` is the name of the configured [secret store component]({{< ref supported-secret-stores >}}). When running in Kubernetes and using a Kubernetes secret store, the field `auth.SecretStore` defaults to `kubernetes` and can be left empty.
The above component definition tells Dapr to extract a secret named `redis-secret` from the defined `secretStore` and assign the value associated with the `redis-password` key embedded in the secret to the `redisPassword` field in the component. One use of this case is when your code is constructing a connection string, for example putting together a URL, a secret, plus other information as necessary, into a string.
On the other hand, the "Secret is a string" case applies when there is **not** a key embedded in the secret; rather, the secret is just a string. In that case, the `secretKeyRef` section uses the same value for both the secret `name` and the secret `key`. This applies when the secret itself is an entire connection string with no embedded key whose value needs to be extracted: typically a connection string already consists of connection information plus some sort of secret that allows connection, so no separate "secret" is required. This case is shown in the component definition yaml below.
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: servicec-inputq-azkvsecret-asbqueue
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: connectionString
    secretKeyRef:
      name: asbNsConnString
      key: asbNsConnString
  - name: queueName
    value: servicec-inputq
auth:
  secretStore: <SECRET_STORE_NAME>
```
The above "Secret is a string" yaml tells Dapr to extract a connection string named `asbNsConnString` from the defined `secretStore` and assign that value to the `connectionString` field in the component. Because the "secret" from the `secretStore` is a plain string with no embedded key, the secret `name` and secret `key` must be identical.
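For example, assuming the secret store is Azure Key Vault (an assumption for this sketch; any supported secret store works the same way), the secret could be created with:

```bash
# Illustrative: store the entire connection string as a single secret whose
# name matches the key used in secretKeyRef. The value is a placeholder.
az keyvault secret set \
  --vault-name "[your_keyvault_name]" \
  --name "asbNsConnString" \
  --value "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>"
```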
## Example

View File

@ -9,7 +9,7 @@ description: "Control how many requests and events will invoke your application
A common scenario in distributed computing is to only allow for a given number of requests to execute concurrently.
Using Dapr, you can control how many requests and events will invoke your application simultaneously.
*Note that this rate limiting is guaranteed for every event that's coming from Dapr, meaning Pub/Sub events, direct invocation from other services, bindings events etc. Dapr can't enforce the concurrency policy on requests that are coming to your app externally.*
*Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** middleware. However, there is an important difference between the two approaches. The rate limit middleware is time bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point in time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}).*
@ -61,4 +61,4 @@ To set app-max-concurrency with the Dapr CLI for running on your local dev machi
dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py
```
The above examples will effectively turn your app into a single concurrent service.
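On Kubernetes, the equivalent setting is the `dapr.io/app-max-concurrency` annotation; as a hedged sketch (the deployment name `nodeapp` is illustrative):

```bash
# Set app-max-concurrency to 1 on a deployed app via the sidecar annotation.
kubectl patch deployment nodeapp -p \
  '{"spec":{"template":{"metadata":{"annotations":{"dapr.io/app-max-concurrency":"1"}}}}}'
```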

View File

@ -263,9 +263,9 @@ The following steps run the Sentry service locally with mTLS enabled, set up nec
{{% codetab %}}
```powershell
$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"
```
@ -300,9 +300,9 @@ The following steps run the Sentry service locally with mTLS enabled, set up nec
{{% codetab %}}
```powershell
$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"
```
{{% /codetab %}}
@ -356,4 +356,4 @@ spec:
containers:
- name: python
image: dapriosamples/hello-k8s-python:edge
```

View File

@ -122,7 +122,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--wait
@ -132,7 +132,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set global.ha.enabled=true \

View File

@ -11,15 +11,15 @@ description: "Follow these steps to upgrade Dapr on Kubernetes and ensure a smoo
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
- [Helm 3](https://github.com/helm/helm/releases) (if using Helm)
## Upgrade existing cluster to {{% dapr-latest-version long="true" %}}
There are two ways to upgrade the Dapr control plane on a Kubernetes cluster using either the Dapr CLI or Helm.
### Dapr CLI
The example below shows how to upgrade to version {{% dapr-latest-version long="true" %}}:
```bash
dapr upgrade -k --runtime-version={{% dapr-latest-version long="true" %}}
```
You can provide all the available Helm chart configurations using the Dapr CLI.
@ -43,7 +43,7 @@ To resolve this issue please run the follow command to upgrade the CustomResourc
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/5a15b3e0f093d2d0938b12f144c7047474a290fe/charts/dapr/crds/configuration.yaml
```
Then proceed with the `dapr upgrade --runtime-version {{% dapr-latest-version long="true" %}} -k` command as above.
### Helm

View File

@ -1,6 +1,6 @@
---
type: docs
title: "Run Dapr in self-hosted mode"
linkTitle: "Self-Hosted"
weight: 1000
description: "How to get Dapr up and running in your local environment"

View File

@ -20,16 +20,16 @@ The Dapr CLI provides an option to initialize Dapr using slim init, without the
dapr init --slim
```
In this mode two different binaries are installed: `daprd` and `placement`. The `placement` binary is needed to enable [actors]({{< ref "actors-overview.md" >}}) in a Dapr self-hosted installation.
In this mode no default components such as Redis are installed for state management or pub/sub. This means that, aside from [Service Invocation]({{< ref "service-invocation-overview.md" >}}), no other building block functionality is available on install out of the box. Users are free to set up their own environment and custom components. Furthermore, actor-based service invocation is possible if a state store is configured as explained in the following sections.
## Service invocation
See [this sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) for an example on how to perform service invocation in this mode.
## Enabling state management or pub/sub
See configuring Redis in self-hosted mode [without docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.
## Enabling actors
@ -51,7 +51,7 @@ INFO[0001] leader is established. instance=Nicoletaz-L10.
```
From here on you can follow the sample created for the [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors), [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor) or [dotnet-sdk]({{< ref "dotnet-actors-howto.md" >}}) for running an application with Actors enabled.
Update the state store configuration files so that the Redis host and password match your setup. Additionally, to enable it as an actor state store, add the metadata piece (as sketched below) similar to the [sample Java Redis component](https://github.com/dapr/java-sdk/blob/master/examples/components/state/redis.yaml) definition.
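A minimal sketch of that addition (the file path is illustrative):

```bash
# Hedged sketch: mark the Redis state store as the actor state store by
# appending this metadata entry to your state store component file.
cat >> components/statestore.yaml <<'EOF'
  - name: actorStateStore
    value: "true"
EOF
```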

View File

@ -8,7 +8,7 @@ description: "Overview of how to get Dapr running on a Windows/Linux/MacOS machi
## Overview
Dapr can be configured to run in self-hosted mode on your local developer machine or on production VMs. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
## Initialization
@ -17,13 +17,13 @@ Dapr can be initialized [with Docker]({{< ref self-hosted-with-docker.md >}}) (d
- A Zipkin container for diagnostics and tracing.
- A default Dapr configuration and components installed in `$HOME/.dapr/` (Mac/Linux) or `%USERPROFILE%\.dapr\` (Windows).
The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is not launched as a container and is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview]({{< ref "actors-overview.md" >}}).
<img src="/images/overview-standalone-docker.png" width=1000 alt="Diagram of Dapr in self-hosted Docker mode" />
## Launching applications with Dapr
You can use the [`dapr run` CLI command]({{< ref dapr-run.md >}}) to launch a Dapr sidecar process along with your application. Additional arguments and flags can be found [here]({{< ref arguments-annotations-overview.md >}}).
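For example (the app id, port, and command are illustrative):

```bash
# Launch a Dapr sidecar named "myapp" alongside a Node application listening on port 3000.
dapr run --app-id myapp --app-port 3000 -- node app.js
```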
## Name resolution

View File

@ -12,7 +12,7 @@ description: "Follow these steps to upgrade Dapr in self-hosted mode and ensure
{{% alert title="Note" color="warning" %}}
This will remove the default `$HOME/.dapr` directory, binaries and all containers (dapr_redis, dapr_placement and dapr_zipkin). Linux users need to run with `sudo` if the Docker command requires it.
{{% /alert %}}
```bash
dapr uninstall --all
```
@ -25,11 +25,11 @@ description: "Follow these steps to upgrade Dapr in self-hosted mode and ensure
dapr init
```
1. Ensure you are using the latest version of Dapr (v{{% dapr-latest-version long="true" %}}) with:
```bash
$ dapr --version
CLI version: {{% dapr-latest-version short="true" %}}
Runtime version: {{% dapr-latest-version short="true" %}}
```

View File

@ -7,16 +7,13 @@ description: "Set up Jaeger for distributed tracing"
type: docs
---
Dapr supports the Zipkin protocol. Since Jaeger is compatible with Zipkin, the Zipkin protocol can be used to communicate with Jaeger.
## Configure self hosted mode
### Setup
The simplest way to start Jaeger is to use the pre-built all-in-one Jaeger image published to DockerHub:
```bash
docker run -d --name jaeger \
@ -55,15 +52,19 @@ dapr run --app-id mynode --app-port 3000 node app.js --config config.yaml
```
### Viewing Traces
To view traces, go to http://localhost:16686 in your browser to see the Jaeger UI.
## Configure Kubernetes
The following steps show you how to configure Dapr to send distributed tracing data to Jaeger running as a container in your Kubernetes cluster, and how to view the traces.
### Setup
First, create the following YAML file, named `jaeger-operator.yaml`, to install Jaeger.
#### Development and test
By default, the allInOne Jaeger image uses memory as the backend storage; this is not recommended for a production environment.
```yaml
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
@ -80,7 +81,54 @@ spec:
base-path: /jaeger
```
#### Production
In production, Jaeger uses Elasticsearch as the backend storage. You can create a secret in your Kubernetes cluster to access the Elasticsearch server with access control. See [Configuring and Deploying Jaeger](https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhbjaeger-deploying.html)
```shell
kubectl create secret generic jaeger-secret --from-literal=ES_PASSWORD='xxx' --from-literal=ES_USERNAME='xxx' -n ${NAMESPACE}
```
```yaml
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
  name: jaeger
spec:
  strategy: production
  query:
    options:
      log-level: info
      query:
        base-path: /jaeger
  collector:
    maxReplicas: 5
    resources:
      limits:
        cpu: 500m
        memory: 516Mi
  storage:
    type: elasticsearch
    esIndexCleaner:
      enabled: false                                ## turn the job deployment on and off
      numberOfDays: 7                               ## number of days to wait before deleting a record
      schedule: "55 23 * * *"                       ## cron expression for it to run
      image: jaegertracing/jaeger-es-index-cleaner  ## image of the job
    secretName: jaeger-secret
    options:
      es:
        server-urls: http://elasticsearch:9200
```
The screenshots below show tracing data stored in Elasticsearch and displayed in Grafana:
![jaeger-storage-es](/images/jaeger_storage_elasticsearch.png)
![grafana](/images/jaeger_grafana.png)
Now, use the above YAML file to install Jaeger
```bash
# Install Jaeger
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Performance and scalability statistics of Dapr"
linkTitle: "Performance and scalability"
weight: 700
description: "Benchmarks and guidelines for Dapr building blocks"
---

View File

@ -217,6 +217,32 @@ spec:
enabled: true
```
In addition to the Dapr configuration, you will also need to provide the TLS certificates to each Dapr sidecar instance. You can do so by setting the following environment variables before running the Dapr instance:
{{< tabs "Linux/MacOS" Windows >}}
{{% codetab %}}
```bash
export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt`
export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt`
export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key`
export NAMESPACE=default
```
{{% /codetab %}}
{{% codetab %}}
```powershell
$env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt)
$env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt)
$env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key)
$env:NAMESPACE="default"
```
{{% /codetab %}}
{{< /tabs >}}
If using the Dapr CLI, point Dapr to the config file above to run the Dapr instance with mTLS enabled:
```

View File

@ -9,8 +9,10 @@ Preview features in Dapr are considered experimental when they are first release
## Current preview features
| Feature | Description | Setting | Documentation |
| ---------- |-------------|---------|---------------|
| **Actor reentrancy** | Enables actors to be called multiple times in the same call chain allowing call backs between actors. | `Actor.Reentrancy` | [Actor reentrancy]({{<ref actor-reentrancy>}}) |
| **Partition actor reminders** | Allows actor reminders to be partitioned across multiple keys in the underlying statestore in order to improve scale and performance. | `Actor.TypeMetadata` | [How-To: Partition Actor Reminders]({{< ref "howto-actors.md#partitioning-reminders" >}}) |
| **gRPC proxying** | Enables calling endpoints using service invocation on gRPC services through Dapr via gRPC proxying, without requiring the use of Dapr SDKs. | `proxy.grpc` | [How-To: Invoke services using gRPC]({{<ref howto-invoke-services-grpc>}}) |
| **State store encryption** | Enables automatic client side encryption for state stores | `State.Encryption` | [How-To: Encrypt application state]({{<ref howto-encrypt-state>}}) |
| **Pub/Sub routing** | Allow the use of expressions to route cloud events to different URIs/paths and event handlers in your application. | `PubSub.Routing` | [How-To: Publish a message and subscribe to a topic]({{<ref howto-route-messages>}}) |
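Preview features are opted into through the `features` section of a Configuration resource. A minimal sketch, assuming you want to enable Pub/Sub routing (the file and resource names are illustrative):

```bash
# Write a Configuration that opts into a preview feature, then reference it
# with `dapr run --config previewConfig.yaml ...` or apply it on Kubernetes.
cat > previewConfig.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig
spec:
  features:
    - name: PubSub.Routing
      enabled: true
EOF
```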

View File

@ -31,15 +31,17 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|--------------------|:--------:|:--------|---------|---------|---------|
| Feb 17th 2021 | 1.0.0</br> | 1.0.0 | Java 1.0.0 </br>Go 1.0.0 </br>PHP 1.0.0 </br>Python 1.0.0 </br>.NET 1.0.0 | 0.6.0 | Unsupported |
| Mar 4th 2021 | 1.0.1</br> | 1.0.1 | Java 1.0.2 </br>Go 1.0.0 </br>PHP 1.0.0 </br>Python 1.0.0 </br>.NET 1.0.0 | 0.6.0 | Unsupported |
| Apr 1st 2021 | 1.1.0</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Unsupported |
| Apr 6th 2021 | 1.1.1</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Unsupported |
| Apr 16th 2021 | 1.1.2</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Unsupported |
| May 26th 2021 | 1.2.0</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Unsupported |
| Jun 16th 2021 | 1.2.1</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Unsupported |
| Jun 16th 2021 | 1.2.2</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Unsupported |
| Jul 26th 2021 | 1.3</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Supported |
| Sep 14th 2021 | 1.3.1</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Supported |
| Sep 15th 2021 | 1.4</br> | 1.4.0 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.2.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Supported (current) |
## Upgrade paths
After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example, an upgrade from v1.0 to v1.2 may need to pass through v1.1, as sketched below.
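As a sketch (the versions here are illustrative), on Kubernetes this means running the same `dapr upgrade` command shown in the upgrade guide once per intermediate release:

```bash
# Upgrade from 1.0.x to 1.2.x by passing through 1.1.2 first.
dapr upgrade -k --runtime-version=1.1.2
dapr upgrade -k --runtime-version=1.2.2
```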
@ -53,9 +55,18 @@ General guidance on upgrading can be found for [self hosted mode]({{<ref self-ho
| 1.0.0 or 1.0.1 | N/A | 1.1.2 |
| | 1.1.2 | 1.2.2 |
| | 1.2.2 | 1.3.0 |
| | 1.3.0 | 1.3.1 |
| | 1.3.1 | 1.4.0 |
| 1.1.0 to 1.1.2 | N/A | 1.2.2 |
| | 1.2.2 | 1.3.0 |
| | 1.3.0 | 1.3.1 |
| | 1.3.1 | 1.4.0 |
| 1.2.0 to 1.2.2 | N/A | 1.3.0 |
| | 1.3.0 | 1.3.1 |
| | 1.3.1 | 1.4.0 |
| 1.3.0 | N/A | 1.3.1 |
| | 1.3.1 | 1.4.0 |
| 1.3.1 | N/A | 1.4.0 |
## Feature and deprecations
There is a process for announcing feature deprecations. Deprecations are applied two (2) releases after the release in which they were announced. For example, Feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.

View File

@ -223,6 +223,6 @@ In order for mDNS to function properly, ensure `Microsoft Content Filter` is ina
- Type `mdatp system-extension network-filter disable` and hit enter.
- Enter your account password.
Microsoft Content Filter is disabled when the output is "Success".
> Some organizations will re-enable the filter from time to time. If you repeatedly encounter app-id values missing, first check to see if the filter has been re-enabled before doing more extensive troubleshooting.

View File

@ -10,35 +10,36 @@ aliases:
This table is meant to help users understand the equivalent options for running Dapr sidecars in different contexts--via the [CLI]({{< ref cli-overview.md >}}) directly, via daprd, or on [Kubernetes]({{< ref kubernetes-overview.md >}}) via annotations.
| daprd | Dapr CLI | CLI shorthand | Kubernetes annotations | Description
|----- | ------- | -----------| ----------| ------------ |
| `--allowed-origins` | not supported | | not supported | Allowed HTTP origins (default "*") |
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--app-ssl` | `--app-ssl` | | `dapr.io/app-ssl` | Sets the URI scheme of the app to https and attempts an SSL connection |
| `--components-path` | `--components-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration CRD to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | Increases the maximum request body size, in MB, for the Dapr HTTP and gRPC servers, to handle uploading of large files. Default is `4` MB |
| not supported | `--image` | | not supported |
| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
| `--enable-metrics` | not supported | | configuration spec | Enable Prometheus metrics (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | Enable profiling |
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime mode for Dapr (default "standalone") |
| `--placement-address` | `--placement-address` | | not supported | Addresses for Dapr Actor Placement servers |
| `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default "7777") |
| `--app-protocol` | `--app-protocol` | `-P` | `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http` |
| `--sentry-address` | `--sentry-address` | | not supported | Address for the Sentry CA service |
| `--version` | `--version` | `-v` | not supported | Prints the runtime version |
| not supported | not supported | | `dapr.io/enabled` | Setting this parameter to true injects the Dapr sidecar into the pod |
| not supported | not supported | | `dapr.io/api-token-secret` | Tells Dapr which Kubernetes secret to use for token based API authentication. By default this is not set |
| `--dapr-listen-addresses` | not supported | | `dapr.io/sidecar-listen-addresses` | Comma separated list of IP addresses that sidecar will listen to. Defaults to all in standalone mode. Defaults to `[::1],127.0.0.1` in Kubernetes. To listen to all IPv4 addresses, use `0.0.0.0`. To listen to all IPv6 addresses, use `[::]`.
| not supported | not supported | | `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| not supported | not supported | | `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| not supported | not supported | | `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
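As an illustration of how the CLI flags above map onto a running sidecar, here is a hypothetical `dapr run` invocation (the app ID, port, and app command are placeholder values):

```bash
# Start an app with an explicit app ID, app port and protocol: the
# self-hosted equivalent of the dapr.io/app-id, dapr.io/app-port and
# dapr.io/app-protocol annotations on Kubernetes.
dapr run --app-id myapp --app-port 3000 --app-protocol http -- node app.js
```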

View File

@ -52,7 +52,7 @@ Use "dapr [command] --help" for more information about a command.
## Command Reference
You can learn more about each Dapr command from the links below.
- [`dapr build-info`]({{< ref dapr-build-info.md >}})
- [`dapr completion`]({{< ref dapr-completion.md >}})

View File

@ -23,6 +23,7 @@ dapr dashboard [flags]
| Name | Environment Variable | Default | Description |
|------|----------------------|---------|-------------|
| `--address`, `-a` | | `localhost` | Address to listen on. Only accepts IP address or localhost as a value |
| `--help`, `-h` | | | Prints this help message |
| `--kubernetes`, `-k` | | `false` | Opens Dapr dashboard in local browser via local proxy to Kubernetes cluster |
| `--namespace`, `-n` | | `dapr-system` | The namespace where Dapr dashboard is running |
@ -46,6 +47,11 @@ dapr dashboard -p 9999
dapr dashboard -k
```
### Port forward to dashboard service running in Kubernetes on all addresses on a specified port
```bash
dapr dashboard -k -p 9999 --address 0.0.0.0
```
### Port forward to dashboard service running in Kubernetes on a specified port
```bash
dapr dashboard -k -p 9999

View File

@ -26,7 +26,7 @@ dapr invoke [flags]
| `--help`, `-h` | | | Print this help message |
| `--method`, `-m` | | | The method to invoke |
| `--data`, `-d` | | | The JSON serialized data string (optional) |
| `--data-file`, `-f` | | | A file containing the JSON serialized data (optional) |
| `--verb`, `-v` | | `POST` | The HTTP verb to use |
## Examples
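A minimal invocation might look like the following (a sketch; `myapp` and `mymethod` are placeholder values):

```bash
# Invoke method "mymethod" on the app with ID "myapp",
# sending a small JSON payload with the default POST verb.
dapr invoke --app-id myapp --method mymethod --data '{"status": "completed"}'
```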

View File

@ -7,7 +7,13 @@ description: "Detailed information on the upgrade CLI command"
## Description
Upgrade or downgrade Dapr on supported hosting platforms.
{{% alert title="Warning" color="warning" %}}
Version steps should be done incrementally, including minor versions, as you upgrade or downgrade.
Prior to downgrading, confirm that components are backwards compatible and that application code does not utilize APIs that are not supported in previous versions of Dapr.
{{% /alert %}}
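For example, moving a Kubernetes cluster from 1.2 to 1.4 would be performed one minor version at a time (a sketch; substitute the exact patch versions you need):

```bash
# Step through each minor version rather than jumping straight to 1.4
dapr upgrade -k --runtime-version 1.3.0
dapr upgrade -k --runtime-version 1.4.0
```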
## Supported platforms
@ -23,8 +29,8 @@ dapr upgrade [flags]
| Name | Environment Variable | Default | Description
| --- | --- | --- | --- |
| `--help`, `-h` | | | Print this help message |
| `--kubernetes`, `-k` | | `false` | Upgrade Dapr in a Kubernetes cluster |
| `--runtime-version` | | `latest` | The version of the Dapr runtime to upgrade to, for example: `1.0.0` |
| `--kubernetes`, `-k` | | `false` | Upgrade/Downgrade Dapr in a Kubernetes cluster |
| `--runtime-version` | | `latest` | The version of the Dapr runtime to upgrade/downgrade to, for example: `1.0.0` |
| `--set` | | | Set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2) |
## Examples
@ -34,15 +40,16 @@ dapr upgrade [flags]
dapr upgrade -k
```
### Upgrade specified version of Dapr runtime in Kubernetes
### Upgrade or downgrade to a specified version of Dapr runtime in Kubernetes
```bash
dapr upgrade -k --runtime-version 1.2
```
### Upgrade specified version of Dapr runtime in Kubernetes with value set
### Upgrade or downgrade to a specified version of Dapr runtime in Kubernetes with value set
```bash
dapr upgrade -k --runtime-version 1.2 --set global.logAsJson=true
```
## Related links
- [Upgrade Dapr on a Kubernetes cluster]({{< ref kubernetes-upgrade.md >}})

View File

@ -50,6 +50,7 @@ Table captions:
|------|:----------------:|:-----------------:|--------| ------ |----------|
| [Alibaba Cloud DingTalk]({{< ref alicloud-dingtalk.md >}}) | ✅ | ✅ | Alpha | v1 | 1.2 |
| [Alibaba Cloud OSS]({{< ref alicloudoss.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [Alibaba Cloud Tablestore]({{< ref alicloudtablestore.md >}}) | | ✅ | Alpha | v1 | 1.4 |
### Amazon Web Services (AWS)
@ -57,6 +58,7 @@ Table captions:
|------|:----------------:|:-----------------:|--------| ------ |----------|
| [AWS DynamoDB]({{< ref dynamodb.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [AWS S3]({{< ref s3.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [AWS SES]({{< ref ses.md >}}) | | ✅ | Alpha | v1 | 1.4 |
| [AWS SNS]({{< ref sns.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
@ -82,7 +84,7 @@ Table captions:
### Zeebe (Camunda Cloud)
| Name | Input<br>Binding | Output<br>Binding | Status | Component version | Since |
|------|:----------------:|:-----------------:|--------| --------- | ---------- |
| [Zeebe Command]({{< ref zeebe-command.md >}}) | | ✅ | Alpha | v1 | 1.2 |
| [Zeebe Job Worker]({{< ref zeebe-jobworker.md >}}) | ✅ | | Alpha | v1 | 1.2 |

View File

@ -0,0 +1,143 @@
---
type: docs
title: "Alibaba Cloud Tablestore binding spec"
linkTitle: "Alibaba Cloud Tablestore"
description: "Detailed documentation on the Alibaba Tablestore binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/alicloudtablestore/"
---
## Component format
To setup an Alibaba Cloud Tablestore binding, create a component of type `bindings.alicloud.tablestore`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mytablestore
namespace: default
spec:
type: bindings.alicloud.tablestore
version: v1
metadata:
- name: endpoint
value: "[endpoint]"
- name: accessKeyID
value: "[key-id]"
- name: accessKey
value: "[access-key]"
- name: instanceName
value: "[instance]"
- name: tableName
value: "[table]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|---------------|----------|---------|---------|---------|
| `endpoint` | Y | Output | Alicloud Tablestore endpoint. | `https://tablestore-cn-hangzhou.aliyuncs.com` |
| `accessKeyID` | Y | Output | Access key ID credential. | |
| `accessKey` | Y | Output | Access key credential. | |
| `instanceName` | Y | Output | Name of the instance. | |
| `tableName` | Y | Output | Name of the table. | |
## Binding support
This component supports **output binding** with the following operations:
- `create`: [Create object](#create-object)
- `delete`: [Delete object](#delete-object)
- `list`: [List objects](#list-objects)
- `get`: [Get object](#get-object)
### Create object
To perform a create object operation, invoke the binding with a `POST` method and the following JSON body:
```json
{
"operation": "create",
"data": "YOUR_CONTENT",
"metadata": {
"primaryKeys": "pk1"
}
}
```
{{% alert title="Note" color="primary" %}}
Note the `metadata.primaryKeys` field is mandatory.
{{% /alert %}}
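For instance, using the component defined above (named `mytablestore`), the create request could be sent to the Dapr bindings endpoint as follows (a sketch assuming the default Dapr HTTP port `3500`):

```bash
# Invoke the output binding's create operation via the Dapr HTTP API
curl -X POST http://localhost:3500/v1.0/bindings/mytablestore \
  -H "Content-Type: application/json" \
  -d '{ "operation": "create", "data": "YOUR_CONTENT", "metadata": { "primaryKeys": "pk1" } }'
```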
### Delete object
To perform a delete object operation, invoke the binding with a `POST` method and the following JSON body:
```json
{
"operation": "delete",
"metadata": {
"primaryKeys": "pk1",
"columnToGet": "name,age,date"
},
"data": {
"pk1": "data1"
}
}
```
{{% alert title="Note" color="primary" %}}
Note the `metadata.primaryKeys` field is mandatory.
{{% /alert %}}
### List objects
To perform a list objects operation, invoke the binding with a `POST` method and the following JSON body:
```json
{
"operation": "delete",
"metadata": {
"primaryKeys": "pk1",
"columnToGet": "name,age,date"
},
"data": {
"pk1": "data1",
"pk2": "data2"
}
}
```
{{% alert title="Note" color="primary" %}}
Note the `metadata.primaryKeys` field is mandatory.
{{% /alert %}}
### Get object
To perform a get object operation, invoke the binding with a `POST` method and the following JSON body:
```json
{
"operation": "delete",
"metadata": {
"primaryKeys": "pk1"
},
"data": {
"pk1": "data1"
}
}
```
{{% alert title="Note" color="primary" %}}
Note the `metadata.primaryKeys` field is mandatory.
{{% /alert %}}
## Related links
- [Bindings building block]({{< ref bindings >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -70,6 +70,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Binding support
This component supports **output binding** with the following operations:
- `create`

View File

@ -22,7 +22,7 @@ spec:
version: v1
metadata:
- name: endpoint
value: http://localhost:8080/v1/graphql
- name: header:x-hasura-access-key
value: adminkey
- name: header:Cache-Control

View File

@ -52,6 +52,7 @@ spec:
| authRequired | Y | Input/Output | Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"` |
| saslUsername | N | Input/Output | The SASL username used for authentication. Only required if `authRequired` is set to `"true"`. | `"adminuser"` |
| saslPassword | N | Input/Output | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authRequired` is set to `"true"`. | `""`, `"KeFg23!"` |
| initialOffset | N | Input/Output | The initial offset to use if no offset was previously committed. Should be "newest" or "oldest". Defaults to "newest". | `"oldest"` |
| maxMessageBytes | N | Input/Output | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048` |
## Binding support

View File

@ -33,6 +33,10 @@ spec:
value: *****************
- name: sessionToken
value: mysession
- name: decodeBase64
value: <bool>
- name: encodeBase64
value: <bool>
```
{{% alert title="Warning" color="warning" %}}
@ -48,13 +52,18 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| accessKey | Y | Output | The AWS Access Key to access this resource | `"key"` |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| sessionToken | N | Output | The AWS session token to use | `"sessionToken"` |
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to bucket storage (in case of saving a file with binary content). `true` is the only allowed positive value; other positive variations like `"True"` or `"1"` are not accepted. Defaults to `false` | `true`, `false` |
| encodeBase64 | N | Output | Configuration to encode base64 file content before returning the content (in case of opening a file with binary content). `true` is the only allowed positive value; other positive variations like `"True"` or `"1"` are not accepted. Defaults to `false` | `true`, `false` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
- `create` : [Create file](#create-file)
- `get` : [Get object](#get-object)
- `delete` : [Delete object](#delete-object)
- `list` : [List objects](#list-objects)
### Create file
@ -70,8 +79,6 @@ To perform a create operation, invoke the AWS S3 binding with a `POST` method an
```
#### Examples
##### Save text to a random generated UUID file
{{< tabs Windows Linux >}}
@ -111,10 +118,33 @@ To perform a create operation, invoke the AWS S3 binding with a `POST` method an
{{< /tabs >}}
##### Upload a file
To upload a file, encode it as Base64 and let the Binding know to deserialize it:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: bindings.aws.s3
version: v1
metadata:
- name: bucket
value: mybucket
- name: region
value: us-west-2
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: sessionToken
value: mysession
- name: decodeBase64
value: <bool>
```
Then you can upload it as you would normally:
@ -122,19 +152,172 @@ Then you can upload it as you would normally:
{{% codetab %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg)", "metadata": { "key": "my-test-file.jpg" } }' \
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "key": "my-test-file.jpg" } }' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
The response body will contain the following JSON:
```json
{
"location":"https://<your bucket>.s3.<your region>.amazonaws.com/<key>",
"versionID":"<version ID if Bucket Versioning is enabled"
}
```
### Get object
To perform a get file operation, invoke the AWS S3 binding with a `POST` method and the following JSON body:
```json
{
"operation": "get",
"metadata": {
"key": "my-test-file.txt"
}
}
```
The metadata parameters are:
- `key` - the name of the object
#### Example
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
The response body contains the value stored in the object.
### Delete object
To perform a delete object operation, invoke the AWS S3 binding with a `POST` method and the following JSON body:
```json
{
"operation": "delete",
"metadata": {
"key": "my-test-file.txt"
}
}
```
The metadata parameters are:
- `key` - the name of the object
#### Examples
##### Delete object
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
An HTTP 204 (No Content) and empty body will be returned if successful.
### List objects
To perform a list object operation, invoke the S3 binding with a `POST` method and the following JSON body:
```json
{
"operation": "list",
"data": {
"maxResults": 10,
"prefix": "file",
"marker": "hvlcCQFSOD5TD",
"delimiter": "i0FvxAn2EOEL6"
}
}
```
The data parameters are:
- `maxResults` - (optional) sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.
- `prefix` - (optional) limits the response to keys that begin with the specified prefix.
- `marker` - (optional) marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket.
The marker value may then be used in a subsequent call to request the next set of list items.
- `delimiter` - (optional) A delimiter is a character you use to group keys.
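#### Example

A list request is issued the same way as the other operations, for example (following the placeholder conventions used above):

```bash
# List up to 10 objects whose keys start with "file"
curl -d '{ "operation": "list", "data": { "maxResults": 10, "prefix": "file" } }' \
     http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```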
#### Response
The response body contains the list of found objects, returned as a JSON array in the following form:
```json
{
"CommonPrefixes": null,
"Contents": [
{
"ETag": "\"7e94cc9b0f5226557b05a7c2565dd09f\"",
"Key": "hpNdFUxruNuwm",
"LastModified": "2021-08-16T06:44:14Z",
"Owner": {
"DisplayName": "owner name",
"ID": "owner id"
},
"Size": 6916,
"StorageClass": "STANDARD"
}
],
"Delimiter": "",
"EncodingType": null,
"IsTruncated": true,
"Marker": "hvlcCQFSOD5TD",
"MaxKeys": 1,
"Name": "mybucketdapr",
"NextMarker": "hzaUPWjmvyi9W",
"Prefix": ""
}
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@ -0,0 +1,105 @@
---
type: docs
title: "AWS SES binding spec"
linkTitle: "AWS SES"
description: "Detailed documentation on the AWS SES binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/ses/"
---
## Component format
To setup an AWS SES binding, create a component of type `bindings.aws.ses`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ses
namespace: default
spec:
type: bindings.aws.ses
version: v1
metadata:
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: region
value: "eu-west-1"
- name: sessionToken
value: mysession
- name: emailFrom
value: "sender@example.com"
- name: emailTo
value: "receiver@example.com"
- name: emailCc
value: "cc@example.com"
- name: emailBcc
value: "bcc@example.com"
- name: subject
value: "subject"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| region | Y | Output | The specific AWS region | `"eu-west-1"` |
| accessKey | Y | Output | The AWS Access Key to access this resource | `"key"` |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| sessionToken | N | Output | The AWS session token to use | `"sessionToken"` |
| emailFrom | N | Output | If set, this specifies the email address of the sender. See [also](#example-request) | `"me@example.com"` |
| emailTo | N | Output | If set, this specifies the email address of the receiver. See [also](#example-request) | `"me@example.com"` |
| emailCc | N | Output | If set, this specifies the email address to CC in. See [also](#example-request) | `"me@example.com"` |
| emailBcc | N | Output | If set, this specifies email address to BCC in. See [also](#example-request) | `"me@example.com"` |
| subject | N | Output | If set, this specifies the subject of the email message. See [also](#example-request) | `"subject of mail"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
## Example request
You can specify any of the following optional metadata properties with each request:
- `emailFrom`
- `emailTo`
- `emailCc`
- `emailBcc`
- `subject`
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the `emailFrom`, `emailTo`, `emailCc`, `emailBcc` and `subject` fields.
The `emailTo`, `emailCc` and `emailBcc` fields can contain multiple email addresses separated by a semicolon.
Example:
```json
{
"operation": "create",
"metadata": {
"emailTo": "dapr-smtp-binding@example.net",
"emailCc": "cc1@example.net",
"subject": "Email subject"
},
"data": "Testing Dapr SMTP Binding"
}
```
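As a sketch, this request could be sent to the binding (assumed here to be named `ses`, as in the component above) through the Dapr HTTP endpoint on the default port `3500`:

```bash
# Send an email through the SES output binding's create operation
curl -X POST http://localhost:3500/v1.0/bindings/ses \
  -H "Content-Type: application/json" \
  -d '{ "operation": "create", "metadata": { "emailTo": "dapr-smtp-binding@example.net", "subject": "Email subject" }, "data": "Testing Dapr SES Binding" }'
```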
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -42,6 +42,8 @@ spec:
value: "bcc@example.com"
- name: subject
value: "subject"
- name: priority
value: "[value 1-5]"
```
{{% alert title="Warning" color="warning" %}}
@ -62,6 +64,7 @@ The example configuration shown above contains a username and password as plain-
| emailCc | N | Output | If set, this specifies the email address to CC in. See [also](#example-request) | `"me@example.com"` |
| emailBcc | N | Output | If set, this specifies email address to BCC in. See [also](#example-request) | `"me@example.com"` |
| subject | N | Output | If set, this specifies the subject of the email message. See [also](#example-request) | `"subject of mail"` |
| priority | N | Output | If set, this specifies the priority (X-Priority) of the email message, from 1 (lowest) to 5 (highest) (default value: 3). See [also](#example-request) | `"1"` |
## Binding support
@ -78,6 +81,7 @@ You can specify any of the following optional metadata properties with each requ
- `emailCC`
- `emailBCC`
- `subject`
- `priority`
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the `emailFrom`, `emailTo` and `subject` fields.
@ -90,7 +94,8 @@ Example:
"metadata": {
"emailTo": "dapr-smtp-binding@example.net",
"emailCC": "cc1@example.net; cc2@example.net",
"subject": "Email subject"
"subject": "Email subject",
"priority: "1"
},
"data": "Testing Dapr SMTP Binding"
}

View File

@ -35,10 +35,10 @@ spec:
| Field | Required | Binding support | Details | Example |
|-------------------------|:--------:|------------|-----|---------|
| gatewayAddr | Y | Output | Zeebe gateway address | `localhost:26500` |
| gatewayKeepAlive | N | Output | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | `45s` |
| usePlainTextConnection | N | Output | Whether to use a plain text connection or not | `true,false` |
| caCertificatePath | N | Output | The path to the CA cert | `/path/to/ca-cert` |
## Binding support
@ -59,7 +59,7 @@ This component supports **output binding** with the following operations:
### Output binding
Zeebe uses gRPC under the hood for the Zeebe client used in this binding. Please consult the [gRPC API reference](https://stage.docs.zeebe.io/reference/grpc.html) for more information.
#### topology
@ -161,7 +161,7 @@ The response values are:
- `key` - the unique key identifying the deployment
- `processes` - a list of deployed processes
- `bpmnProcessId` - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific
process definition
- `version` - the assigned process version
- `processDefinitionKey` - the assigned key, which acts as a unique identifier for this process
@ -169,7 +169,7 @@ The response values are:
#### create-instance
The `create-instance` operation creates and starts an instance of the specified process. The process definition to use to create the instance can be
specified either using its unique key (as returned by the `deploy-process` operation), or using the BPMN process ID and a version.
Note that only processes with none start events can be started through this command.
@ -296,7 +296,7 @@ To perform a `set-variables` operation, invoke the Zeebe command binding with a
The data parameters are:
- `elementInstanceKey` - the unique identifier of a particular element; can be the process instance key (as
obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
- `local` - (optional, default: `false`) if true, the variables will be merged strictly into the local scope (as indicated by
elementInstanceKey); this means the variables are not propagated to upper scopes.
@ -369,7 +369,7 @@ The data parameters are:
- `messageName` - the name of the message
- `correlationKey` - (optional) the correlation key of the message
- `timeToLive` - (optional) how long the message should be buffered on the broker
- `messageId` - (optional) the unique ID of the message; can be omitted; only useful to ensure only one message with the given ID will ever
be published (during its lifetime)
- `variables` - (optional) the message variables as a JSON document; to be valid, the root of the document must be an object, e.g. { "a": "foo" }.
[ "foo" ] would not be valid
@ -390,7 +390,7 @@ The response values are:
#### activate-jobs
The `activate-jobs` operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to
the client as they are activated.
To perform an `activate-jobs` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
@ -419,7 +419,7 @@ The data parameters are:
- `maxJobsToActivate` - the maximum jobs to activate by this request
- `timeout` - (optional, default: 5 minutes) a job returned after this call will not be activated by another call until the timeout has been reached
- `workerName` - (optional, default: `default`) the name of the worker activating the jobs, mostly used for logging purposes
- `fetchVariables` - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the
scope of the job will be returned
##### Response
@ -429,7 +429,7 @@ The binding returns a JSON with the following response:
```json
[
{
}
]
```
@ -482,8 +482,8 @@ The binding does not return a response body.
#### fail-job
The `fail-job` operation marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the
job will not be activatable until the incident is resolved.
To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
@ -504,7 +504,7 @@ The data parameters are:
- `jobKey` - the unique job identifier, as obtained when activating the job
- `retries` - the amount of retries the job should have left
- `errorMessage` - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an
incident is raised, as this message can help explain why the incident was raised
##### Response
@ -513,7 +513,7 @@ The binding does not return a response body.
#### update-job-retries
The `update-job-retries` operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the
underlying problem be solved.
To perform an `update-job-retries` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
@ -540,7 +540,7 @@ The binding does not return a response body.
#### throw-error
The `throw-error` operation throws an error to indicate that a business error occurred while processing the job. The error is identified
by an error code and is handled by an error catch event in the process with the same error code.
To perform a `throw-error` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:

View File

@ -53,19 +53,19 @@ spec:
| Field | Required | Binding support | Details | Example |
|-------------------------|:--------:|------------|-----|---------|
| gatewayAddr | Y | Input | Zeebe gateway address | `localhost:26500` |
| gatewayKeepAlive | N | Input | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | `45s` |
| usePlainTextConnection | N | Input | Whether to use a plain text connection or not | `true,false` |
| caCertificatePath | N | Input | The path to the CA cert | `/path/to/ca-cert` |
| workerName | N | Input | The name of the worker activating the jobs, mostly used for logging purposes | `products-worker` |
| workerTimeout | N | Input | A job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes | `5m` |
| requestTimeout | N | Input | The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds | `30s` |
| jobType | Y | Input | the job type, as defined in the BPMN process (e.g. `<zeebe:taskDefinition type="fetch-products" />`) | `fetch-products` |
| maxJobsActive | N | Input | Set the maximum number of jobs which will be activated for this worker at the same time. Defaults to 32 | `32` |
| concurrency | N | Input | The maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4 | `4` |
| pollInterval | N | Input | Set the maximal interval between polling for new jobs. Defaults to 100 milliseconds | `100ms` |
| pollThreshold | N | Input | Set the threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3 | `0.3` |
| fetchVariables | N | Input | A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned | `productId, productName, productKey` |
## Binding support
@ -75,10 +75,10 @@ This component supports **input** binding interfaces.
#### Variables
The Zeebe process engine handles the process state and process variables, which can be passed
on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
the `fetchVariables` metadata field. The process engine will then pass these variables with their current
values to the job worker implementation.
If the binding registers the three variables `productId`, `productName` and `productKey`, then the worker will
@ -86,9 +86,9 @@ be called with the following JSON body:
```json
{
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
```

View File

@ -61,7 +61,7 @@ spec:
my_claim := jwt.payload["my-claim"]
}
jwt = { "payload": payload } {
auth_header := input.request.headers["authorization"]
auth_header := input.request.headers["Authorization"]
[_, jwt] := split(auth_header, " ")
[_, payload, _] := io.jwt.decode(jwt)
}
@ -75,7 +75,7 @@ You can prototype and experiment with policies using the [official opa playgroun
|--------|---------|---------|
| rego | The Rego policy language | See above |
| defaultStatus | The status code to return for denied responses | `"403"`
| includedHeaders | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
## Dapr configuration

View File

@ -97,7 +97,7 @@ spec:
meta:
DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
DAPR_PROFILE_PORT: "${DAPR_PROFILE_PORT}"
daprPortMetaKey: "DAPR_PORT"
queryOptions:
useCache: true
filter: "Checks.ServiceTags contains dapr"

View File

@ -26,6 +26,8 @@ Table captions:
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | Alpha | v1 | 1.0 |
| [MQTT]({{< ref setup-mqtt.md >}}) | Alpha | v1 | 1.0 |
| [NATS Streaming]({{< ref setup-nats-streaming.md >}}) | Beta | v1 | 1.0 |
| [In Memory]({{< ref setup-inmemory.md >}}) | Alpha | v1 | 1.4 |
| [JetStream]({{< ref setup-jetstream.md >}}) | Alpha | v1 | 1.4 |
| [Pulsar]({{< ref setup-pulsar.md >}}) | Alpha | v1 | 1.0 |
| [RabbitMQ]({{< ref setup-rabbitmq.md >}}) | Alpha | v1 | 1.0 |
| [Redis Streams]({{< ref setup-redis-pubsub.md >}}) | GA | v1 | 1.0 |
@ -46,5 +48,5 @@ Table captions:
| Name | Status | Component version | Since |
|-----------------------------------------------------------|--------| ----------------| -- |
| [Azure Event Hubs]({{< ref setup-azure-eventhubs.md >}}) | Alpha | v1 | 1.0 |
| [Azure Service Bus]({{< ref setup-azure-servicebus.md >}})| GA | v1 | 1.0 |

View File

@ -49,6 +49,7 @@ spec:
| authRequired | Y | Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"`
| saslUsername | N | The SASL username used for authentication. Only required if `authRequired` is set to `"true"`. | `"adminuser"`
| saslPassword | N | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authRequired` is set to `"true"`. | `""`, `"KeFg23!"`
| initialOffset | N | The initial offset to use if no offset was previously committed. Should be "newest" or "oldest". Defaults to "newest". | `"oldest"`
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048`
## Per-call metadata fields

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Azure Events Hub"
linkTitle: "Azure Events Hub"
title: "Azure Event Hubs"
linkTitle: "Azure Event Hubs"
description: "Detailed documentation on the Azure Event Hubs pubsub component"
aliases:
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-eventhubs/"

View File

@ -46,6 +46,8 @@ spec:
value: <PRIVATE_KEY> # replace x509 cert
- name: disableEntityManagement
value: "false"
- name: enableMessageOrdering
value: "false"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -67,6 +69,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| authProviderX509CertUrl | N | If using explicit credentials, this field should contain the `auth_provider_x509_cert_url` field from the service account json | `https://www.googleapis.com/oauth2/v1/certs`
| clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com`
| disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| enableMessageOrdering | N | When set to `"true"`, subscribed messages will be received in order, depending on publishing and permissions configuration. | `"true"`, `"false"`
{{% alert title="Warning" color="warning" %}}
If `enableMessageOrdering` is set to `"true"`, the `roles/viewer` or `roles/pubsub.viewer` role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to `Subscription.Config()` fails for any other reason, ordering by embedded order tokens will still function correctly.
{{% /alert %}}
## Create a GCP Pub/Sub
You can use either "explicit" or "implicit" credentials to configure access to your GCP pubsub instance. If using explicit, most fields are required. Implicit relies on Dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) which has the necessary permissions to access pubsub. In implicit mode, only the `projectId` attribute is needed; all others are optional.

View File

@ -0,0 +1,28 @@
---
type: docs
title: "In Memory"
linkTitle: "In Memory"
description: "Detailed documentation on the In Memory pubsub component"
aliases:
- "/operations/components/setup-pubsub/supported-pubsub/setup-inmemory/"
---
The In Memory pub/sub component is useful for development purposes and works within a single machine boundary.
## Component format
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.in-memory
version: v1
```
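As a quick smoke test (a sketch assuming the default Dapr HTTP port `3500` and a hypothetical topic named `mytopic`), you can publish through the component with:

```bash
# Publish an event to the in-memory component defined above (named "pubsub")
curl -X POST http://localhost:3500/v1.0/publish/pubsub/mytopic \
  -H "Content-Type: application/json" \
  -d '{"data": "ping"}'
```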
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
- [Pub/Sub building block]({{< ref pubsub >}})

View File

@ -0,0 +1,97 @@
---
type: docs
title: "JetStream"
linkTitle: "JetStream"
description: "Detailed documentation on the NATS JetStream component"
aliases:
- "/operations/components/setup-pubsub/supported-pubsub/setup-jetstream/"
---
## Component format
To setup JetStream pubsub, create a component of type `pubsub.jetstream`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: jetstream-pubsub
namespace: default
spec:
type: pubsub.jetstream
version: v1
metadata:
- name: natsURL
value: "nats://localhost:4222"
- name: name
value: "connection name"
- name: durableName
value: "consumer durable name"
- name: queueGroupName
value: "queue group name"
- name: startSequence
value: 1
- name: startTime # in Unix format
value: 1630349391
- name: deliverAll
value: false
- name: flowControl
value: false
```
## Spec metadata fields
| Field | Required | Details | Example |
|----------------|:--------:|---------|---------|
| natsURL | Y | NATS server address URL | `"nats://localhost:4222"` |
| name | N | NATS connection name | `"my-conn-name"`|
| durableName | N | [Durable name] | `"my-durable"` |
| queueGroupName | N | Queue group name | `"my-queue"` |
| startSequence | N | [Start Sequence] | `1` |
| startTime | N | [Start Time] in Unix format | `1630349391` |
| deliverAll | N | Set deliver all as [Replay Policy] | `true` |
| flowControl | N | [Flow Control] | `true` |
## Create a NATS server
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
You can run a NATS Server with JetStream enabled locally using Docker:
```bash
docker run -d -p 4222:4222 nats:latest -js
```
You can then interact with the server using the client port: `localhost:4222`.
{{% /codetab %}}
{{% codetab %}}
Install NATS JetStream on Kubernetes by using the [helm chart](https://github.com/nats-io/k8s/tree/main/helm/charts/nats#jetstream):
```bash
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
```
This installs a single NATS server into the `default` namespace. To interact
with NATS, find the service with: `kubectl get svc my-nats`.
{{% /codetab %}}
{{< /tabs >}}
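Once the server is reachable and the component is applied, messages flow through the standard Dapr pub/sub API. For example, with the Dapr CLI (a sketch; the app ID and topic are placeholder values):

```bash
# Publish a test message through the JetStream component defined above
dapr publish --publish-app-id myapp --pubsub jetstream-pubsub --topic orders --data '{"orderId": "100"}'
```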
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
- [Pub/Sub building block]({{< ref pubsub >}})
- [JetStream Documentation](https://docs.nats.io/jetstream/jetstream)
- [NATS CLI](https://github.com/nats-io/natscli)
[Durable Name]: https://docs.nats.io/jetstream/concepts/consumers#durable-name
[Start Sequence]: https://docs.nats.io/jetstream/concepts/consumers#deliverbystartsequence
[Start Time]: https://docs.nats.io/jetstream/concepts/consumers#deliverbystarttime
[Replay Policy]: https://docs.nats.io/jetstream/concepts/consumers#replaypolicy
[Flow Control]: https://docs.nats.io/jetstream/concepts/consumers#flowcontrol

View File

@ -54,6 +54,11 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
{{% alert title="Warning" color="warning" %}}
NATS Streaming has been [deprecated](https://github.com/nats-io/nats-streaming-server/#warning--deprecation-notice-warning).
Please consider using [NATS JetStream]({{< ref setup-jetstream >}}) going forward.
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
@ -111,3 +116,4 @@ For example, if installing using the example above, the NATS Streaming address w
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
- [Pub/Sub building block]({{< ref pubsub >}})
- [NATS Streaming Deprecation Notice](https://github.com/nats-io/nats-streaming-server/#warning--deprecation-notice-warning)

View File

@ -45,5 +45,4 @@ Table captions:
| Name | Status | Component version | Since |
|---------------------------------------------------------------------------------------|--------| ---- |--------------|
| [Azure Key Vault w/ Managed Identity]({{< ref azure-keyvault-managed-identity.md >}}) | Alpha | v1 | 1.0 |
| [Azure Key Vault]({{< ref azure-keyvault.md >}}) | GA | v1 | 1.0 |

View File

@ -1,167 +0,0 @@
---
type: docs
title: "Azure Key Vault with Managed Identities on Kubernetes"
linkTitle: "Azure Key Vault w/ Managed Identity"
description: How to configure Azure Key Vault and Kubernetes to use Azure Managed Identities to access secrets
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault-managed-identity/"
---
## Component format
To setup an Azure Key Vault secret store with Managed Identities, create a component of type `secretstores.azure.keyvault`. See [this guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secretstore configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
In Kubernetes mode, you store the certificate for the service principal in the Kubernetes Secret Store and then enable the Azure Key Vault secret store with this certificate.
The component yaml uses the name of your key vault and the Client ID of the managed identity to setup the secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: [your_keyvault_name]
- name: spnClientId
value: [your_managed_identity_client_id]
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage.
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|-------------------------------------------------------------------------|---------------------|
| vaultName | Y | The name of the Azure Key Vault | `"mykeyvault"` |
| spnClientId | Y | Your managed identity client Id | `"yourId"` |
## Setup Managed Identity and Azure Key Vault
### Prerequisites
- [Azure Subscription](https://azure.microsoft.com/en-us/free/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
### Steps
1. Log in to Azure and set the default subscription
```bash
# Log in Azure
az login
# Set your subscription to the default subscription
az account set -s [your subscription id]
```
2. Create an Azure Key Vault in a region
```bash
az keyvault create --location [region] --name [your keyvault] --resource-group [your resource group]
```
3. Create the managed identity (optional)
This step is required only if the AKS cluster is provisioned without the flag "--enable-managed-identity". If the cluster is provisioned with managed identity, then it is suggested to use the autogenerated managed identity that is associated with the Resource Group MC_*.
```bash
$identity = az identity create -g [your resource group] -n [your managed identity name] -o json | ConvertFrom-Json
```
Below is the command to retrieve the managed identity in the autogenerated scenario:
```bash
az aks show -g <AKSResourceGroup> -n <AKSClusterName>
```
For more detail about the roles to assign to integrate AKS with Azure Services [Role Assignment](https://azure.github.io/aad-pod-identity/docs/getting-started/role-assignment/).
4. Retrieve Managed Identity ID
The two main scenarios are:
- Service Principal: in this case, the Resource Group is the one in which the AKS Service Cluster is deployed
```bash
$clientId= az aks show -g <AKSResourceGroup> -n <AKSClusterName> --query servicePrincipalProfile.clientId -otsv
```
- Managed Identity: in this case, the Resource Group is the one in which the AKS Service Cluster is deployed
```bash
$clientId= az aks show -g <AKSResourceGroup> -n <AKSClusterName> --query identityProfile.kubeletidentity.clientId -otsv
```
5. Assign the Reader role to the managed identity
For an AKS cluster, the cluster resource group refers to the resource group with a MC_ prefix, which contains all of the infrastructure resources associated with the cluster, like VMs/VMSS.
```bash
az role assignment create --role "Reader" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
```
6. Assign the Managed Identity Operator role to the AKS Service Principal
Refer to the previous step about the Resource Group to use and which identity to assign.
```bash
az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
az role assignment create --role "Virtual Machine Contributor" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
```
7. Add a policy to the Key Vault so the managed identity can read secrets
```bash
az keyvault set-policy --name [your keyvault] --spn $clientId --secret-permissions get list
```
8. Enable AAD Pod Identity on AKS
```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml
# For AKS clusters, deploy the MIC and AKS add-on exception by running -
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml
```
9. Configure the Azure Identity and AzureIdentityBinding yaml
Save the following yaml as azure-identity-config.yaml:
```yaml
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: [your managed identity name]
spec:
type: 0
resourceID: [your managed identity id]
clientID: [your managed identity Client ID]
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: [your managed identity name]-identity-binding
spec:
azureIdentity: [your managed identity name]
selector: [your managed identity selector]
```
10. Deploy the azure-identity-config.yaml:
```bash
kubectl apply -f azure-identity-config.yaml
```
## References
- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity)
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})

View File

@ -7,10 +7,6 @@ aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/azure-keyvault/"
---
{{% alert title="Note" color="primary" %}}
Azure Managed Identity can be used for Azure Key Vault access on Kubernetes. Instructions [here]({{< ref azure-keyvault-managed-identity.md >}}).
{{% /alert %}}
## Component format
To setup Azure Key Vault secret store create a component of type `secretstores.azure.keyvault`. See [this guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secretstore configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
@ -37,158 +33,91 @@ spec:
- name: spnCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage.
{{% /alert %}}
## Spec metadata fields
## Authenticating with Azure AD
### Self-Hosted
The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component, make sure you've read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document and created an Azure AD application (also called Service Principal). Alternatively, make sure you have created a managed identity for your application platform.
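For reference, a minimal sketch of creating such a Service Principal with a client secret using the Azure CLI might look like the following (the application name is a placeholder; the full flow, including certificate credentials, is covered in the guide linked above):
```bash
# Sketch: create an Azure AD application and Service Principal with a client secret.
# "[your_app_name]" is a placeholder.
az ad sp create-for-rbac --name "[your_app_name]" --skip-assignment
# The appId, password, and tenant values in the output map to the component's
# azureClientId, azureClientSecret, and azureTenantId metadata fields.
```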
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| vaultName | Y | The name of the Azure Key Vault. If you only provide a name, it will convert to `[your_keyvault_name].vault.azure.net` in Dapr. If your URL uses another suffix, please provide the entire URI, such as `test.vault.azure.cn`. | `"mykeyvault"`, `"mykeyvault.vault.azure.cn"`
| spnTenantId | Y | Service Principal Tenant Id | `"spnTenantId"`
| spnClientId | Y | Service Principal App Id | `"spnAppId"`
| spnCertificateFile | Y | PFX certificate file path. <br></br> For Windows the `[pfx_certificate_file_fully_qualified_local_path]` value must use escaped backslashes, i.e. double backslashes. For example `"C:\\folder1\\folder2\\certfile.pfx"`. <br></br> For Linux you can use single slashes. For example `"/folder1/folder2/certfile.pfx"`. <br></br> See [configure the component](#configure-the-component) for more details | `"C:\\folder1\\folder2\\certfile.pfx"`, `"/folder1/folder2/certfile.pfx"`
| `vaultName` | Y | The name of the Azure Key Vault | `"mykeyvault"` |
| `azureEnvironment` | N | Optional name for the Azure environment if using a different Azure cloud | `"AZUREPUBLICCLOUD"` (default value), `"AZURECHINACLOUD"`, `"AZUREUSGOVERNMENTCLOUD"`, `"AZUREGERMANCLOUD"` |
Additionally, you must provide the authentication fields as explained in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
### Kubernetes
| Field | Required | Details | Example |
|----------------|:--------:|---------|---------|
| vaultName | Y | The name of the Azure Key Vault | `"mykeyvault"`
| spnTenantId | Y | Service Principal Tenant Id | `"spnTenantId"`
| spnClientId | Y | Service Principal App Id | `"spnAppId"`
| spnCertificate | Y | PKCS 12 encoded bytes of the certificate. See [configure the component](#configure-the-component) for details on encoding this in a Kubernetes secret. | `secretKeyRef: ...` <br /> See [configure the component](#configure-the-component) for more information.
## Setup Key Vault and service principal
## Create the Azure Key Vault and authorize the Service Principal
### Prerequisites
- [Azure Subscription](https://azure.microsoft.com/en-us/free/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
- [Azure Subscription](https://azure.microsoft.com/free/)
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- The scripts below are optimized for a bash or zsh shell
Make sure you have followed the steps in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document to create an Azure AD application (also called Service Principal). You will need the following values:
- `SERVICE_PRINCIPAL_ID`: the ID of the Service Principal that you created for a given application
### Steps
1. Login to Azure and set the default subscription
```bash
# Log in Azure
az login
# Set your subscription to the default subscription
az account set -s [your subscription id]
```
1. Set a variable with the Service Principal that you created:
```sh
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
```
2. Create an Azure Key Vault in a region
```bash
az keyvault create --location [region] --name [your_keyvault] --resource-group [your resource group]
```
2. Set a variable with the location in which to create all resources:
```sh
LOCATION="[your_location]"
```
(You can get the full list of options with: `az account list-locations --output tsv`)
3. Create a service principal
3. Create a Resource Group, giving it any name you'd like:
Create a service principal with a new certificate and store the 1-year certificate inside your key vault's certificate vault. You can skip this step if you want to use an existing service principal for the key vault instead of creating a new one.
```sh
RG_NAME="[resource_group_name]"
RG_ID=$(az group create \
--name "${RG_NAME}" \
--location "${LOCATION}" \
| jq -r .id)
```
```bash
az ad sp create-for-rbac --name [your_service_principal_name] --create-cert --cert [certificate_name] --keyvault [your_keyvault] --skip-assignment --years 1

{
  "appId": "a4f90000-0000-0000-0000-00000011d000",
  "displayName": "[your_service_principal_name]",
  "name": "http://[your_service_principal_name]",
  "password": null,
  "tenant": "34f90000-0000-0000-0000-00000011d000"
}
```
4. Create an Azure Key Vault (that uses Azure RBAC for authorization):
```sh
KEYVAULT_NAME="[key_vault_name]"
az keyvault create \
--name "${KEYVAULT_NAME}" \
--enable-rbac-authorization true \
--resource-group "${RG_NAME}" \
--location "${LOCATION}"
```
**Save both the appId and tenant values from the output; they will be used in the next step.**
5. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
In this case, assign the "Key Vault Crypto Officer" role, which has broad access; other more restrictive roles can be used as well, depending on your application.
4. Get the Object Id for [your_service_principal_name]
```bash
az ad sp show --id [service_principal_app_id]
{
...
"objectId": "[your_service_principal_object_id]",
"objectType": "ServicePrincipal",
...
}
```
5. Grant the service principal the GET permission to your Azure Key Vault
```bash
az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get
```
Now that your service principal has access to your keyvault, you are ready to configure the secret store component to use secrets stored in your keyvault to access other components securely.
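For example, you can verify the setup by adding a test secret with the Azure CLI (the secret name and value here are only illustrative):
```bash
# Add an illustrative secret to the vault; "mysecret" is a placeholder name
az keyvault secret set --name mysecret --vault-name [your_keyvault] --value "super-secret-value"
```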
6. Download the certificate in PFX format from your Azure Key Vault either using the Azure portal or the Azure CLI:
- **Using the Azure portal:**
Go to your key vault on the Azure portal and navigate to the *Certificates* tab under *Settings*. Find the certificate that was created during the service principal creation, named [certificate_name] and click on it.
Click *Download in PFX/PEM format* to download the certificate.
- **Using the Azure CLI:**
```bash
az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --encoding base64 --file [certificate_name].pfx
```
```sh
az role assignment create \
--assignee "${SERVICE_PRINCIPAL_ID}" \
--role "Key Vault Crypto Officer" \
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
```
## Configure the component
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
1. Copy the downloaded PFX certificate from your Azure Key Vault into your components directory or a secure location on your local disk
2. Create a file called `azurekeyvault.yaml` in the components directory
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: [your_keyvault_name]
- name: spnTenantId
value: "[your_service_principal_tenant_id]"
- name: spnClientId
value: "[your_service_principal_app_id]"
- name: spnCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
```
Fill in the metadata fields with your Key Vault details from the above setup process.
{{% /codetab %}}
{{% codetab %}}
In Kubernetes, you store the certificate for the service principal in the Kubernetes secret store and then enable the Azure Key Vault secret store with this certificate.
1. Create a kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_spn_secret_name] --from-file=[your_k8s_spn_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
```
- `[pfx_certificate_file_fully_qualified_local_path]` is the path of the PFX certificate file you downloaded above
- `[your_k8s_spn_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_spn_secret_key]` is the secret key in the Kubernetes secret store
2. Create a `azurekeyvault.yaml` component file
The component YAML refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
```yaml
apiVersion: dapr.io/v1alpha1
@ -201,32 +130,142 @@ spec:
version: v1
metadata:
- name: vaultName
value: [your_keyvault_name]
- name: spnTenantId
value: "[your_service_principal_tenant_id]"
- name: spnClientId
value: "[your_service_principal_app_id]"
- name: spnCertificate
secretKeyRef:
name: [your_k8s_spn_secret_name]
key: [your_k8s_spn_secret_key]
auth:
secretStore: kubernetes
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value : "[your_client_secret]"
```
3. Apply `azurekeyvault.yaml` component
```bash
kubectl apply -f azurekeyvault.yaml
```
If you want to use a **certificate** saved on the local disk instead, use this template, filling in with details of the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificateFile
value : "[pfx_certificate_file_fully_qualified_local_path]"
```
{{% /codetab %}}
{{% codetab %}}
In Kubernetes, you store the client secret or the certificate in the Kubernetes secret store and then refer to them in the YAML file. You will need the details of the Azure AD application that was created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
To use a **client secret**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-literal=[your_k8s_secret_key]=[your_client_secret]
```
- `[your_client_secret]` is the application's client secret as generated above
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
2. Create an `azurekeyvault.yaml` component file.
The component YAML refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
secretKeyRef:
name: "[your_k8s_secret_name]"
key: "[your_k8s_secret_key]"
auth:
secretStore: kubernetes
```
3. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
To use a **certificate**:
1. Create a Kubernetes secret using the following command:
```bash
kubectl create secret generic [your_k8s_secret_name] --from-file=[your_k8s_secret_key]=[pfx_certificate_file_fully_qualified_local_path]
```
- `[pfx_certificate_file_fully_qualified_local_path]` is the path of the PFX file you obtained earlier
- `[your_k8s_secret_name]` is the secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is the secret key in the Kubernetes secret store
2. Create an `azurekeyvault.yaml` component file.
The component YAML refers to the Kubernetes secret store using the `auth` property, and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
version: v1
metadata:
- name: vaultName
value: "[your_keyvault_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureCertificate
secretKeyRef:
name: "[your_k8s_secret_name]"
key: "[your_k8s_secret_key]"
auth:
secretStore: kubernetes
```
3. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
{{% /codetab %}}
{{< /tabs >}}
## References
- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
- [Authenticating to Azure]({{< ref authenticating-azure.md >}})
- [Azure CLI: keyvault commands](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})

View File

@ -31,6 +31,8 @@ spec:
value: [path to the JSON file]
- name: nestedSeparator
value: ":"
- name: multiValued
value: "false"
```
## Spec metadata fields
@ -38,11 +40,12 @@ spec:
| Field | Required | Details | Example |
|--------------------|:--------:|-------------------------------------------------------------------------|--------------------------|
| secretsFile | Y | The path to the file where secrets are stored | `"path/to/file.json"` |
| nestedSeparator | N | Used by the store when flattening the JSON hierarchy to a map. Defaults to `":"` | `":"` |
| nestedSeparator | N | Used by the store when flattening the JSON hierarchy to a map. Defaults to `":"` | `":"`
| multiValued | N | Allows one level of multi-valued key/value pairs before flattening JSON hierarchy. Defaults to `"false"` | `"true"` |
## Setup JSON file to hold the secrets
Given the following json:
Given the following JSON loaded from `secretsFile`:
```json
{
@ -54,7 +57,7 @@ Given the following json:
}
```
The store will load the file and create a map with the following key value pairs:
If `multiValued` is `"false"`, the store will load the file and create a map with the following key value pairs:
| flattened key | value |
| --- | --- |
@ -62,7 +65,49 @@ The store will load the file and create a map with the following key value pairs
|"connectionStrings:sql" | "your sql connection string" |
|"connectionStrings:mysql"| "your mysql connection string" |
Use the flattened key (`connectionStrings:sql`) to access the secret.
Use the flattened key (`connectionStrings:sql`) to access the secret. The following JSON map is returned:
```json
{
"connectionStrings:sql": "your sql connection string"
}
```
If `multiValued` is `"true"`, you would instead use the top-level key. In this example, `connectionStrings` would return the following map:
```json
{
"sql": "your sql connection string",
"mysql": "your mysql connection string"
}
```
Nested structures below the top level are flattened. In the following example, `connectionStrings` returns the map shown in the response:
JSON from `secretsFile`:
```json
{
"redisPassword": "your redis password",
"connectionStrings": {
"mysql": {
"username": "your mysql username",
"password": "your mysql password"
}
}
}
```
Response:
```json
{
"mysql:username": "your mysql username",
"mysql:password": "your mysql password"
}
```
This is useful for mimicking secret stores like Vault or Kubernetes, which return multiple key/value pairs per secret key.
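As an illustration, assuming this file backs a component named `local-secret-store` (a placeholder name) and a Dapr sidecar listening on port 3500, a multi-valued lookup of the first example above through the secrets API might look like:
```bash
# "local-secret-store" and port 3500 are assumptions for this sketch
curl http://localhost:3500/v1.0/secrets/local-secret-store/connectionStrings
# => {"sql":"your sql connection string","mysql":"your mysql connection string"}
```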
## Related links
- [Secrets building block]({{< ref secrets >}})

View File

@ -8,7 +8,7 @@ aliases:
---
## Default Kubernetes secret store component
When Dapr is deployed to a Kubernetes cluster, a secret store with the name `kubernetes` is automatically provisioned. This pre-provisioned secret store allows you to use the native Kubernetes secret store with no need to author, deploy or maintain a component configuration file for the secret store and is useful for developers looking to simply access secrets stored natively in a Kubernetes cluster.
A custom component definition file for a Kubernetes secret store can still be configured (See below for details). Using a custom definition decouples referencing the secret store in your code from the hosting platform as the store name is not fixed and can be customized, keeping you code more generic and portable. Additionally, by explicitly defining a Kubernetes secret store component you can connect to a Kubernetes secret store from a local Dapr self-hosted installation. This requires a valid [`kubeconfig`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file.
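For instance, with the pre-provisioned `kubernetes` store an application can read a secret named `my-secret` through the sidecar (the secret name, namespace, and sidecar port here are illustrative):
```bash
curl "http://localhost:3500/v1.0/secrets/kubernetes/my-secret?metadata.namespace=default"
```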

View File

@ -9,8 +9,7 @@ aliases:
## Component format
To setup Azure Blobstorage state store create a component of type `state.azure.blobstorage`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
To setup the Azure Blob Storage state store create a component of type `state.azure.blobstorage`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -23,42 +22,105 @@ spec:
version: v1
metadata:
- name: accountName
value: <REPLACE-WITH-ACCOUNT-NAME>
value: "[your_account_name]"
- name: accountKey
value: <REPLACE-WITH-ACCOUNT-KEY>
value: "[your_account_key]"
- name: containerName
value: <REPLACE-WITH-CONTAINER-NAME>
value: "[your_container_name]"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| accountName | Y | The storage account name | `"mystorageaccount"`
| accountKey | Y | Primary or secondary storage key | `"key"`
| accountKey | Y (unless using Azure AD) | Primary or secondary storage key | `"key"`
| containerName | Y | The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist | `"container"`
| ContentType | N | The blobs content type | `"text/plain"`
| `azureEnvironment` | N | Optional name for the Azure environment if using a different Azure cloud | `"AZUREPUBLICCLOUD"` (default value), `"AZURECHINACLOUD"`, `"AZUREUSGOVERNMENTCLOUD"`, `"AZUREGERMANCLOUD"`
| ContentType | N | The blob's content type | `"text/plain"`
| ContentMD5 | N | The blob's MD5 hash | `"vZGKbMRDAnMs4BIwlXaRvQ=="`
| ContentEncoding | N | The blob's content encoding | `"UTF-8"`
| ContentLanguage | N | The blob's content language | `"en-us"`
| ContentDisposition | N | The blob's content disposition. Conveys additional information about how to process the response payload | `"attachment"`
| CacheControl | N | The blob's cache control | `"no-cache"`
## Setup Azure Blobstorage
## Setup Azure Blob Storage
[Follow the instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a container for Dapr to use, you can do so beforehand. However, Blob Storage state provider will create one for you automatically if it doesn't exist.
If you wish to create a container for Dapr to use, you can do so beforehand. However, the Blob Storage state provider will create one for you automatically if it doesn't exist.
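If you do want to pre-create the container yourself, a sketch with the Azure CLI might look like this (all names are placeholders, and the authentication flags depend on your setup):
```bash
az storage container create \
  --name [your_container_name] \
  --account-name [your_account_name] \
  --account-key [your_account_key]
```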
In order to set up Azure Blob Storage as a state store, you will need the following properties:
- **AccountName**: The storage account name. For example: **mystorageaccount**.
- **AccountKey**: Primary or secondary storage key.
- **ContainerName**: The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist.
- **accountName**: The storage account name. For example: **mystorageaccount**.
- **accountKey**: Primary or secondary storage account key.
- **containerName**: The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist.
### Authenticating with Azure AD
This component supports authentication with Azure AD as an alternative to account keys. Whenever possible, it is recommended that you use Azure AD for authentication in production systems, to take advantage of better security, fine-tuned access control, and the ability to use managed identities for apps running on Azure.
> The following scripts are optimized for a bash or zsh shell and require the following apps installed:
>
> - [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
> - [jq](https://stedolan.github.io/jq/download/)
>
> You must also be authenticated with Azure in your Azure CLI.
1. To get started with using Azure AD for authenticating the Blob Storage state store component, make sure you've created an Azure AD application and a Service Principal as explained in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
Once done, set a variable with the ID of the Service Principal that you created:
```sh
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
```
2. Set the following variables with the name of your Azure Storage Account and the name of the Resource Group where it's located:
```sh
STORAGE_ACCOUNT_NAME="[your_storage_account_name]"
RG_NAME="[your_resource_group_name]"
```
3. Using RBAC, assign a role to your Service Principal so it can access data inside the Storage Account.
In this case, you are assigning the "Storage Blob Data Contributor" role, which has broad access; other more restrictive roles can be used as well, depending on your application.
```sh
RG_ID=$(az group show --resource-group ${RG_NAME} | jq -r ".id")
az role assignment create \
--assignee "${SERVICE_PRINCIPAL_ID}" \
--role "Storage blob Data Contributor" \
--scope "${RG_ID}/providers/Microsoft.Storage/storageAccounts/${STORAGE_ACCOUNT_NAME}"
```
When authenticating your component using Azure AD, the `accountKey` field is not required. Instead, please specify the required credentials in the component's metadata (if any) according to the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
For example:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.azure.blobstorage
version: v1
metadata:
- name: accountName
value: "[your_account_name]"
- name: containerName
value: "[your_container_name]"
- name: azureTenantId
value: "[your_tenant_id]"
- name: azureClientId
value: "[your_client_id]"
- name: azureClientSecret
value : "[your_client_secret]"
```
## Apply the configuration
@ -66,16 +128,17 @@ In order to setup Azure Blob Storage as a state store, you will need the followi
To apply Azure Blob Storage state store to Kubernetes, use the `kubectl` CLI:
```sh
kubectl apply -f azureblob.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
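For example (the app ID and port are placeholders):
```bash
dapr run --app-id myservice --dapr-http-port 3500 --components-path ./components
```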
This state store creates a blob file in the container and puts raw state inside it.
For example, the following operation coming from service called `myservice`
For example, the following operation coming from a service called `myservice`:
```shell
curl -X POST http://localhost:3500/v1.0/state \
@ -88,13 +151,14 @@ curl -X POST http://localhost:3500/v1.0/state \
]'
```
creates the blob file in the containter with `key` as filename and `value` as the contents of file.
This creates the blob file in the container with `key` as the filename and `value` as the contents of the file.
## Concurrency
Azure Blob Storage state concurrency is achieved by using `ETag`s according to [the Azure Blob Storage documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-concurrency#managing-concurrency-in-blob-storage).
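For instance, a first-write-wins update that only succeeds while the stored ETag still matches might look like this sketch (the store name, key, value, and ETag are illustrative):
```bash
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "order_1",
          "value": "250",
          "etag": "[etag_from_a_previous_read]",
          "options": { "concurrency": "first-write" }
        }
      ]'
```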
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})

View File

@ -39,6 +39,10 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Primary Key
In order to use DynamoDB as a Dapr state store, the table must have a primary key named `key`.
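A table with the required key schema can be created with the AWS CLI, for example (the table name and billing mode are illustrative):
```bash
aws dynamodb create-table \
  --table-name dapr_state \
  --attribute-definitions AttributeName=key,AttributeType=S \
  --key-schema AttributeName=key,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```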
## Spec metadata fields
| Field | Required | Details | Example |

View File

@ -83,7 +83,7 @@ docker run --name some-mongo -d mongo
You can then interact with the server using `localhost:27017`.
If you do not specify a `databaseName` value in your component definition, make sure to create a database named `daprStore`.
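Since MongoDB creates databases lazily on first write, a sketch that forces creation of `daprStore` in the container started above might look like this (the inserted document is throwaway):
```bash
docker exec some-mongo mongo --eval 'db.getSiblingDB("daprStore").init.insertOne({ _init: true })'
```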
{{% /codetab %}}

View File

@ -28,8 +28,10 @@ spec:
value: "<SCHEMA NAME>"
- name: tableName
value: "<TABLE NAME>"
- name: pemPath
- name: pemPath # Required if pemContents not provided. Path to pem file.
value: "<PEM PATH>"
- name: pemContents # Required if pemPath not provided. Pem value.
value: "<PEM CONTENTS>"
```
{{% alert title="Warning" color="warning" %}}
@ -50,7 +52,8 @@ If you wish to use MySQL as an actor store, append the following to the yaml.
| connectionString | Y | The connection string to connect to MySQL. Do not add the schema to the connection string | [Non SSL connection](#non-ssl-connection): `"<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true"`, [Enforced SSL Connection](#enforced-ssl-connection): `"<user>:<password>@tcp(<server>:3306)/?allowNativePasswords=true&tls=custom"`|
| schemaName | N | The schema name to use. Will be created if schema does not exist. Defaults to `"dapr_state_store"` | `"custom_schema"`, `"dapr_schema"` |
| tableName | N | The table name to use. Will be created if table does not exist. Defaults to `"state"` | `"table_name"`, `"dapr_state"` |
| pemPath | N | Full path to the PEM file to use for [enforced SSL Connection](#enforced-ssl-connection) | `"/path/to/file.pem"`, `"C:\path\to\file.pem"` |
| pemPath | N | Full path to the PEM file to use for [enforced SSL Connection](#enforced-ssl-connection). Required if `pemContents` is not provided. Cannot be used in a Kubernetes environment | `"/path/to/file.pem"`, `"C:\path\to\file.pem"` |
| pemContents | N | Contents of the PEM file to use for [enforced SSL Connection](#enforced-ssl-connection). Required if `pemPath` is not provided. Can be used in a Kubernetes environment | `"pem value"` |
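Since `pemPath` cannot be used in a Kubernetes environment, one option is to load the PEM contents from a Kubernetes secret and reference it with a `secretKeyRef`; creating such a secret might look like this (the secret and key names are placeholders):
```bash
kubectl create secret generic mysql-pem --from-file=pem=/path/to/ca.pem
```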
## Setup MySQL

View File

@ -35,6 +35,8 @@ spec:
value: # Optional
- name: maxRetryBackoff
value: # Optional
- name: ttlInSeconds
value: <int> # Optional
```
**TLS:** If the Redis instance supports TLS with public certificates, it can be configured to enable or disable TLS with `true` or `false`.
@ -81,6 +83,7 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
| ttlInSeconds | N | Allows specifying a default Time-to-live (TTL) in seconds that will be applied to every state store request unless TTL is explicitly defined via the [request metadata]({{< ref "state-store-ttl.md" >}}). | `600`
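A per-request TTL that overrides this component-level default can be passed through request metadata, as in this sketch (the store name, key, and values are illustrative):
```bash
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{"key": "session_42", "value": "active", "metadata": {"ttlInSeconds": "120"}}]'
```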
## Setup Redis

View File

@ -41,7 +41,7 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
If you wish to use Redis as an [actor state store]({{< ref "state_api.md#configuring-state-store-for-actors" >}}), append the following to the yaml.
If you wish to use SQL Server as an [actor state store]({{< ref "state_api.md#configuring-state-store-for-actors" >}}), append the following to the yaml.
```yaml
- name: actorStateStore

View File

@ -0,0 +1 @@
{{ if .Get "short" }}1.4{{ else if .Get "long" }}1.4.0{{ else }}1.4.0{{ end }}

Binary file not shown.

After

Width:  |  Height:  |  Size: 447 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 587 KiB