Merge branch 'v1.4' into binding-s3-update

This commit is contained in:
Yaron Schneider 2021-09-01 11:31:24 -07:00 committed by GitHub
commit 78ac53cb6b
122 changed files with 1776 additions and 567 deletions

.github/workflows/dapr-bot.yml

@ -0,0 +1,32 @@
name: dapr-bot
on:
issue_comment: {types: created}
jobs:
daprbot:
name: bot-processor
runs-on: ubuntu-latest
steps:
- name: Comment analyzer
uses: actions/github-script@v1
with:
github-token: ${{secrets.DAPR_BOT_TOKEN}}
script: |
const payload = context.payload;
const issue = context.issue;
const isFromPulls = !!payload.issue.pull_request;
const commentBody = payload.comment.body;
if (!isFromPulls && commentBody && commentBody.indexOf("/assign") == 0) {
if (!issue.assignees || issue.assignees.length === 0) {
await github.issues.addAssignees({
owner: issue.owner,
repo: issue.repo,
issue_number: issue.number,
assignees: [context.actor],
})
}
return;
}


@ -3,11 +3,11 @@ name: Azure Static Web App Root
on:
push:
branches:
- v1.2
- v1.3
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.2
- v1.3
jobs:
build_and_deploy_job:


@ -1,13 +1,13 @@
name: Azure Static Web App v1.3
name: Azure Static Web App v1.4
on:
push:
branches:
- v1.3
- v1.4
pull_request:
types: [opened, synchronize, reopened, closed]
branches:
- v1.3
- v1.4
jobs:
build_and_deploy_job:
@ -27,7 +27,7 @@ jobs:
HUGO_ENV: production
HUGO_VERSION: "0.74.3"
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_3 }}
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_4 }}
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
skip_deploy_on_missing_secrets: true
action: "upload"
@ -48,6 +48,6 @@ jobs:
id: closepullrequest
uses: Azure/static-web-apps-deploy@v0.0.1-preview
with:
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_3 }}
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_4 }}
skip_deploy_on_missing_secrets: true
action: "close"

.gitignore

@ -4,3 +4,4 @@
node_modules/
daprdocs/public
daprdocs/resources/_gen
.venv/

.gitmodules

@ -17,3 +17,6 @@
[submodule "sdkdocs/go"]
path = sdkdocs/go
url = https://github.com/dapr/go-sdk.git
[submodule "sdkdocs/java"]
path = sdkdocs/java
url = https://github.com/dapr/java-sdk.git


@ -1,2 +0,0 @@
# These owners are the maintainers and approvers of this repo
* @maintainers-docs @approvers-docs


@ -14,8 +14,8 @@ The following branches are currently maintained:
| Branch | Website | Description |
|--------|---------|-------------|
| [v1.2](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here.
| [v1.3](https://github.com/dapr/docs/tree/v1.3) (pre-release) | https://v1-3.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.3+ go here.
| [v1.3](https://github.com/dapr/docs) (primary) | https://docs.dapr.io | Latest Dapr release documentation. Typo fixes, clarifications, and most documentation goes here.
| [v1.4](https://github.com/dapr/docs/tree/v1.4) (pre-release) | https://v1-4.docs.dapr.io/ | Pre-release documentation. Doc updates that are only applicable to v1.4+ go here.
For more information visit the [Dapr branch structure](https://docs.dapr.io/contributing/contributing-docs/#branch-guidance) document.


@ -1,5 +1,5 @@
# Site Configuration
baseURL = "https://v1-3.docs.dapr.io/"
baseURL = "https://v1-4.docs.dapr.io/"
title = "Dapr Docs"
theme = "docsy"
disableFastRender = true
@ -79,6 +79,14 @@ id = "UA-149338238-3"
source = "../sdkdocs/go/daprdocs/content/en/go-sdk-contributing"
target = "content/contributing/"
lang = "en"
[[module.mounts]]
source = "../sdkdocs/java/daprdocs/content/en/java-sdk-docs"
target = "content/developing-applications/sdks/java"
lang = "en"
[[module.mounts]]
source = "../sdkdocs/java/daprdocs/content/en/java-sdk-contributing"
target = "content/contributing/"
lang = "en"
[[module.mounts]]
source = "../translations/docs-zh/content/zh-hans"
@ -141,20 +149,23 @@ offlineSearch = false
github_repo = "https://github.com/dapr/docs"
github_project_repo = "https://github.com/dapr/dapr"
github_subdir = "daprdocs"
github_branch = "v1.3"
github_branch = "v1.4"
# Versioning
version_menu = "v1.3 (preview)"
version = "v1.3"
version_menu = "v1.4 (preview)"
version = "v1.4"
archived_version = false
url_latest_version = "https://docs.dapr.io"
[[params.versions]]
version = "v1.3 (preview)"
version = "v1.4 (preview)"
url = "#"
[[params.versions]]
version = "v1.2 (latest)"
version = "v1.3 (latest)"
url = "https://docs.dapr.io"
[[params.versions]]
version = "v1.2"
url = "https://v1-2.docs.dapr.io"
[[params.versions]]
version = "v1.1"
url = "https://v1-1.docs.dapr.io"


@ -1,6 +1,6 @@
---
type: docs
title: "Dapr Concepts"
title: "Dapr concepts"
linkTitle: "Concepts"
weight: 10
description: "Learn about Dapr including its main features and capabilities"


@ -0,0 +1,7 @@
---
type: docs
title: "Overview of the Dapr services"
linkTitle: "Dapr services"
weight: 800
description: "Learn about the services that make up the Dapr runtime"
---


@ -0,0 +1,12 @@
---
type: docs
title: "Dapr operator service overview"
linkTitle: "Operator"
description: "Overview of the Dapr operator process"
---
When running Dapr in [Kubernetes mode]({{< ref kubernetes >}}), a pod running the Dapr operator service manages [Dapr component]({{< ref components >}}) updates and provides Kubernetes service endpoints for Dapr.
## Running the operator service
The operator service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
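A quick way to confirm the operator is running (assuming Dapr was installed into the default `dapr-system` namespace) is to list the control plane pods:
```bash
# List the Dapr control plane pods; the operator runs as dapr-operator
kubectl get pods -n dapr-system
```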


@ -0,0 +1,16 @@
---
type: docs
title: "Dapr placement service overview"
linkTitle: "Placement"
description: "Overview of the Dapr placement process"
---
The Dapr placement service is used to calculate and distribute distributed hash tables for the location of [Dapr actors]({{< ref actors >}}) running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}). This hash table maps actor IDs to pods or processes so a Dapr application can communicate with the actor. Anytime a Dapr application activates a Dapr actor, the placement service updates the hash tables with the latest actor locations.
## Self-hosted mode
The placement service Docker container is started automatically as part of [`dapr init`]({{< ref self-hosted-with-docker.md >}}). It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
## Kubernetes mode
The placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
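As a sketch of the slim-init case, the placement binary installed by `dapr init --slim` can be started by hand; the paths below assume the default install locations:
```bash
# Linux/macOS: start the placement service manually
$HOME/.dapr/bin/placement

# Windows (PowerShell): the equivalent binary lives under the user profile
# & "$env:USERPROFILE\.dapr\bin\placement.exe"
```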


@ -0,0 +1,26 @@
---
type: docs
title: "Dapr sentry service overview"
linkTitle: "Sentry"
description: "Overview of the Dapr sentry process"
---
The Dapr sentry service manages mTLS between services and acts as a certificate authority. It generates mTLS certificates and distributes them to any running sidecars. This allows sidecars to communicate with encrypted, mTLS traffic. For more information read the [sidecar-to-sidecar communication overview]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}).
## Self-hosted mode
The sentry service Docker container is started automatically as part of [`dapr init`]({{< ref self-hosted-with-docker.md >}}). It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
<img src="/images/security-mTLS-sentry-selfhosted.png" width=1000>
## Kubernetes mode
The sentry service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
<img src="/images/security-mTLS-sentry-kubernetes.png" width=1000>
## Further reading
- [Security overview]({{< ref security-concept.md >}})
- [Self-hosted mode]({{< ref self-hosted-with-docker.md >}})
- [Kubernetes mode]({{< ref kubernetes >}})


@ -0,0 +1,12 @@
---
type: docs
title: "Dapr sidecar injector overview"
linkTitle: "Sidecar injector"
description: "Overview of the Dapr sidecar injector process"
---
When running Dapr in [Kubernetes mode]({{< ref kubernetes >}}), a pod is created running the dapr-sidecar-injector service, which looks for pods initialized with the [Dapr annotations]({{< ref arguments-annotations-overview.md >}}), and then creates another container in that pod for the [daprd service]({{< ref sidecar >}}).
## Running the sidecar injector
The sidecar injector service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
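For illustration, the injector only acts on pods whose template carries the Dapr annotations; the names and image below are placeholders:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"   # opt the pod into sidecar injection
        dapr.io/app-id: "myapp"   # unique Dapr application ID
        dapr.io/app-port: "5000"  # port the application listens on
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest
```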


@ -0,0 +1,51 @@
---
type: docs
title: "Dapr sidecar (daprd) overview"
linkTitle: "Sidecar"
weight: 100
description: "Overview of the Dapr sidecar process"
---
Dapr uses a [sidecar pattern]({{< ref "overview.md#sidecar-architecture" >}}), meaning the Dapr APIs are run and exposed on a separate process (i.e. the Dapr sidecar) running alongside your application. The Dapr sidecar process is named `daprd` and is launched in different ways depending on the hosting environment.
<img src="/images/overview-sidecar-model.png" width=700>
## Self-hosted with `dapr run`
When Dapr is installed in [self-hosted mode]({{<ref self-hosted>}}), the `daprd` binary is downloaded and placed under the user home directory (`$HOME/.dapr/bin` for Linux/MacOS or `%USERPROFILE%\.dapr\bin\` for Windows). In self-hosted mode, running the Dapr CLI [`run` command]({{< ref dapr-run.md >}}) launches the `daprd` executable together with the provided application executable. This is the recommended way of running the Dapr sidecar when working locally in scenarios such as development and testing. The various arguments the CLI exposes to configure the sidecar can be found in the [Dapr run command reference]({{<ref dapr-run>}}).
## Kubernetes with `dapr-sidecar-injector`
On [Kubernetes]({{< ref kubernetes.md >}}), the Dapr control plane includes the [dapr-sidecar-injector service]({{< ref kubernetes-overview.md >}}), which watches for new pods with the `dapr.io/enabled` annotation and injects a container with the `daprd` process within the pod. In this case, sidecar arguments can be passed through annotations as outlined in the **Kubernetes annotations** column in [this table]({{<ref arguments-annotations-overview>}}).
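One way to confirm the injection happened (assuming an app container named `myapp` and a matching `app=myapp` label, both placeholders) is to list the containers in the pod; a second container named `daprd` should appear:
```bash
kubectl get pods -l app=myapp -o jsonpath='{.items[0].spec.containers[*].name}'
# expected output: myapp daprd
```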
## Running the sidecar directly
In most cases you do not need to run `daprd` explicitly, as the sidecar is either launched by the CLI (self-hosted mode) or by the dapr-sidecar-injector service (Kubernetes). For advanced use cases (debugging, scripted deployments, etc.) the `daprd` process can be launched directly.
For a detailed list of all available arguments run `daprd --help` or see this [table]({{< ref arguments-annotations-overview.md >}}) which outlines how the `daprd` arguments relate to the CLI arguments and Kubernetes annotations.
### Examples
1. Start a sidecar alongside an application by specifying its unique ID. Note `--app-id` is a required field:
```bash
daprd --app-id myapp
```
2. Specify the port your application is listening to
```bash
daprd --app-id myapp --app-port 5000
```
3. If you are using several custom components and want to specify the location of the component definition files, use the `--components-path` argument:
```bash
daprd --app-id myapp --components-path <PATH-TO-COMPONENTS-FILES>
```
4. Enable collection of Prometheus metrics while running your app
```bash
daprd --app-id myapp --enable-metrics
```


@ -9,7 +9,9 @@ description: >
Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
<iframe width="1120" height="630" src="https://www.youtube.com/embed/9o9iDAgYBA8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<div class="embed-responsive embed-responsive-16by9">
<iframe width="1120" height="630" src="https://www.youtube.com/embed/9o9iDAgYBA8" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Any language, any framework, anywhere
@ -98,7 +100,7 @@ Dapr can be used from any developer framework. Here are some that have been inte
| Language | Frameworks | Description |
|----------|------------|-------------|
| [.NET]({{< ref dotnet >}}) | [ASP.NET]({{< ref dotnet-aspnet.md >}}) | Brings stateful routing controllers that respond to pub/sub events from other services. Can also take advantage of [ASP.NET Core gRPC Services](https://docs.microsoft.com/en-us/aspnet/core/grpc/).
| [Java](https://github.com/dapr/java-sdk) | [Spring Boot](https://spring.io/)
| [Java]({{< ref java >}}) | [Spring Boot](https://spring.io/)
| [Python]({{< ref python >}}) | [Flask]({{< ref python-flask.md >}})
| [Javascript](https://github.com/dapr/js-sdk) | [Express](http://expressjs.com/)
| [PHP]({{< ref php >}}) | | You can serve with Apache, Nginx, or Caddyserver.
@ -118,4 +120,4 @@ Dapr is designed for [operations]({{< ref operations >}}) and security. The Dapr
The [services dashboard](https://github.com/dapr/dashboard), installed via the Dapr CLI, provides a web-based UI enabling you to see information, view logs and more for the Dapr sidecars.
The [monitoring tools support]({{< ref monitoring >}}) provides deeper visibility into the Dapr system services and side-cars and the [observability capabilities]({{<ref "observability-concept.md">}}) of Dapr provide insights into your application such as tracing and metrics.
The [monitoring tools support]({{< ref monitoring >}}) provides deeper visibility into the Dapr system services and sidecars, and the [observability capabilities]({{<ref "observability-concept.md">}}) of Dapr provide insights into your application, such as tracing and metrics.


@ -35,6 +35,8 @@ Watch these recordings from the Dapr community calls showing presentations on ru
- General overview and a demo of [Dapr and Linkerd](https://youtu.be/xxU68ewRmz8?t=142)
- Demo of running [Dapr and Istio](https://youtu.be/ngIDOQApx8g?t=335)
Also, learn more about [running Dapr with Open Service Mesh (OSM)]({{<ref open-service-mesh>}}).
## When to choose using Dapr, a service mesh, or both
Should you be using Dapr, a service mesh, or both? The answer depends on your requirements. If, for example, you are looking to use Dapr for one or more building blocks such as state management or pub/sub, and you are considering using a service mesh just for network security or observability, you may find that Dapr is a good fit and that a service mesh is not required.


@ -2,7 +2,7 @@
type: docs
title: "Dapr terminology and definitions"
linkTitle: "Terminology"
weight: 800
weight: 900
description: Definitions for common terms and acronyms in the Dapr documentation
---


@ -274,7 +274,7 @@ Will result in the following output:
{{< code-snippet file="contributing-1.py" lang="python" marker="#SAMPLE" >}}
Use the `replace-key-[token]` and `replace-value-[token]` parameters to limit the embedded snipped to a portion of the sample file. This is useful when you want abbreviate a portion of the code sample. Multiple replacements are supported with multiple values of `token`.
Use the `replace-key-[token]` and `replace-value-[token]` parameters to limit the embedded snippet to a portion of the sample file. This is useful when you want to abbreviate a portion of the code sample. Multiple replacements are supported with multiple values of `token`.
The shortcode below and code sample:


@ -67,7 +67,7 @@ func configHandler(w http.ResponseWriter, r *http.Request) {
```
### Handling reentrant requests
The key to a reentrant request is the `Dapr-Reentrancy-Id` header. The value of this header is used to match requests to their call chain and allow them to bypass the actor's lock.
The key to a reentrant request is the `Dapr-Reentrancy-Id` header. The value of this header is used to match requests to their call chain and allow them to bypass the actor's lock.
The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below is a snippet of code from an actor handling this in Golang:


@ -25,7 +25,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more detai
Actors can save state reliably using state management capability.
You can interact with Dapr through HTTP/gRPC endpoints for state management.
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}). Only a single state store component can be used as the statestore for all actors.
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}). Only a single state store component can be used as the statestore for all actors.
## Actor timers and reminders
@ -77,7 +77,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using Dapr actor state provider.
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using Dapr actor state provider.
You can create a persistent reminder for an actor by calling the Http/gRPC request to Dapr.
@ -111,6 +111,36 @@ The following request body configures a reminder with a `dueTime` 15 seconds and
}
```
[ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) can also be used to specify `period`. The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 15 seconds.
```json
{
"dueTime":"0h0m0s0ms",
"period":"P0Y0M0W0DT0H0M15S"
}
```
The designators for zero are optional and the above `period` can be simplified to `PT15S`.
ISO 8601 specifies multiple recurrence formats but only the duration format is currently supported.
#### Reminders with repetitions
When configured with ISO 8601 durations, the `period` column also allows you to specify the number of times a reminder can run. The following request body creates a reminder that executes 5 times with a period of 15 seconds.
```json
{
"dueTime":"0h0m0s0ms",
"period":"R5/PT15S"
}
```
The number of repetitions, i.e. the number of times the reminder is run, must be a positive number.
**Example**
Watch this [video](https://www.youtube.com/watch?v=B_vkXqptpXY&t=1002s) for more information on using ISO 8601 for reminders.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/B_vkXqptpXY?start=1003" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
#### Retrieve actor reminder
You can retrieve the actor reminder by calling
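For example, using the HTTP API (the actor type, actor ID, and reminder name below are placeholders):
```bash
curl http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<reminderName>
```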
@ -139,6 +169,8 @@ You can configure the Dapr Actors runtime configuration to modify the default ru
- `drainOngoingCallTimeout` - The duration when in the process of draining rebalanced actors. This specifies the timeout for the current active actor method to finish. If there is no current actor method call, this is ignored. **Default: 60 seconds**
- `drainRebalancedActors` - If true, Dapr will wait for `drainOngoingCallTimeout` duration to allow a current actor call to complete before trying to deactivate an actor. **Default: true**
- `reentrancy` (ActorReentrancyConfig) - Configure the reentrancy behavior for an actor. If not provided, reentrancy is disabled. **Default: disabled**
- `remindersStoragePartitions` - Configure the number of partitions for actor's reminders. If not provided, all reminders are saved as a single record in actor's state store. **Default: 0**
{{< tabs Java Dotnet Python >}}
@ -152,6 +184,7 @@ ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(3
ActorRuntime.getInstance().getConfig().setDrainOngoingCallTimeout(Duration.ofSeconds(60));
ActorRuntime.getInstance().getConfig().setDrainBalancedActors(true);
ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(false, null);
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
```
See [this example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
@ -167,12 +200,13 @@ public void ConfigureServices(IServiceCollection services)
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
options.ActorScanInterval = TimeSpan.FromSeconds(30);
options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(60);
options.DrainRebalancedActors = true;
options.RemindersStoragePartitions = 7;
// reentrancy not implemented in the .NET SDK at this time
});
@ -194,7 +228,8 @@ ActorRuntime.set_actor_config(
actor_scan_interval=timedelta(seconds=30),
drain_ongoing_call_timeout=timedelta(minutes=1),
drain_rebalanced_actors=True,
reentrancy=ActorReentrancyConfig(enabled=False)
reentrancy=ActorReentrancyConfig(enabled=False),
remindersStoragePartitions=7
)
)
```
@ -203,3 +238,153 @@ ActorRuntime.set_actor_config(
{{< /tabs >}}
Refer to the documentation and examples of the [Dapr SDKs]({{< ref "developing-applications/sdks/#sdk-languages" >}}) for more details.
## Partitioning reminders
{{% alert title="Preview feature" color="warning" %}}
Actor reminders partitioning is currently in [preview]({{< ref preview-features.md >}}). Use this feature if you are running into issues due to a high number of reminders registered.
{{% /alert %}}
Actor reminders are persisted and continue to be triggered after sidecar restarts. Prior to Dapr runtime version 1.3, reminders were persisted on a single record in the actor state store:
| Key | Value |
| ----------- | ----------- |
| `actors\|\|<actor type>` | `[ <reminder 1>, <reminder 2>, ... , <reminder n> ]` |
Applications that register many reminders can experience the following issues:
* Low throughput on reminders registration and deregistration
* Limit on total number of reminders registered based on the single record size limit on the state store
Since version 1.3, applications can enable partitioning of actor reminders in the state store, distributing the data across multiple keys. First, there is a metadata record in `actors\|\|<actor type>\|\|metadata` that is used to store persisted configuration for a given actor type. Then, there are multiple records that store subsets of the reminders for the same actor type.
| Key | Value |
| ----------- | ----------- |
| `actors\|\|<actor type>\|\|metadata` | `{ "id": <actor metadata identifier>, "actorRemindersMetadata": { "partitionCount": <number of partitions for reminders> } }` |
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|1` | `[ <reminder 1-1>, <reminder 1-2>, ... , <reminder 1-n> ]` |
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|2` | `[ <reminder 2-1>, <reminder 2-2>, ... , <reminder 2-m> ]` |
| ... | ... |
If the number of partitions is not enough, it can be changed and Dapr's sidecar will automatically redistribute the reminders set.
### Enabling actor reminders partitioning
Actor reminders partitioning is currently in preview, so enabling it is a two-step process.
#### Preview feature configuration
Before using reminders partitioning, actor type metadata must be enabled in Dapr. For more information on preview configurations, see [the full guide on opting into preview features in Dapr]({{< ref preview-features.md >}}). Below is an example of the configuration:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: myconfig
spec:
features:
- name: Actor.TypeMetadata
enabled: true
```
#### Actor runtime configuration
Once actor type metadata is enabled as an opt-in preview feature, the actor runtime must also provide the appropriate configuration to partition actor reminders. This is done via the actor's `GET /dapr/config` endpoint, similar to other actor configuration elements.
{{< tabs Java Dotnet Python Go >}}
{{% codetab %}}
```java
// import io.dapr.actors.runtime.ActorRuntime;
// import java.time.Duration;
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
```
See [this example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
{{% /codetab %}}
{{% codetab %}}
```csharp
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
// Register actor runtime with DI
services.AddActors(options =>
{
// Register actor types and configure actor settings
options.Actors.RegisterActor<MyActor>();
// Configure default settings
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
options.ActorScanInterval = TimeSpan.FromSeconds(30);
options.RemindersStoragePartitions = 7;
// reentrancy not implemented in the .NET SDK at this time
});
// Register additional services for use with actors
services.AddSingleton<BankService>();
}
```
See the .NET SDK [documentation](https://github.com/dapr/dotnet-sdk/blob/master/daprdocs/content/en/dotnet-sdk-docs/dotnet-actors/dotnet-actors-usage.md#registering-actors).
{{% /codetab %}}
{{% codetab %}}
```python
from datetime import timedelta
ActorRuntime.set_actor_config(
ActorRuntimeConfig(
actor_idle_timeout=timedelta(hours=1),
actor_scan_interval=timedelta(seconds=30),
remindersStoragePartitions=7
)
)
```
{{% /codetab %}}
{{% codetab %}}
```go
type daprConfig struct {
Entities []string `json:"entities,omitempty"`
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
ActorScanInterval string `json:"actorScanInterval,omitempty"`
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
RemindersStoragePartitions int `json:"remindersStoragePartitions,omitempty"`
}
var daprConfigResponse = daprConfig{
[]string{defaultActorType},
actorIdleTimeout,
actorScanInterval,
drainOngoingCallTimeout,
drainRebalancedActors,
7,
}
func configHandler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(daprConfigResponse)
}
```
{{% /codetab %}}
{{< /tabs >}}
The following is an example of a valid configuration for reminder partitioning:
```json
{
"entities": [ "MyActorType", "AnotherActorType" ],
"remindersStoragePartitions": 7
}
```
#### Handling configuration changes
For production scenarios, there are some points to be considered before enabling this feature:
* Enabling actor type metadata can only be reverted if the number of partitions remains zero; otherwise the reminders set will be reverted to a previous state.
* The number of partitions can only be increased, not decreased. This allows Dapr to automatically redistribute the data on a rolling restart where one or more partition configurations might be active.
#### Demo
* [Actor reminder partitioning community call video](https://youtu.be/ZwFOEUYe1WA?t=1493)


@ -10,8 +10,10 @@ Output bindings enable you to invoke external resources without taking dependenc
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## 1. Create a binding
@ -93,4 +95,4 @@ You can check [here]({{< ref supported-bindings >}}) which operations are suppor
- [Binding API]({{< ref bindings_api.md >}})
- [Binding components]({{< ref bindings >}})
- [Binding detailed specifications]({{< ref supported-bindings >}})
- [Binding detailed specifications]({{< ref supported-bindings >}})


@ -6,7 +6,7 @@ weight: 1000
description: "Use Dapr tracing to get visibility for distributed application"
---
Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due to the ubiquity of the Zipkin protocol, many backends are supported out of the box, for examples [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io), [New Relic](https://newrelic.com) and others. Combining with the OpenTelemetry Collector, Dapr can export traces to many other backends including but not limted to [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due to the ubiquity of the Zipkin protocol, many backends are supported out of the box, for example [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io), [New Relic](https://newrelic.com) and others. Combined with the OpenTelemetry Collector, Dapr can export traces to many other backends including but not limited to [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), Instana, [Jaeger](https://www.jaegertracing.io/), and [SignalFX](https://www.signalfx.com/).
<img src="/images/tracing.png" width=600>


@ -72,7 +72,7 @@ In these scenarios Dapr does some of the work for you and you need to either cre
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context]({{< ref w3c-tracing >}}) article.
2. You have chosen to generate your own trace context headers.
This is much more unusual. There may be occassions where you specifically chose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done :
This is much more unusual. There may be occasions where you specifically choose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done:
1. You can use the industry standard OpenCensus/OpenTelemetry SDKs to generate trace headers and pass these trace headers to a Dapr enabled service. This is the preferred recommendation.


@ -416,6 +416,10 @@ app.post('/dsstatus', (req, res) => {
{{< /tabs >}}
{{% alert title="Note on message redelivery" color="primary" %}}
Some pubsub components (e.g. Redis) will redeliver a message if a response is not sent back within a specified time window. Make sure to configure metadata such as `processingTimeout` to customize this behavior. For more information refer to the respective [component references]({{< ref supported-pubsub >}}).
{{% /alert %}}
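As an illustrative sketch, a Redis pub/sub component could set `processingTimeout` in its metadata; the host and timeout values below are assumptions, so check the Redis component reference for the exact options:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: processingTimeout
    value: "30s"   # redeliver if no response within 30 seconds (illustrative)
```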
## (Optional) Step 5: Publishing a topic with code
{{< tabs Node PHP>}}


@ -93,7 +93,7 @@ Similarly, if two different applications (different app-IDs) subscribe to the sa
### Topic scoping
By default, all topics backing the Dapr pub/sub component (e.g. Kafka, Redis Stream, RabbitMQ) are available to every application configured with that component. To limit which application can publish or subscribe to topics, Dapr provides topic scoping. This enables to you say which topics an application is allowed to published and which topics an application is allowed to subscribed to. For more information read [publish/subscribe topic scoping]({{< ref pubsub-scopes.md >}}).
By default, all topics backing the Dapr pub/sub component (e.g. Kafka, Redis Stream, RabbitMQ) are available to every application configured with that component. To limit which application can publish or subscribe to topics, Dapr provides topic scoping. This enables you to say which topics an application is allowed to publish to and which topics an application is allowed to subscribe to. For more information read [publish/subscribe topic scoping]({{< ref pubsub-scopes.md >}}).
### Message Time-to-Live (TTL)
Dapr can set a timeout message on a per message basis, meaning that if the message is not read from the pub/sub component, then the message is discarded. This is to prevent the build up of messages that are not read. A message that has been in the queue for longer than the configured TTL is said to be dead. For more information read [publish/subscribe message time-to-live]({{< ref pubsub-message-ttl.md >}}).


@ -3,7 +3,7 @@ type: docs
title: "Pub/Sub without CloudEvents"
linkTitle: "Pub/Sub without CloudEvents"
weight: 7000
description: "Use Pub/Sub without CloudEvents."
description: "Use Pub/Sub without CloudEvents."
---
## Introduction
@ -83,8 +83,7 @@ Dapr apps are also able to subscribe to raw events coming from existing pub/sub
<img src="/images/pubsub_subscribe_raw.png" alt="Diagram showing how to subscribe with Dapr when publisher does not use Dapr or CloudEvent" width=1000>
### Programmatically subscribe to raw events
### Programmatically subscribe to raw events
When subscribing programmatically, add the additional metadata entry for `rawPayload` so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.
@ -148,10 +147,25 @@ $app->start();
{{< /tabs >}}
## Declaratively subscribe to raw events
Subscription Custom Resources Definitions (CRDs) do not currently contain metadata attributes ([issue #3225](https://github.com/dapr/dapr/issues/3225)). At this time subscribing to raw events can only be done through programmatic subscriptions.
Similarly, you can subscribe to raw events declaratively by adding the `rawPayload` metadata entry to your Subscription Custom Resource Definition (CRD):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
name: myevent-subscription
spec:
topic: deathStarStatus
route: /dsstatus
pubsubname: pubsub
metadata:
rawPayload: "true"
scopes:
- app1
- app2
```
## Related links


@ -32,7 +32,7 @@ To use this topic scoping three metadata properties can be set for a pub/sub com
- `spec.metadata.allowedTopics`
- A comma-separated list of allowed topics for all applications.
- If `allowedTopics` is not set (default behavior), all topics are valid. `subscriptionScopes` and `publishingScopes` still take place if present.
- `publishingScopes` or `subscriptionScopes` can be used in conjuction with `allowedTopics` to add granular limitations
- `publishingScopes` or `subscriptionScopes` can be used in conjunction with `allowedTopics` to add granular limitations
These metadata properties can be used for all pub/sub components. The following examples use Redis as pub/sub component.
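As a sketch of how the three properties fit together (the app IDs and topic names are made up):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: "localhost:6379"
  - name: publishingScopes
    value: "app1=topic1;app2=topic2,topic3"   # app1 may publish only to topic1
  - name: subscriptionScopes
    value: "app2=topic3"                      # app2 may subscribe only to topic3
  - name: allowedTopics
    value: "topic1,topic2,topic3"             # allow-list applied to all apps
```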
@ -158,7 +158,9 @@ The table below shows which application is allowed to subscribe to the topics:
## Demo
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/7VdWBBGcbHQ?start=513" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Related links


@ -42,6 +42,9 @@ spec:
Make sure to replace `<PATH TO SECRETS FILE>` with the path to the JSON file you just created.
>Note: the path to the secret store JSON is relative to where you call `dapr run` from.
To configure a different kind of secret store see the guidance on [how to configure a secret store]({{<ref setup-secret-store>}}) and review [supported secret stores]({{<ref supported-secret-stores >}}) to see specific details required for different secret store solutions.
## Get a secret
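As a minimal sketch, assuming the secret store component is named `my-secrets-store` and the JSON file contains a secret called `my-secret` (both placeholders), the secret can be read through the Dapr secrets API:
```bash
# Component name and secret key below are placeholders
curl http://localhost:3500/v1.0/secrets/my-secrets-store/my-secret
```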


@ -9,16 +9,21 @@ type: docs
You can read [guidance on setting up secret store components]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, by default *any* secret defined within that store is accessible from the Dapr application.
To limit the secrets to which the Dapr application has access to, you can can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
To limit the secrets to which the Dapr application has access, you can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
The secret scoping policy applies to any [secret store]({{< ref supported-secret-stores.md >}}), whether that is a local secret store, a Kubernetes secret store or a public cloud secret store. For details on how to set up a [secret store]({{< ref setup-secret-store.md >}}) read [How To: Retrieve a secret]({{< ref howto-secrets.md >}}).
Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to use secret scoping with your application.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="688" height="430" src="https://www.youtube.com/embed/j99RN_nxExA?start=2272" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Scenario 1 : Deny access to all secrets for a secret store
This example uses Kubernetes. The native Kubernetes secret store is added to your Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
In this example all secret access is denied to an application running on a Kubernetes cluster which has a configured [Kubernetes secret store]({{<ref kubernetes-secret-store>}}) named `mycustomsecretstore`. In the case of Kubernetes, aside from the user defined custom store, the default store named `kubernetes` is also addressed to ensure all secrets are denied access (See [here]({{<ref "kubernetes-secret-store.md#default-kubernetes-secret-store-component">}}) to learn more about the Kubernetes default secret store).
To add this configuration follow the steps below:
Define the following `appconfig.yaml` configuration and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
@ -32,6 +37,8 @@ spec:
scopes:
- storeName: kubernetes
defaultAccess: deny
- storeName: mycustomsecretstore
defaultAccess: deny
```
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod.


@ -1,7 +1,7 @@
---
type: docs
title: "How-To: Invoke services using HTTP"
linkTitle: "How-To: Invoke services"
linkTitle: "How-To: Invoke with HTTP"
description: "Call between services using service invocation"
weight: 2000
---
@ -18,13 +18,13 @@ Dapr allows you to assign a global, unique ID for your app. This ID encapsulates
In self hosted mode, set the `--app-id` flag:
```bash
dapr run --app-id cart --app-port 5000 python app.py
dapr run --app-id cart --dapr-http-port 3500 --app-port 5000 python app.py
```
If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection:
```bash
dapr run --app-id cart --app-port 5000 --app-ssl python app.py
dapr run --app-id cart --dapr-http-port 3500 --app-port 5000 --app-ssl python app.py
```
{{% /codetab %}}
@ -57,7 +57,7 @@ spec:
dapr.io/app-port: "5000"
...
```
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref kubernetes-annotations.md >}}))*
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref arguments-annotations-overview.md >}}))*
{{% /codetab %}}


@ -0,0 +1,307 @@
---
type: docs
title: "How-To: Invoke services using gRPC"
linkTitle: "How-To: Invoke with gRPC"
description: "Call between services using service invocation"
weight: 3000
---
This article describes how to use Dapr to connect services using gRPC.
By using Dapr's gRPC proxying capability, you can use your existing proto based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr Service Invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
1. Mutual authentication
2. Tracing
3. Metrics
4. Access lists
5. Network level resiliency
6. API token based authentication
## Step 1: Run a gRPC server
The following example is taken from the [hello world grpc-go example](https://github.com/grpc/grpc-go/tree/master/examples/helloworld).
Note this example is in Go, but applies to all programming languages supported by gRPC.
```go
package main
import (
"context"
"log"
"net"
"google.golang.org/grpc"
pb "google.golang.org/grpc/examples/helloworld/helloworld"
)
const (
port = ":50051"
)
// server is used to implement helloworld.GreeterServer.
type server struct {
pb.UnimplementedGreeterServer
}
// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
log.Printf("Received: %v", in.GetName())
return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}
func main() {
lis, err := net.Listen("tcp", port)
if err != nil {
log.Fatalf("failed to listen: %v", err)
}
s := grpc.NewServer()
pb.RegisterGreeterServer(s, &server{})
log.Printf("server listening at %v", lis.Addr())
if err := s.Serve(lis); err != nil {
log.Fatalf("failed to serve: %v", err)
}
}
```
This Go app implements the Greeter proto service and exposes a `SayHello` method.
### Run the gRPC server using the Dapr CLI
Since gRPC proxying is currently a preview feature, you need to opt in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: serverconfig
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: http://localhost:9411/api/v2/spans
features:
- name: proxy.grpc
enabled: true
```
Run the sidecar and the Go server:
```bash
dapr run --app-id server --app-protocol grpc --app-port 50051 --config config.yaml -- go run main.go
```
Using the Dapr CLI, we're assigning a unique id to the app, `server`, using the `--app-id` flag.
## Step 2: Invoke the service
The following example shows you how to discover the Greeter service using Dapr from a gRPC client.
Notice that instead of invoking the target service directly at port `50051`, the client is invoking its local Dapr sidecar over port `50007` which then provides all the capabilities of service invocation including service discovery, tracing, mTLS and retries.
```go
package main
import (
"context"
"log"
"time"
"google.golang.org/grpc"
pb "google.golang.org/grpc/examples/helloworld/helloworld"
"google.golang.org/grpc/metadata"
)
const (
address = "localhost:50007"
)
func main() {
// Set up a connection to the server.
conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
c := pb.NewGreeterClient(conn)
ctx, cancel := context.WithTimeout(context.Background(), time.Second*2)
defer cancel()
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
r, err := c.SayHello(ctx, &pb.HelloRequest{Name: "Darth Tyrannus"})
if err != nil {
log.Fatalf("could not greet: %v", err)
}
log.Printf("Greeting: %s", r.GetMessage())
}
```
The following line tells Dapr to discover and invoke an app named `server`:
```go
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
```
All languages supported by gRPC allow for adding metadata. Here are a few examples:
{{< tabs Java Dotnet Python JavaScript Ruby "C++">}}
{{% codetab %}}
```java
// Add the dapr-app-id header so the sidecar routes the call to the "server" app
Metadata headers = new Metadata();
headers.put(Metadata.Key.of("dapr-app-id", Metadata.ASCII_STRING_MARSHALLER), "server");
GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
stub = MetadataUtils.attachHeaders(stub, headers);
stub.sayHello(HelloRequest.newBuilder().setName("Darth Malak").build());
```
{{% /codetab %}}
{{% codetab %}}
```csharp
var metadata = new Metadata
{
{ "dapr-app-id", "server" }
};
var call = client.SayHello(new HelloRequest { Name = "Darth Nihilus" }, metadata);
```
{{% /codetab %}}
{{% codetab %}}
```python
metadata = (('dapr-app-id', 'server'),)
response = stub.SayHello(HelloRequest(name='Darth Revan'), metadata=metadata)
```
{{% /codetab %}}
{{% codetab %}}
```javascript
const metadata = new grpc.Metadata();
metadata.add('dapr-app-id', 'server');
client.sayHello({ name: "Darth Malgus" }, metadata, (err, response) => console.log(response.message));
```
{{% /codetab %}}
{{% codetab %}}
```ruby
metadata = { 'dapr-app-id' => 'server' }
response = service.say_hello(HelloRequest.new(name: 'Darth Bane'), metadata: metadata)
```
{{% /codetab %}}
{{% codetab %}}
```c++
grpc::ClientContext context;
context.AddMetadata("dapr-app-id", "server");
```
{{% /codetab %}}
{{< /tabs >}}
### Run the client using the Dapr CLI
Since gRPC proxying is currently a preview feature, you need to opt in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: serverconfig
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: http://localhost:9411/api/v2/spans
features:
- name: proxy.grpc
enabled: true
```
```bash
dapr run --app-id client --dapr-grpc-port 50007 --config config.yaml -- go run main.go
```
### View telemetry
If you're running Dapr locally with Zipkin installed, open the browser at `http://localhost:9411` and view the traces between the client and server.
## Deploying to Kubernetes
### Step 1: Apply the following configuration YAML using `kubectl`
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: serverconfig
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: http://localhost:9411/api/v2/spans
features:
- name: proxy.grpc
enabled: true
```
```bash
kubectl apply -f config.yaml
```
### Step 2: set the following Dapr annotations on your pod
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: grpc-app
namespace: default
labels:
app: grpc-app
spec:
replicas: 1
selector:
matchLabels:
app: grpc-app
template:
metadata:
labels:
app: grpc-app
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "server"
dapr.io/app-protocol: "grpc"
dapr.io/app-port: "50051"
dapr.io/config: "serverconfig"
...
```
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref arguments-annotations-overview.md >}}))*
The `dapr.io/app-protocol: "grpc"` annotation tells Dapr to invoke the app using gRPC.
The `dapr.io/config: "serverconfig"` annotation tells Dapr to use the configuration applied above that enables gRPC proxying.
### Namespaces
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID: `myApp.production`
For example, invoking the gRPC server on a different namespace:
```go
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server.production")
```
See the [Cross namespace API spec]({{< ref "service_invocation_api.md#cross-namespace-invocation" >}}) for more information on namespaces.
## Step 3: View traces and logs
The example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr outputs metrics, tracing and logging information allowing you to visualize a call graph between services, log errors and optionally log the payload body.
For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article.
## Related Links
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
* [gRPC proxying community call video](https://youtu.be/B_vkXqptpXY?t=70)


@ -83,7 +83,7 @@ Connection establishment via gRPC to the target sidecar has a timeout of 5 secon
### Pluggable service discovery
Dapr can run on a variety of [hosting platforms]({{< ref hosting >}}). To enable service discovery and service invocation, Dapr uses pluggable [name resolution components]({{< ref supported-name-resolution >}}). For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. Self-hosted machines can use the mDNS name resolution component. The Consul name resolution component can be used in any hosting environment including Kubernetes or self-hosted.
Dapr can run on a variety of [hosting platforms]({{< ref hosting >}}). To enable service discovery and service invocation, Dapr uses pluggable [name resolution components]({{< ref supported-name-resolution >}}). For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. Self-hosted machines can use the mDNS name resolution component. The Consul name resolution component can be used in any hosting environment including Kubernetes or self-hosted.
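A minimal sketch of selecting a name resolution component through the Dapr Configuration resource (the Consul values shown are illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"     # use the Consul name resolution component
    configuration:
      selfRegister: true    # let Dapr register the app with Consul (illustrative)
```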
### Round robin load balancing with mDNS
@ -103,6 +103,10 @@ By default, all calls between applications are traced and metrics are gathered t
The API for service invocation can be found in the [service invocation API reference]({{< ref service_invocation_api.md >}}) which describes how to invoke a method on another service.
### gRPC proxying
Dapr allows users to keep their own proto services and work natively with gRPC. This means that you can use service invocation to call your existing gRPC apps without having to include any Dapr SDKs or include custom gRPC services. For more information, see the [how-to tutorial for Dapr and gRPC]({{< ref howto-invoke-services-grpc.md >}}).
## Example
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A", and the Node.js app would be "Service B".
@ -124,6 +128,7 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
- Follow these guides on:
- [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
- [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
- [How-to: Invoke services using gRPC]({{< ref howto-invoke-services-grpc.md >}})
- Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
- Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
- Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers


@ -0,0 +1,101 @@
---
type: docs
title: "State Time-to-Live (TTL)"
linkTitle: "State TTL"
weight: 500
description: "Manage state with time-to-live."
---
## Introduction
Dapr enables a per state set request time-to-live (TTL). This means that applications can set a time-to-live per state stored, and these states cannot be retrieved after expiration.
Only a subset of Dapr [state store components]({{< ref supported-state-stores >}}) are compatible with state TTL. For supported state stores, simply set the `ttlInSeconds` metadata when saving state. Other state stores will ignore this value.
Some state stores can specify a default expiration on a per-table/container basis. Please refer to the official documentation of these state stores to utilize this feature if desired. Dapr supports per-state TTLs for supported state stores.
## Native state TTL support
When state time-to-live has native support in the state store component, Dapr simply forwards the time-to-live configuration without adding any extra logic, keeping predictable behavior. This is helpful when the expired state is handled differently by the component.
When a TTL is not specified the default behavior of the state store is retained.
## Persisting state (ignoring an existing TTL)
To explicitly persist a state (ignoring any TTLs set for the key), specify a `ttlInSeconds` value of `-1`.
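For example, using the HTTP API (store and key names are placeholders), this mirrors the set request shown later in this article but with a `-1` TTL:
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '[{ "key": "key1", "value": "value1", "metadata": { "ttlInSeconds": "-1" } }]' \
  http://localhost:3500/v1.0/state/statestore
```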
## Supported components
Please refer to the TTL column in the tables at [state store components]({{< ref supported-state-stores >}}).
## Example
State TTL can be set in the metadata as part of the state store set request:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
{{% codetab %}}
```bash
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1", "metadata": { "ttlInSeconds": "120" } }]' http://localhost:3500/v1.0/state/statestore
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{"key": "key1", "value": "value1", "metadata": {"ttlInSeconds": "120"}}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
{{% /codetab %}}
{{% codetab %}}
```python
from dapr.clients import DaprClient
with DaprClient() as d:
d.save_state(
store_name="statestore",
key="myFirstKey",
value="myFirstValue",
metadata=(
('ttlInSeconds', '120'),
)
)
print("State has been stored")
```
{{% /codetab %}}
{{% codetab %}}
Save the following in `state-example.php`:
```php
<?php
require_once __DIR__.'/vendor/autoload.php';
$app = \Dapr\App::create();
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
    $stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
        key: 'myFirstKey',
        value: 'myFirstValue',
        metadata: ['ttlInSeconds' => '120']
    ));
    $logger->alert('State has been stored');
});
```
{{% /codetab %}}
{{< /tabs >}}
See [this guide]({{< ref state_api.md >}}) for a reference on the state API.
## Related links
- Learn [how to use key value pairs to persist a state]({{< ref howto-get-save-state.md >}})
- List of [state store components]({{< ref supported-state-stores >}})
- Read the [API reference]({{< ref state_api.md >}})

View File

@ -6,7 +6,7 @@ weight: 300
description: "Debug Dapr apps locally which still connected to your Kubernetes cluster"
---
Bridge to Kubernetes allows you to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. This type of debugging is often called *local tunnel debugging*.
Bridge to Kubernetes allows you to run and debug code on your development computer, while still connected to your Kubernetes cluster with the rest of your application or services. This type of debugging is often called *local tunnel debugging*.
{{< button text="Learn more about Bridge to Kubernetes" link="https://aka.ms/bridge-vscode-dapr" >}}
@ -14,7 +14,9 @@ Bridge to Kubernetes allows you to run and debug code on your development comput
Bridge to Kubernetes supports debugging Dapr apps on your machine, while still having them interact with the services and applications running on your Kubernetes cluster. This example showcases Bridge to Kubernetes enabling a developer to debug the [distributed calculator quickstart](https://github.com/dapr/quickstarts/tree/master/distributed-calculator):
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/rxwg-__otso" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
{{% alert title="Isolation mode" color="warning" %}}
[Isolation mode](https://aka.ms/bridge-isolation-vscode-dapr) is currently not supported with Dapr apps. Make sure to launch Bridge to Kubernetes mode without isolation.

View File

@ -91,5 +91,5 @@ All done. Now you can point to port 40000 and start a remote debug session to da
- [Overview of Dapr on Kubernetes]({{< ref kubernetes-overview >}})
- [Deploy Dapr to a Kubernetes cluster]({{< ref kubernetes-deploy >}})
- [Debug Dapr services on Kubernetes]({{< ref debug-dapr-services >}})
- [Debug Dapr services on Kubernetes]({{< ref debug-dapr-services >}})
- [Dapr Kubernetes Quickstart](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes)

View File

@ -80,7 +80,7 @@ Add a new `<tool></tool>` entry:
</toolSet>
```
Optionally, you may also create a new entry for a sidecar tool that can be reused accross many projects:
Optionally, you may also create a new entry for a sidecar tool that can be reused across many projects:
```xml
<toolSet name="External Tools">

View File

@ -63,4 +63,7 @@ Using the VS Code extension, you can debug multiple Dapr applications at the sam
### Community call demo
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=85) on how to use the Dapr VS Code extension:
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/OtbYCBt9C34?start=85" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

View File

@ -0,0 +1,178 @@
---
type: docs
title: "How-To: Debug Dapr applications with Visual Studio Code"
linkTitle: "How-To: Debug with VSCode"
weight: 20000
description: "Learn how to configure VSCode to debug Dapr applications"
aliases:
- /developing-applications/ides/vscode/vscode-manual-configuration/
---
## Manual debugging
When developing Dapr applications, you typically use the Dapr CLI to start your daprized service similar to this:
```bash
dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
```
One approach to attaching the debugger to your service is to first run daprd with the correct arguments from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps and some instruction to developers who might want to clone your repo and hit the "play" button to begin debugging.
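As a rough sketch of that manual flow (the flags mirror the `dapr run` command above; port values are illustrative), you would start the sidecar yourself, launch the app with the inspector enabled, and then attach the debugger:
```bash
# start the sidecar manually for the node app (illustrative ports)
daprd --app-id nodeapp --app-port 3000 --dapr-http-port 3500 --dapr-grpc-port 50001 &

# run the app with the Node.js inspector enabled so a debugger can attach to it
node --inspect app.js
```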
If your application is a collection of microservices, each with a Dapr sidecar, it will be useful to debug them together in Visual Studio Code. This page will use the [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world) to showcase how to configure VSCode to debug multiple Dapr applications using [VSCode debugging](https://code.visualstudio.com/Docs/editor/debugging).
## Prerequisites
- Install the [Dapr extension]({{< ref vscode-dapr-extension.md >}}). You will be using the [tasks](https://code.visualstudio.com/docs/editor/tasks) it offers later on.
- Optionally clone the [hello world quickstart](https://github.com/dapr/quickstarts/tree/master/hello-world)
## Step 1: Configure launch.json
The file `.vscode/launch.json` contains [launch configurations](https://code.visualstudio.com/Docs/editor/debugging#_launch-configurations) for a VS Code debug run. This file defines what will launch and how it is configured when the user begins debugging. Configurations are available for each programming language in the [Visual Studio Code marketplace](https://marketplace.visualstudio.com/VSCode).
{{% alert title="Scaffold debugging configuration" color="primary" %}}
The [Dapr VSCode extension]({{< ref vscode-dapr-extension.md >}}) offers built-in scaffolding to generate `launch.json` and `tasks.json` for you.
{{< button text="Learn more" page="vscode-dapr-extension#scaffold-dapr-components" >}}
{{% /alert %}}
In the case of the hello world quickstart, two applications are launched, each with its own Dapr sidecar. One is written in Node.js, and the other in Python. You'll notice each configuration contains a `daprd run` preLaunchTask and a `daprd stop` postDebugTask.
```json
{
"version": "0.2.0",
"configurations": [
{
"type": "pwa-node",
"request": "launch",
"name": "Nodeapp with Dapr",
"skipFiles": [
"<node_internals>/**"
],
"program": "${workspaceFolder}/app.js",
"preLaunchTask": "daprd-debug-node",
"postDebugTask": "daprd-down-node"
},
{
"type": "python",
"request": "launch",
"name": "Pythonapp with Dapr",
"program": "${workspaceFolder}/app.py",
"console": "integratedTerminal",
"preLaunchTask": "daprd-debug-python",
"postDebugTask": "daprd-down-python"
}
]
}
```
Each configuration requires a `request`, `type` and `name`. These parameters help VSCode identify the task configurations in the `.vscode/tasks.json` file.
- `type` defines the language used. Depending on the language, it might require an extension found in the marketplace, such as the [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python).
- `name` is a unique name for the configuration. This is used for compound configurations when calling multiple configurations in your project.
- `${workspaceFolder}` is a VS Code variable reference. This is the path to the workspace opened in VS Code.
- The `preLaunchTask` and `postDebugTask` parameters refer to the program configurations run before and after launching the application. See step 2 on how to configure these.
For more information on VSCode debugging parameters see [VS Code launch attributes](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes).
## Step 2: Configure tasks.json
For each [task](https://code.visualstudio.com/docs/editor/tasks) referenced in `.vscode/launch.json`, a corresponding task definition must exist in `.vscode/tasks.json`.
For the quickstart, each service needs a task to launch a Dapr sidecar with the `daprd` type, and a task to stop the sidecar with `daprd-down`. The parameters `appId`, `httpPort`, `metricsPort`, `label` and `type` are required. Additional optional parameters are available, see the [reference table here](#daprd-parameter-table).
```json
{
"version": "2.0.0",
"tasks": [
{
"label": "daprd-debug-node",
"type": "daprd",
"appId": "nodeapp",
"appPort": 3000,
"httpPort": 3500,
"metricsPort": 9090
},
{
"label": "daprd-down-node",
"type": "daprd-down",
"appId": "nodeapp"
},
{
"label": "daprd-debug-python",
"type": "daprd",
"appId": "pythonapp",
"httpPort": 53109,
"grpcPort": 53317,
"metricsPort": 9091
},
{
"label": "daprd-down-python",
"type": "daprd-down",
"appId": "pythonapp"
}
]
}
```
## Step 3: Configure a compound launch in launch.json
A compound launch configuration can be defined in `.vscode/launch.json` and is a set of two or more launch configurations that are launched in parallel. Optionally, a `preLaunchTask` can be specified and run before the individual debug sessions are started.
For this example the compound configuration is:
```json
{
    "version": "0.2.0",
    "configurations": [...],
    "compounds": [
        {
            "name": "Node/Python Dapr",
            "configurations": ["Nodeapp with Dapr", "Pythonapp with Dapr"]
        }
    ]
}
```
## Step 4: Launch your debugging session
You can now run the applications in debug mode by selecting the compound configuration name you defined in the previous step from the VS Code debugger:
<img src="/images/vscode-launch-configuration.png" width=400 >
You are now debugging multiple applications with Dapr!
## Daprd parameter table
Below are the supported parameters for VS Code tasks. These parameters are equivalent to `daprd` arguments as detailed in [this reference]({{< ref arguments-annotations-overview.md >}}):
| Parameter | Description | Required | Example |
|--------------|---------------|-------------|---------|
| `allowedOrigins` | Allowed HTTP origins (default "\*") | No | `"allowedOrigins": "*"`
| `appId`| The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID | Yes | `"appId": "divideapp"`
| `appMaxConcurrency` | Limit the concurrency of your application. A valid value is any number larger than 0 | No | `"appMaxConcurrency": -1`
| `appPort` | This parameter tells Dapr which port your application is listening on | Yes | `"appPort": 4000`
| `appProtocol` | Tells Dapr which protocol your application is using. Valid options are http and grpc. Default is http | No | `"appProtocol": "http"`
| `appSsl` | Sets the URI scheme of the app to https and attempts an SSL connection | No | `"appSsl": true`
| `args` | Sets a list of arguments to pass on to the Dapr app | No | `"args": []`
| `componentsPath` | Path for components directory. If empty, components will not be loaded. | No | `"componentsPath": "./components"`
| `config` | Tells Dapr which Configuration CRD to use | No | `"config": "./config"`
| `controlPlaneAddress` | Address for a Dapr control plane | No | `"controlPlaneAddress": "http://localhost:1366/"`
| `enableProfiling` | Enable profiling | No | `"enableProfiling": false`
| `enableMtls` | Enables automatic mTLS for daprd to daprd communication channels | No | `"enableMtls": false`
| `grpcPort` | gRPC port for the Dapr API to listen on (default “50001”) | Yes, if multiple apps | `"grpcPort": 50004`
| `httpPort` | The HTTP port for the Dapr API | Yes | `"httpPort": 3502`
| `internalGrpcPort` | gRPC port for the Dapr Internal API to listen on | No | `"internalGrpcPort": 50001`
| `logAsJson` | Setting this parameter to true outputs logs in JSON format. Default is false | No | `"logAsJson": false`
| `logLevel` | Sets the log level for the Dapr sidecar. Allowed values are debug, info, warn, error. Default is info | No | `"logLevel": "debug"`
| `metricsPort` | Sets the port for the sidecar metrics server. Default is 9090 | Yes, if multiple apps | `"metricsPort": 9093`
| `mode` | Runtime mode for Dapr (default “standalone”) | No | `"mode": "standalone"`
| `placementHostAddress` | Addresses for Dapr Actor Placement servers | No | `"placementHostAddress": "http://localhost:1313/"`
| `profilePort` | The port for the profile server (default “7777”) | No | `"profilePort": 7777`
| `sentryAddress` | Address for the Sentry CA service | No | `"sentryAddress": "http://localhost:1345/"`
| `type` | Tells VS Code it will be a daprd task type | Yes | `"type": "daprd"`
## Related Links
- [Visual Studio Code Extension Overview]({{< ref vscode-dapr-extension.md >}})
- [Visual Studio Code Debugging](https://code.visualstudio.com/docs/editor/debugging)

View File

@ -1,168 +0,0 @@
---
type: docs
title: "Visual Studio Code manual debugging configuration"
linkTitle: "Manual debugging"
weight: 30000
description: "How to manually setup Visual Studio Code debugging"
---
The [Dapr VSCode extension]({{< ref vscode-dapr-extension.md >}}) automates the setup of [VSCode debugging](https://code.visualstudio.com/Docs/editor/debugging).
If instead you wish to manually configure the `[tasks.json](https://code.visualstudio.com/Docs/editor/tasks)` and `[launch.json](https://code.visualstudio.com/Docs/editor/debugging)` files to use Dapr, these are the steps.
When developing Dapr applications, you typically use the Dapr cli to start your daprized service similar to this:
```bash
dapr run --app-id nodeapp --app-port 3000 --dapr-http-port 3500 app.js
```
One approach to attaching the debugger to your service is to first run daprd with the correct arguments from the command line and then launch your code and attach the debugger. While this is a perfectly acceptable solution, it does require a few extra steps and some instruction to developers who might want to clone your repo and hit the "play" button to begin debugging.
Using the [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) and [launch.json](https://code.visualstudio.com/Docs/editor/debugging) files in Visual Studio Code, you can simplify the process and request that VS Code kick off the daprd process prior to launching the debugger.
#### Modifying launch.json configurations to include a preLaunchTask
In your [launch.json](https://code.visualstudio.com/Docs/editor/debugging) file add a [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) for each configuration that you want daprd launched. The [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) references tasks that you define in your tasks.json file. Here is an example for both Node and .NET Core. Notice the [preLaunchTasks](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) referenced: daprd-web and daprd-leaderboard.
```json
{
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "launch",
"name": "Node Launch w/Dapr (Web)",
"preLaunchTask": "daprd-web",
"program": "${workspaceFolder}/Game/Web/server.js",
"skipFiles": [
"<node_internals>/**"
]
},
{
"type": "coreclr",
"request": "launch",
"name": ".NET Core Launch w/Dapr (LeaderboardService)",
"preLaunchTask": "daprd-leaderboard",
"program": "${workspaceFolder}/Game/Services/LeaderboardService/bin/Debug/netcoreapp3.0/LeaderboardService.dll",
"args": [],
"cwd": "${workspaceFolder}/Game/Services/LeaderboardService",
"stopAtEntry": false,
"serverReadyAction": {
"action": "openExternally",
"pattern": "^\\s*Now listening on:\\s+(https?://\\S+)"
},
"env": {
"ASPNETCORE_ENVIRONMENT": "Development"
},
"sourceFileMap": {
"/Views": "${workspaceFolder}/Views"
}
}
]
}
```
#### Adding daprd tasks to tasks.json
You need to define a task and problem matcher for daprd in your [tasks.json](https://code.visualstudio.com/Docs/editor/tasks) file. Here are two examples (both referenced via the [preLaunchTask](https://code.visualstudio.com/Docs/editor/debugging#_launchjson-attributes) members above). Notice that in the case of the .NET Core daprd task (daprd-leaderboard) there is also a [dependsOn](https://code.visualstudio.com/Docs/editor/tasks#_compound-tasks) member that references the build task to ensure the latest code is being run/debugged. The [problemMatcher](https://code.visualstudio.com/Docs/editor/tasks#_defining-a-problem-matcher) is used so that VSCode can understand when the daprd process is up and running.
Let's take a quick look at the args that are being passed to the daprd command.
* -app-id -- the id (how you locate it via service invocation) of your microservice
* -app-port -- the port number that your application code is listening on
* -dapr-http-port -- the http port for the dapr api
* -dapr-grpc-port -- the grpc port for the dapr api
* -placement-host-address -- the location of the placement service (this should be running in docker as it was created when you installed dapr and ran ```dapr init```)
>Note: You need to ensure that you specify different http/grpc (-dapr-http-port and -dapr-grpc-port) ports for each daprd task that you create, otherwise you run into port conflicts when you attempt to launch the second configuration.
```json
{
"version": "2.0.0",
"tasks": [
{
"label": "build",
"command": "dotnet",
"type": "process",
"args": [
"build",
"${workspaceFolder}/Game/Services/LeaderboardService/LeaderboardService.csproj",
"/property:GenerateFullPaths=true",
"/consoleloggerparameters:NoSummary"
],
"problemMatcher": "$msCompile"
},
{
"label": "daprd-web",
"command": "daprd",
"args": [
"-app-id",
"whac-a-mole--web",
"-app-port",
"3000",
"-dapr-http-port",
"51000",
"-dapr-grpc-port",
"52000",
"-placement-host-address",
"localhost:50005"
],
"isBackground": true,
"problemMatcher": {
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"beginsPattern": "^.*starting Dapr Runtime.*",
"endsPattern": "^.*waiting on port.*"
}
}
},
{
"label": "daprd-leaderboard",
"command": "daprd",
"args": [
"-app-id",
"whac-a-mole--leaderboard",
"-app-port",
"5000",
"-dapr-http-port",
"51001",
"-dapr-grpc-port",
"52001",
"-placement-host-address",
"localhost:50005"
],
"isBackground": true,
"problemMatcher": {
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"beginsPattern": "^.*starting Dapr Runtime.*",
"endsPattern": "^.*waiting on port.*"
}
},
"dependsOn": "build"
}
]
}
```
#### Wrapping up
Once you have made the required changes, you should be able to switch to the [debug](https://code.visualstudio.com/Docs/editor/debugging) view in VSCode and launch your daprized configurations by clicking the "play" button. If everything was configured correctly, you should see daprd launch in the VSCode terminal window and the [debugger](https://code.visualstudio.com/Docs/editor/debugging) should attach to your application (you should see it's output in the debug window).
{{% alert title="Note" color="primary" %}}
Since you didn't launch the service(s) using the **dapr** ***run*** cli command, but instead by running **daprd**, the **dapr** ***list*** command will not show a list of apps that are currently running.
{{% /alert %}}

View File

@ -2,7 +2,7 @@
type: docs
title: "Developing Dapr applications with remote dev containers"
linkTitle: "Remote dev containers"
weight: 20000
weight: 30000
description: "How to setup a remote dev container environment with Dapr"
---
@ -28,4 +28,7 @@ Dapr has pre-built Docker remote containers for NodeJS and C#. You can pick the
#### Example
Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application.
<iframe width="560" height="315" src="https://www.youtube.com/embed/D2dO4aGpHcg?start=120" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/D2dO4aGpHcg?start=120" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

View File

@ -15,7 +15,7 @@ You can find a list of auto-generated clients [here](https://github.com/dapr/doc
The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC.
In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implements the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto)
In addition to calling Dapr via gRPC, Dapr supports service to service calls with gRPC by acting as a proxy. See more information [here]({{< ref howto-invoke-services-grpc.md >}}).
## Configuring Dapr to communicate with an app via gRPC
@ -177,7 +177,7 @@ func (s *server) ListInputBindings(ctx context.Context, in *empty.Empty) (*pb.Li
}, nil
}
// This method gets invoked every time a new event is fired from a registerd binding. The message carries the binding name, a payload and optional metadata
// This method gets invoked every time a new event is fired from a registered binding. The message carries the binding name, a payload and optional metadata
func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest) (*pb.BindingEventResponse, error) {
fmt.Println("Invoked from binding")
return &pb.BindingEventResponse{}, nil

View File

@ -22,7 +22,9 @@ Users are able to leverage both OSM SMI traffic policies and Dapr capabilities o
Watch the OSM team present the OSM and Dapr integration in the 05/18/2021 community call:
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/LSYyTL0nS8Y?start=1916" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Additional resources

View File

@ -80,28 +80,28 @@ Prerequisites:
1. Set up the environment variables containing the Azure Storage Account credentials:
{{< tabs Windows "macOS/Linux" >}}
{{% codetab %}}
```bash
set STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
set STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}
{{% codetab %}}
```bash
export STORAGE_ACCOUNT_KEY=<YOUR-STORAGE-ACCOUNT-KEY>
export STORAGE_ACCOUNT_NAME=<YOUR-STORAGE-ACCOUNT-NAME>
```
{{% /codetab %}}
{{< /tabs >}}
1. Move to the workflows directory and run the sample runtime:
```bash
cd src/Dapr.Workflows
dapr run --app-id workflows --protocol grpc --port 3500 --app-port 50003 -- dotnet run --workflows-path ../../samples
```
@ -109,8 +109,8 @@ Prerequisites:
```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1
{"value":"Hello from Logic App workflow running with Dapr!"}
{"value":"Hello from Logic App workflow running with Dapr!"}
```
### Kubernetes
@ -153,7 +153,7 @@ Prerequisites:
```bash
curl http://localhost:3500/v1.0/invoke/workflows/method/workflow1
{"value":"Hello from Logic App workflow running with Dapr!"}
```
@ -186,42 +186,44 @@ Prerequisites:
1. Next, apply the Dapr component:
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
Place the binding yaml file above in a `components` directory at the root of your application.
{{% /codetab %}}
{{% codetab %}}
```bash
kubectl apply -f my_binding.yaml
```
{{% /codetab %}}
{{< /tabs >}}
1. Once an event is sent to the bindings component, check the logs of Dapr Workflows to see the output.
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
In standalone mode, the output will be printed to the local terminal.
{{% /codetab %}}
{{% codetab %}}
On Kubernetes, run the following command:
```bash
kubectl logs -l app=dapr-workflows-host -c host
```
{{% /codetab %}}
{{< /tabs >}}
## Example
Watch an example from the Dapr community call:
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/7fP-0Ixmi-w?start=116" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Additional resources

View File

@ -33,7 +33,7 @@ The Dapr SDKs are the easiest way for you to get Dapr into your application. Cho
|----------|:------|:----------:|:-----------:|:---------:|
| [.NET]({{< ref dotnet >}}) | Stable | ✔ | [ASP.NET Core]({{< ref dotnet-aspnet >}}) | ✔ |
| [Python]({{< ref python >}}) | Stable | ✔ | [gRPC]({{< ref python-grpc.md >}}) | [FastAPI]({{< ref python-fastapi.md >}})<br />[Flask]({{< ref python-flask.md >}}) |
| [Java](https://github.com/dapr/java-sdk) | Stable | ✔ | Spring Boot | ✔ |
| [Java]({{< ref java >}}) | Stable | ✔ | Spring Boot | ✔ |
| [Go]({{< ref go >}}) | Stable | ✔ | ✔ | |
| [PHP]({{< ref php >}}) | Stable | ✔ | ✔ | ✔ |
| [C++](https://github.com/dapr/cpp-sdk) | In development | ✔ | |

View File

@ -125,6 +125,9 @@ spec:
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
This example uses the kubernetes secret that was created when setting up a cluster with the above instructions.
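If your cluster was set up some other way, a secret with this name and key can be created manually; a minimal sketch (the password value is a placeholder):
```bash
kubectl create secret generic redis --from-literal=redis-password=<your-redis-password>
```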
@ -153,6 +156,9 @@ spec:
secretKeyRef:
name: redis
key: redis-password
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
This example uses the kubernetes secret that was created when setting up a cluster with the above instructions.
@ -179,6 +185,9 @@ spec:
value: <HOST>
- name: redisPassword
value: <PASSWORD>
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
```yaml
@ -195,6 +204,9 @@ spec:
value: <HOST>
- name: redisPassword
value: <PASSWORD>
# uncomment below for connecting to redis cache instances over TLS (ex - Azure Redis Cache)
# - name: enableTLS
# value: true
```
## Apply the configuration

View File

@ -55,7 +55,7 @@ spec:
value: ":"
```
You can see that the above file definition has a `type: secretstores.local.file` which tells Dapr to use the local file component as a secret store. The metadata fields provide component specific information needed to work with this component (in this case, the path to the secret store JSON)
You can see that the above file definition has a `type: secretstores.local.file` which tells Dapr to use the local file component as a secret store. The metadata fields provide component specific information needed to work with this component (in this case, the path to the secret store JSON is relative to where you call `dapr run` from.)
## Step 3: Run the Dapr sidecar

View File

@ -12,40 +12,62 @@ Begin by downloading and installing the Dapr CLI:
{{< tabs Linux Windows MacOS Binaries>}}
{{% codetab %}}
### Install from Terminal
This command installs the latest linux Dapr CLI to `/usr/local/bin`:
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
This Command Prompt command installs the latest windows Dapr cli to `C:\dapr` and adds this directory to User PATH environment variable.
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
### Install from Command Prompt
This Command Prompt command installs the latest windows Dapr cli to `C:\dapr` and adds this directory to User PATH environment variable.
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
### Install without administrative rights
If you do not have admin rights you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```powershell
$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList "", "$HOME/dapr"
```
{{% /codetab %}}
{{% codetab %}}
### Install from Terminal
This command installs the latest darwin Dapr CLI to `/usr/local/bin`:
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
Or you can install via [Homebrew](https://brew.sh):
### Install from Homebrew
You can install via [Homebrew](https://brew.sh):
```bash
brew install dapr/tap/dapr-cli
```
{{% alert title="Note for M1 Macs" color="primary" %}}
#### Note for M1 Macs
For M1 Macs, Homebrew is not supported. You will need to use the Dapr install script and have the Rosetta amd64 compatibility layer installed. If you do not have it installed already, you can run the following:
```bash
softwareupdate --install-rosetta
```
{{% /alert %}}
### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
@ -54,7 +76,7 @@ Each release of Dapr CLI includes various OSes and architectures. These binary v
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location.
- For Linux/MacOS - `/usr/local/bin`
- For Linux/MacOS `/usr/local/bin` is recommended.
- For Windows, create a directory and add it to your User PATH. For example, create a directory called `C:\dapr` and add this directory to your User PATH by editing your system environment variables.
{{% /codetab %}}
{{< /tabs >}}

View File

@ -7,7 +7,7 @@ aliases:
- /getting-started/install-dapr/
---
Now that you have the [Dapr CLI installed]({{<ref install-dapr-cli.md>}}), it's time to initialize Dapr on your local machine using the CLI.
Now that you have the [Dapr CLI installed]({{<ref install-dapr-cli.md>}}), it's time to initialize Dapr on your local machine using the CLI.
Dapr runs as a sidecar alongside your application, and in self-hosted mode this means it is a process on your local machine. Therefore, initializing Dapr includes fetching the Dapr sidecar binaries and installing them locally.
@ -29,11 +29,11 @@ This recommended development environment requires [Docker](https://docs.docker.c
{{% codetab %}}
If you run your Docker commands with sudo, or the install path is `/usr/local/bin` (default install path), you will need to use `sudo` below.
{{% /codetab %}}
{{% codetab %}}
Make sure that you run Command Prompt as administrator (right click, run as administrator)
{{% /codetab %}}
{{< /tabs >}}
### Step 2: Run the init CLI command
@ -52,8 +52,8 @@ dapr --version
Output should look like this:
```
CLI version: 1.2.0
Runtime version: 1.2.0
CLI version: 1.3.0
Runtime version: 1.3.0
```
### Step 4: Verify containers are running

View File

@ -17,11 +17,11 @@ The [Dapr Quickstarts](https://github.com/dapr/quickstarts/tree/v1.0.0) are a co
| Quickstart | Description |
|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Hello World](https://github.com/dapr/quickstarts/tree/v1.0.0/hello-world) | Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
| [Hello Kubernetes](https://github.com/dapr/quickstarts/tree/v1.0.0/hello-kubernetes) | Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/v1.0.0/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
| [Pub/Sub](https://github.com/dapr/quickstarts/tree/v1.0.0/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
| [Bindings](https://github.com/dapr/quickstarts/tree/v1.0.0/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
| [Middleware](https://github.com/dapr/quickstarts/tree/v1.0.0/middleware) | Demonstrates use of Dapr middleware to enable OAuth 2.0 authorization. |
| [Observability](https://github.com/dapr/quickstarts/tree/v1.0.0/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
| [Secret Store](https://github.com/dapr/quickstarts/tree/v1.0.0/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |
| [Hello World](https://github.com/dapr/quickstarts/tree/v1.3.0/hello-world) | Demonstrates how to run Dapr locally. Highlights service invocation and state management. |
| [Hello Kubernetes](https://github.com/dapr/quickstarts/tree/v1.3.0/hello-kubernetes) | Demonstrates how to run Dapr in Kubernetes. Highlights service invocation and state management. |
| [Distributed Calculator](https://github.com/dapr/quickstarts/tree/v1.3.0/distributed-calculator) | Demonstrates a distributed calculator application that uses Dapr services to power a React web app. Highlights polyglot (multi-language) programming, service invocation and state management. |
| [Pub/Sub](https://github.com/dapr/quickstarts/tree/v1.3.0/pub-sub) | Demonstrates how to use Dapr to enable pub-sub applications. Uses Redis as a pub-sub component. |
| [Bindings](https://github.com/dapr/quickstarts/tree/v1.3.0/bindings) | Demonstrates how to use Dapr to create input and output bindings to other components. Uses bindings to Kafka. |
| [Middleware](https://github.com/dapr/quickstarts/tree/v1.3.0/middleware) | Demonstrates use of Dapr middleware to enable OAuth 2.0 authorization. |
| [Observability](https://github.com/dapr/quickstarts/tree/v1.3.0/observability) | Demonstrates Dapr tracing capabilities. Uses Zipkin as a tracing component. |
| [Secret Store](https://github.com/dapr/quickstarts/tree/v1.3.0/secretstore) | Demonstrates the use of Dapr Secrets API to access secret stores. |

View File

@ -119,7 +119,9 @@ scopes:
## Example
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=1763" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Related links

View File

@ -37,7 +37,7 @@ A Dapr sidecar can apply a specific configuration by using a ```dapr.io/config``
dapr.io/app-port: "3000"
dapr.io/config: "myappconfig"
```
Note: There are more [Kubernetes annotations]({{< ref "kubernetes-annotations.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service.
Note: There are more [Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service.
### Sidecar configuration settings

View File

@ -14,7 +14,10 @@ Using Dapr, you can control how many requests and events will invoke your applic
*Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** middleware. However, there is an important difference between the two approaches. The rate limit middleware is time bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point of time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}).*
Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="764" height="430" src="https://www.youtube.com/embed/yRI5g6o_jp8?t=1710" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Setting app-max-concurrency
@ -58,4 +61,4 @@ To set app-max-concurrency with the Dapr CLI for running on your local dev machi
dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py
```
The above examples will effectively turn your app into a single concurrent service.
The above examples will effectively turn your app into a single concurrent service.

View File

@ -57,4 +57,4 @@ spec:
{{< /tabs >}}
## Related links
- [Dapr Kubernetes pod annotations spec]({{< ref kubernetes-annotations.md >}})
- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})

View File

@ -8,10 +8,13 @@ description: "Restrict what operations *calling* applications can perform, via s
Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application. To limit access to a called application to specific operations and HTTP verbs from the calling applications, you can define an access control policy specification in configuration.
An access control policy is specified in configuration and be applied to Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applicatons to access to the called app.
An access control policy is specified in configuration and applied to the Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applications to access the called app.
Watch this [video](https://youtu.be/j99RN_nxExA?t=1108) on how to apply access control lists for service invocation.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="688" height="430" src="https://www.youtube.com/embed/j99RN_nxExA?start=1108" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## Concepts
@ -194,7 +197,7 @@ spec:
## Hello world examples
These examples show how to apply access control to the [hello world](https://github.com/dapr/quickstarts#quickstarts) quickstart samples where a python app invokes a node.js app.
Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE id for authentication, which means the Sentry service either has to be running locally or deployed to your hosting enviroment such as a Kubernetes cluster.
Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE id for authentication, which means the Sentry service either has to be running locally or deployed to your hosting environment such as a Kubernetes cluster.
The nodeappconfig example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the python app is in the `myDomain` trust domain and `default` namespace. The nodeapp is in the `public` trust domain.
@ -353,4 +356,4 @@ spec:
containers:
- name: python
image: dapriosamples/hello-k8s-python:edge
```
```

View File

@ -7,20 +7,19 @@ description: "How to specify and enable preview features"
---
## Overview
Some features in Dapr are considered experimental when they are first released. These features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration.
Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration.
Currently, preview features are enabled on a per application basis when running on Kubernetes. A global scope may be introduced in the future should there be a use case for it.
Preview features are enabled on a per application basis by setting configuration when running an application instance.
### Current preview features
Below is a list of existing preview features:
- [Actor Reentrancy]({{<ref actor-reentrancy.md>}})
### Preview features
The current list of preview features can be found [here]({{<ref support-preview-features>}}).
## Configuration properties
The `features` section under the `Configuration` spec contains the following properties:
| Property | Type | Description |
|----------------|--------|-------------|
|name|string|The name of the preview feature that will be enabled/disabled
|name|string|The name of the preview feature that is enabled/disabled
|enabled|bool|Boolean specifying if the feature is enabled or disabled
## Enabling a preview feature
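For example, in self-hosted mode this typically means pointing the sidecar at a configuration file that enables the feature when launching your app; a minimal sketch (the configuration file name and app command are placeholders):
```bash
dapr run --app-id myapp --config ./previewConfig.yaml -- python ./app.py
```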

View File

@ -96,7 +96,7 @@ spec:
This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar.
### Scenario 3: Deny access to certain senstive secrets in a secret store
### Scenario 3: Deny access to certain sensitive secrets in a secret store
Define the following `config.yaml`:

View File

@ -31,7 +31,7 @@ minikube config set vm-driver [driver_name]
Use 1.13.x or newer version of Kubernetes with `--kubernetes-version`
```bash
minikube start --cpus=4 --memory=4096 --kubernetes-version=1.16.2 --extra-config=apiserver.authorization-mode=RBAC
minikube start --cpus=4 --memory=4096
```
3. Enable dashboard and ingress addons

View File

@ -1,38 +0,0 @@
---
type: docs
title: "Dapr Kubernetes pod annotations spec"
linkTitle: "Kubernetes annotations"
weight: 50000
description: "The available annotations available when configuring Dapr in your Kubernetes environment"
---
The following table shows all the supported pod Spec annotations supported by Dapr.
| Annotation | Description |
|---------------------------------------------------|-------------|
| `dapr.io/enabled` | Setting this paramater to `true` injects the Dapr sidecar into the pod
| `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on
| `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID
| `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info`
| `dapr.io/config` | Tells Dapr which Configuration CRD to use
| `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false`
| `dapr.io/enable-profiling` | Setting this paramater to `true` starts the Dapr profiling server on port `7777`. Default is `false`
| `dapr.io/api-token-secret` | Tells Dapr which Kubernetes secret to use for token based API authentication. By default this is not set.
| `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http`
| `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`
| `dapr.io/app-ssl` | Tells Dapr to invoke the app over an insecure SSL connection. Applies to both HTTP and gRPC. Traffic between your app and the Dapr sidecar is encrypted with a certificate issued by a non-trusted certificate authority, which is considered insecure. Default is `false`.
| `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090`
| `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-memory-request` | Amount of Memory that the Dapr sidecar requests .See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-liveness-probe-delay-seconds` | Number of seconds after the sidecar container has started before liveness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-liveness-probe-timeout-seconds` | Number of seconds after which the sidecar liveness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-liveness-probe-period-seconds` | How often (in seconds) to perform the sidecar liveness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`
| `dapr.io/sidecar-liveness-probe-threshold` | When the sidecar liveness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unhealthy. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-readiness-probe-delay-seconds` | Number of seconds after the sidecar container has started before readiness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-readiness-probe-timeout-seconds` | Number of seconds after which the sidecar readiness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`
| `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB
| `dapr.io/env` | List of environment variable to be injected into the sidecar. Strings consisting of key=value pairs separated by a comma.

View File

@ -22,7 +22,7 @@ Read [this guide]({{< ref kubernetes-deploy.md >}}) to learn how to deploy Dapr
## Adding Dapr to a Kubernetes deployment
Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. To give your service an `id` and `port` known to Dapr, turn on tracing through configuration and launch the Dapr sidecar container, you annotate your Kubernetes deployment like this. For more information check [dapr annotations]({{< ref kubernetes-annotations.md >}})
Deploying and running a Dapr enabled application into your Kubernetes cluster is as simple as adding a few annotations to the deployment schemes. To give your service an `id` and `port` known to Dapr, turn on tracing through configuration and launch the Dapr sidecar container, you annotate your Kubernetes deployment like this. For more information check [dapr annotations]({{< ref arguments-annotations-overview.md >}})
```yml
annotations:
@ -34,7 +34,7 @@ Deploying and running a Dapr enabled application into your Kubernetes cluster is
## Pulling container images from private registries
Dapr works seamlessly with any user application container image, regardless of its origin. Simply init Dapr and add the [Dapr annotations]({{< ref kubernetes-annotations >}}) to your Kubernetes definition to add the Dapr sidecar.
Dapr works seamlessly with any user application container image, regardless of its origin. Simply init Dapr and add the [Dapr annotations]({{< ref arguments-annotations-overview.md >}}) to your Kubernetes definition to add the Dapr sidecar.
The Dapr control-plane and sidecar images come from the [daprio Docker Hub](https://hub.docker.com/u/daprio) container registry, which is a public registry.

View File

@ -35,7 +35,7 @@ The following Dapr control plane deployments are optional:
## Sidecar resource settings
To set the resource assignments for the Dapr sidecar, see the annotations [here]({{< ref "kubernetes-annotations.md" >}}).
To set the resource assignments for the Dapr sidecar, see the annotations [here]({{< ref "arguments-annotations-overview.md" >}}).
The specific annotations related to resource constraints are:
- `dapr.io/sidecar-cpu-limit`
@ -61,7 +61,9 @@ The CPU and memory limits above account for the fact that Dapr is intended to a
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows for the Dapr control plane to survive node failures and other outages.
HA mode can be enabled with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
For a new Dapr deployment, the HA mode can be set with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
For an existing Dapr deployment, enabling the HA mode requires additional steps. Please refer to [this paragraph]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
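For example, a fresh HA installation with the Dapr CLI is typically a single flag (a sketch, assuming `kubectl` already targets your cluster):
```bash
dapr init -k --enable-ha=true
```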
## Deploying Dapr with Helm
@ -139,6 +141,23 @@ APP ID APP PORT AGE CREATED
nodeapp 3000 16h 2020-07-29 17:16.22
```
### Enabling high-availability in an existing Dapr deployment
Enabling HA mode for an existing Dapr deployment requires two steps.
First, delete the existing placement stateful set:
```bash
kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
```
Second, issue the upgrade command:
```bash
helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
```
The existing placement stateful set must be deleted because, in HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows limited fields in stateful sets to be patched, which would otherwise cause the upgrade of the placement service to fail.
Deletion of the existing placement stateful set is safe. The agents will reconnect and re-register with the newly created placement service, which will persist its table in Raft.
## Recommended security configuration
When properly configured, Dapr ensures secure communication. It can also make your application more secure with a number of built-in features.

View File

@ -11,15 +11,15 @@ description: "Follow these steps to upgrade Dapr on Kubernetes and ensure a smoo
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
- [Helm 3](https://github.com/helm/helm/releases) (if using Helm)
## Upgrade existing cluster to 1.2.0
## Upgrade existing cluster to 1.3.0
There are two ways to upgrade the Dapr control plane on a Kubernetes cluster using either the Dapr CLI or Helm.
### Dapr CLI
The example below shows how to upgrade to version 1.2.0:
The example below shows how to upgrade to version 1.3.0:
```bash
dapr upgrade -k --runtime-version=1.2.0
dapr upgrade -k --runtime-version=1.3.0
```
You can provide all the available Helm chart configurations using the Dapr CLI.
@ -43,7 +43,7 @@ To resolve this issue please run the follow command to upgrade the CustomResourc
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/5a15b3e0f093d2d0938b12f144c7047474a290fe/charts/dapr/crds/configuration.yaml
```
Then proceed with the `dapr upgrade --runtime-version 1.2.0 -k` command as above.
Then proceed with the `dapr upgrade --runtime-version 1.3.0 -k` command as above.
### Helm
@ -81,6 +81,11 @@ From version 1.0.0 onwards, upgrading Dapr using Helm is no longer a disruptive
4. All done!
#### Upgrading existing Dapr to enable high availability mode
Enabling HA mode in an existing Dapr deployment requires additional steps. Please refer to [this paragraph]({{< ref "kubernetes-production.md#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
## Next steps
- [Dapr on Kubernetes]({{< ref kubernetes-overview.md >}})

View File

@ -1,6 +1,6 @@
---
type: docs
title: "Run Dapr in Self Hosted Mode"
title: "Run Dapr in self-hosted mode"
linkTitle: "Self-Hosted"
weight: 1000
description: "How to get Dapr up and running in your local environment"

View File

@ -20,16 +20,16 @@ The Dapr CLI provides an option to initialize Dapr using slim init, without the
dapr init --slim
```
In this mode two different binaries are installed `daprd` and `placement`. The `placement` binary is needed to enable [actors]({{< ref "actors-overview.md" >}}) in a Dapr self-hosted installation.
In this mode two different binaries are installed `daprd` and `placement`. The `placement` binary is needed to enable [actors]({{< ref "actors-overview.md" >}}) in a Dapr self-hosted installation.
In this mode no default components such as Redis are installed for state management or pub/sub. This means that, aside from [Service Invocation]({{< ref "service-invocation-overview.md" >}}), no other building block functionality is available on install out of the box. Users are free to set up their own environment and custom components. Furthermore, actor based service invocation is possible if a state store is configured as explained in the following sections.
## Service invocation
See [this sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) for an example on how to perform service invocation in this mode.
See [this sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) for an example on how to perform service invocation in this mode.
## Enabling state management or pub/sub
See configuring Redis in self-hosted mode [without docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.
See configuring Redis in self-hosted mode [without docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.
## Enabling actors
@ -51,7 +51,7 @@ INFO[0001] leader is established. instance=Nicoletaz-L10.
```
From here on you can follow the sample example created for the [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors), [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor) or [dotnet-sdk]({{< ref "dotnet-actors-howto.md" >}}) for running an application with Actors enabled.
From here on you can follow the sample example created for the [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors), [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor) or [dotnet-sdk]({{< ref "dotnet-actors-howto.md" >}}) for running an application with Actors enabled.
Update the state store configuration files to have the Redis host and password match the setup that you have. Additionally, to enable it as an actor state store, add the metadata piece similar to the [sample Java Redis component](https://github.com/dapr/java-sdk/blob/master/examples/components/state/redis.yaml) definition.

View File

@ -8,7 +8,7 @@ description: "Overview of how to get Dapr running on a Windows/Linux/MacOS machi
## Overview
Dapr can be configured to run in self-hosted mode on your local developer machine or on production VMs. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
Dapr can be configured to run in self-hosted mode on your local developer machine or on production VMs. Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
## Initialization
@ -17,13 +17,13 @@ Dapr can be initialized [with Docker]({{< ref self-hosted-with-docker.md >}}) (d
- A Zipkin container for diagnostics and tracing.
- A default Dapr configuration and components installed in `$HOME/.dapr/` (Mac/Linux) or `%USERPROFILE%\.dapr\` (Windows).
The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is not launched as a container and is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview]({{< ref "actors-overview.md" >}}).
The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is not launched as a container and is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview]({{< ref "actors-overview.md" >}}).
<img src="/images/overview-standalone-docker.png" width=1000 alt="Diagram of Dapr in self-hosted Docker mode" />
## Launching applications with Dapr
You can use the [`dapr run` CLI command]({{< ref dapr-run.md >}}) to a Dapr sidecar process along with your application.
You can use the [`dapr run` CLI command]({{< ref dapr-run.md >}}) to launch a Dapr sidecar process along with your application. Additional arguments and flags can be found [here]({{< ref arguments-annotations-overview.md >}}).
## Name resolution

View File

@ -12,7 +12,7 @@ description: "Follow these steps to upgrade Dapr in self-hosted mode and ensure
{{% alert title="Note" color="warning" %}}
This will remove the default `$HOME/.dapr` directory, binaries and all containers (dapr_redis, dapr_placement and dapr_zipkin). Linux users need to run with `sudo` if the docker command requires it.
{{% /alert %}}
```bash
dapr uninstall --all
```
@ -25,11 +25,11 @@ description: "Follow these steps to upgrade Dapr in self-hosted mode and ensure
dapr init
```
1. Ensure you are using the latest version of Dapr (v1.2) with:
1. Ensure you are using the latest version of Dapr (v1.3) with:
```bash
$ dapr --version
CLI version: 1.2
Runtime version: 1.2
CLI version: 1.3
Runtime version: 1.3
```

View File

@ -99,7 +99,7 @@ Each container will receive a unique IP on that network and be able to communica
[Docker Compose](https://docs.docker.com/compose/) can be used to define multi-container application configurations. If you wish to run multiple apps with Dapr sidecars locally without Kubernetes then it is recommended to use a Docker Compose definition (`docker-compose.yml`).
The syntax and tooling of Docker Compose is outside the scope of this article, however, it is recommended you refer to the [offical Docker documentation](https://docs.docker.com/compose/) for further details.
The syntax and tooling of Docker Compose is outside the scope of this article, however, it is recommended you refer to the [official Docker documentation](https://docs.docker.com/compose/) for further details.
In order to run your applications using Dapr and Docker Compose you'll need to define the sidecar pattern in your `docker-compose.yml`. For example:

View File

@ -12,16 +12,15 @@ description: "How to install Fluentd, Elastic Search, and Kibana to search logs
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [Helm 3](https://helm.sh/)
## Install Elastic search and Kibana
1. Create namespace for monitoring tool and add Helm repo for Elastic Search
1. Create a Kubernetes namespace for monitoring tools
```bash
kubectl create namespace dapr-monitoring
```
2. Add Elastic helm repo
2. Add the helm repo for Elastic Search
```bash
helm repo add elastic https://helm.elastic.co
@ -30,23 +29,23 @@ description: "How to install Fluentd, Elastic Search, and Kibana to search logs
3. Install Elastic Search using Helm
By default the chart creates 3 replicas which must be on different nodes. If your cluster has less than 3 nodes, specify a lower number of replicas. For example, this sets it to 1:
By default, the chart creates 3 replicas which must be on different nodes. If your cluster has fewer than 3 nodes, specify a smaller number of replicas. For example, this sets the number of replicas to 1:
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1
```
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1
```
Otherwise:
Otherwise:
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring
```
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring
```
If you are using minikube or want to disable persistent volumes for development purposes, you can disable it by using the following command:
If you are using minikube or simply want to disable persistent volumes for development purposes, you can do so by using the following command:
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false,replicas=1
```
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false,replicas=1
```
4. Install Kibana
@ -54,12 +53,10 @@ helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persis
helm install kibana elastic/kibana -n dapr-monitoring
```
5. Validation
Ensure Elastic Search and Kibana are running in your Kubernetes cluster.
5. Ensure that Elastic Search and Kibana are running in your Kubernetes cluster
```bash
kubectl get pods -n dapr-monitoring
$ kubectl get pods -n dapr-monitoring
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 6m58s
kibana-kibana-95bc54b89-zqdrk 1/1 Running 0 4m21s
@ -69,30 +66,29 @@ helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persis
1. Install config map and Fluentd as a daemonset
Download these config files:
- [fluentd-config-map.yaml](/docs/fluentd-config-map.yaml)
- [fluentd-dapr-with-rbac.yaml](/docs/fluentd-dapr-with-rbac.yaml)
Download these config files:
- [fluentd-config-map.yaml](/docs/fluentd-config-map.yaml)
- [fluentd-dapr-with-rbac.yaml](/docs/fluentd-dapr-with-rbac.yaml)
> Note: If you already have Fluentd running in your cluster, please enable the nested json parser to parse JSON formatted log from Dapr.
> Note: If you already have Fluentd running in your cluster, please enable the nested json parser so that it can parse JSON-formatted logs from Dapr.
Apply the configurations to your cluster:
Apply the configurations to your cluster:
```bash
kubectl apply -f ./fluentd-config-map.yaml
kubectl apply -f ./fluentd-dapr-with-rbac.yaml
```
```bash
kubectl apply -f ./fluentd-config-map.yaml
kubectl apply -f ./fluentd-dapr-with-rbac.yaml
```
2. Ensure that Fluentd is running as a daemonset; the number of instances should be the same as the number of cluster nodes. In the example below we only have 1 node.
```bash
kubectl get pods -n kube-system -w
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-cxjxk 1/1 Running 0 4m41s
coredns-6955765f44-jlskv 1/1 Running 0 4m41s
etcd-m01 1/1 Running 0 4m48s
fluentd-sdrld 1/1 Running 0 14s
```
2. Ensure that Fluentd is running as a daemonset. The number of FluentD instances should be the same as the number of cluster nodes. In the example below, there is only one node in the cluster:
```bash
$ kubectl get pods -n kube-system -w
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-cxjxk 1/1 Running 0 4m41s
coredns-6955765f44-jlskv 1/1 Running 0 4m41s
etcd-m01 1/1 Running 0 4m48s
fluentd-sdrld 1/1 Running 0 14s
```
## Install Dapr with JSON formatted logs
@ -106,80 +102,83 @@ fluentd-sdrld 1/1 Running 0 14s
2. Enable JSON formatted log in Dapr sidecar
Add `dapr.io/log-as-json: "true"` annotation to your deployment yaml.
Add the `dapr.io/log-as-json: "true"` annotation to your deployment yaml. For example:
Example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pythonapp
namespace: default
labels:
app: python
spec:
replicas: 1
selector:
matchLabels:
app: python
template:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: pythonapp
namespace: default
labels:
app: python
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "pythonapp"
dapr.io/log-as-json: "true"
...
```
spec:
replicas: 1
selector:
matchLabels:
app: python
template:
metadata:
labels:
app: python
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "pythonapp"
dapr.io/log-as-json: "true"
...
```
## Search logs
> Note: Elastic Search takes some time to index the logs that Fluentd sends.
1. Port-forward to svc/kibana-kibana
1. Port-forward from localhost to `svc/kibana-kibana`
```
$ kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
Handling connection for 5601
Handling connection for 5601
```
```bash
$ kubectl port-forward svc/kibana-kibana 5601 -n dapr-monitoring
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
Handling connection for 5601
Handling connection for 5601
```
2. Browse `http://localhost:5601`
2. Browse to `http://localhost:5601`
3. Click Management -> Index Management
3. Expand the drop-down menu and click **Management → Stack Management**
![kibana management](/images/kibana-1.png)
![Stack Management item under Kibana Management menu options](/images/kibana-1.png)
4. Wait until dapr-* is indexed.
4. On the Stack Management page, select **Data → Index Management** and wait until `dapr-*` is indexed.
![index log](/images/kibana-2.png)
![Index Management view on Kibana Stack Management page](/images/kibana-2.png)
5. Once dapr-* indexed, click Kibana->Index Patterns and Create Index Pattern
5. Once `dapr-*` is indexed, click on **Kibana → Index Patterns** and then the **Create index pattern** button.
![create index pattern](/images/kibana-3.png)
![Kibana create index pattern button](/images/kibana-3.png)
6. Define index pattern - type `dapr*` in index pattern
6. Define a new index pattern by typing `dapr*` into the **Index Pattern name** field, then click the **Next step** button to continue.
![define index pattern](/images/kibana-4.png)
![Kibana define an index pattern page](/images/kibana-4.png)
7. Select time stamp filed: `@timestamp`
7. Configure the primary time field to use with the new index pattern by selecting the `@timestamp` option from the **Time field** drop-down. Click the **Create index pattern** button to complete creation of the index pattern.
![timestamp](/images/kibana-5.png)
![Kibana configure settings page for creating an index pattern](/images/kibana-5.png)
8. Confirm that `scope`, `type`, `app_id`, `level`, etc are being indexed.
8. The newly created index pattern should be shown. Confirm that the fields of interest such as `scope`, `type`, `app_id`, `level`, etc. are being indexed by using the search box in the **Fields** tab.
> Note: if you cannot find the indexed field, please wait. it depends on the volume of data and resource size where elastic search is running.
> Note: If you cannot find the indexed field, please wait. The time it takes to search across all indexed fields depends on the volume of data and the size of the resources that Elastic Search is running on.
![indexing](/images/kibana-6.png)
![View of created Kibana index pattern](/images/kibana-6.png)
9. Click `discover` icon and search `scope:*`
9. To explore the indexed data, expand the drop-down menu and click **Analytics → Discover**.
> Note: it would take some time to make log searchable based on the data volume and resource.
![Discover item under Kibana Analytics menu options](/images/kibana-7.png)
![discover](/images/kibana-7.png)
10. In the search box, type in a query string such as `scope:*` and click the **Refresh** button to view the results.
> Note: This can take a long time. The time it takes to return all results depends on the volume of data and the size of the resources that Elastic Search is running on.
![Using the search box in the Kibana Analytics Discover page](/images/kibana-8.png)
## References

View File

@ -24,7 +24,7 @@ This document explains how to install it in your cluster, either using a Helm ch
2. Add the New Relic official Helm chart repository following these instructions
3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/install-new-relic/account-setup/license-key):
3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key/):
- Helm 3
```bash
@ -74,5 +74,5 @@ By default, tailing is set to /var/log/containers/*.log. To change this setting,
* [New Relic Account Signup](https://newrelic.com/signup)
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [New Relic Logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)

View File

@ -156,7 +156,7 @@ First you need to connect Prometheus as a data source to Grafana.
1. Find the dashboard that you imported and enjoy
<img src="/images/system-service-dashboard.png" alt="Screenshot of Dapr service dashbaord" width=900>
<img src="/images/system-service-dashboard.png" alt="Screenshot of Dapr service dashboard" width=900>
{{% alert title="Tip" color="primary" %}}
Hover your mouse over the `i` in the corner to see the description of each chart:
@ -173,4 +173,7 @@ First you need to connect Prometheus as a data source to Grafana.
* [Supported Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md)
## Example
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=2577" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

View File

@ -22,7 +22,7 @@ This document explains how to install it in your cluster, either using a Helm ch
2. Add the New Relic official Helm chart repository following [these instructions](https://github.com/newrelic/helm-charts/blob/master/README.md#installing-charts)
3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/install-new-relic/account-setup/license-key):
3. Run the following command to install the New Relic Logging Kubernetes plugin via Helm, replacing the placeholder value YOUR_LICENSE_KEY with your [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key):
```bash
helm install nri-prometheus newrelic/nri-prometheus --set licenseKey=YOUR_LICENSE_KEY
@ -39,5 +39,5 @@ This document explains how to install it in your cluster, either using a Helm ch
* [New Relic Account Signup](https://newrelic.com/signup)
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [New Relic Prometheus OpenMetrics Integration](https://github.com/newrelic/helm-charts/tree/master/charts/nri-prometheus)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)

View File

@ -111,9 +111,12 @@ dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0
```
## Example
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=2577" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=2577" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
## References
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)

View File

@ -7,16 +7,13 @@ description: "Set up Jaeger for distributed tracing"
type: docs
---
Dapr currently supports the Zipkin protocol. Since Jaeger is
compatible with Zipkin, the Zipkin protocol can be used to talk to
Jaeger.
Dapr supports the Zipkin protocol. Since Jaeger is compatible with Zipkin, the Zipkin protocol can be used to communicate with Jaeger.
## Configure self hosted mode
### Setup
The simplest way to start Jaeger is to use the pre-built all-in-one
Jaeger image published to DockerHub:
The simplest way to start Jaeger is to use the pre-built all-in-one Jaeger image published to DockerHub:
```bash
docker run -d --name jaeger \
@ -55,15 +52,19 @@ dapr run --app-id mynode --app-port 3000 node app.js --config config.yaml
```
### Viewing Traces
To view traces, in your browser go to http://localhost:16686 and you will see the Jaeger UI.
To view traces, go to http://localhost:16686 in your browser to see the Jaeger UI.
## Configure Kubernetes
The following steps show you how to configure Dapr to send distributed tracing data to Jaeger running as a container in your Kubernetes cluster, and how to view the traces.
### Setup
First create the following YAML file to install Jaeger
* jaeger-operator.yaml
First, create the following YAML file, named `jaeger-operator.yaml`, to install Jaeger.
#### Development and test
By default, the allInOne Jaeger image uses memory as the backend storage, which is not recommended for a production environment.
```yaml
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
@ -80,7 +81,54 @@ spec:
base-path: /jaeger
```
#### Production
In production, Jaeger uses Elasticsearch as the backend storage. You can create a secret in the Kubernetes cluster to access the Elasticsearch server with access control. See [Configuring and Deploying Jaeger](https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhbjaeger-deploying.html).
```shell
kubectl create secret generic jaeger-secret --from-literal=ES_PASSWORD='xxx' --from-literal=ES_USERNAME='xxx' -n ${NAMESPACE}
```
```yaml
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
name: jaeger
spec:
strategy: production
query:
options:
log-level: info
query:
base-path: /jaeger
collector:
maxReplicas: 5
resources:
limits:
cpu: 500m
memory: 516Mi
storage:
type: elasticsearch
esIndexCleaner:
enabled: false ## turn the job deployment on and off
numberOfDays: 7 ## number of days to wait before deleting a record
schedule: "55 23 * * *" ## cron expression for it to run
image: jaegertracing/jaeger-es-index-cleaner ## image of the job
secretName: jaeger-secret
options:
es:
server-urls: http://elasticsearch:9200
```
The following screenshots show the tracing data stored in Elasticsearch and visualized in Grafana:
![jaeger-storage-es](/images/jaeger_storage_elasticsearch.png)
![grafana](/images/jaeger_grafana.png)
Now, use the above YAML file to install Jaeger
```bash
# Install Jaeger
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts

View File

@ -14,7 +14,7 @@ description: "Set-up New Relic for distributed tracing"
Dapr natively captures metrics and traces that can be sent directly to New Relic. The easiest way to export these is by configuring Dapr to send the traces to [New Relic's Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/report-zipkin-format-traces-trace-api/) using the Zipkin trace format.
In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key).
In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#insights-insert-key).
```yaml
apiVersion: dapr.io/v1alpha1
@ -39,7 +39,7 @@ New Relic Distributed Tracing details
## (optional) New Relic Instrumentation
In order for the integrations to send data to New Relic Telemetry Data Platform, you either need a [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key) or [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key).
In order for the integrations to send data to New Relic Telemetry Data Platform, you either need a [New Relic license key](https://docs.newrelic.com/docs/accounts/accounts-billing/account-setup/new-relic-license-key) or [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#insights-insert-key).
### OpenTelemetry instrumentation
@ -47,13 +47,13 @@ Leverage the different language specific OpenTelemetry implementations, for exam
### New Relic Language agent
Similarly to the OpenTelemetry instrumentation, you can also leverage a New Relic language agent. As an example, the [New Relic agent instrumentation for .NET Core](https://docs.newrelic.com/docs/agents/net-agent/installation/install-docker-container) is part of the Dockerfile. See example [here](https://github.com/harrykimpel/quickstarts/blob/master/distributed-calculator/csharp/Dockerfile).
Similarly to the OpenTelemetry instrumentation, you can also leverage a New Relic language agent. As an example, the [New Relic agent instrumentation for .NET Core](https://docs.newrelic.com/docs/agents/net-agent/other-installation/install-net-agent-docker-container) is part of the Dockerfile. See example [here](https://github.com/harrykimpel/quickstarts/blob/master/distributed-calculator/csharp/Dockerfile).
## (optional) Enable New Relic Kubernetes integration
If Dapr and your applications run in a Kubernetes environment, you can enable additional metrics and logs.
The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/new-relic-prometheus-openmetrics-integration-kubernetes), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs).
The easiest way to install the New Relic Kubernetes integration is to use the [automated installer](https://one.newrelic.com/launcher/nr1-core.settings?pane=eyJuZXJkbGV0SWQiOiJrOHMtY2x1c3Rlci1leHBsb3Jlci1uZXJkbGV0Lms4cy1zZXR1cCJ9) to generate a manifest. It bundles not just the integration DaemonSets, but also other New Relic Kubernetes configurations, like [Kubernetes events](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration), [Prometheus OpenMetrics](https://docs.newrelic.com/docs/integrations/prometheus-integrations/get-started/send-prometheus-metric-data-new-relic/), and [New Relic log monitoring](https://docs.newrelic.com/docs/logs).
### New Relic Kubernetes Cluster Explorer
@ -107,8 +107,8 @@ All the data that is collected from Dapr, Kubernetes or any services that run on
* [New Relic Account Signup](https://newrelic.com/signup)
* [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform)
* [Distributed Tracing](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/get-started/introduction-distributed-tracing)
* [Distributed Tracing](https://docs.newrelic.com/docs/distributed-tracing/concepts/introduction-distributed-tracing/)
* [New Relic Trace API](https://docs.newrelic.com/docs/distributed-tracing/trace-api/introduction-trace-api/)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/)
* [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Performance and Scalability"
linkTitle: "Performance and Scalability"
title: "Performance and scalability statistics of Dapr"
linkTitle: "Performance and scalability"
weight: 700
description: "Benchmarks and guidelines for Dapr building blocks"
---

View File

@ -0,0 +1,16 @@
---
type: docs
title: "Preview features"
linkTitle: "Preview features"
weight: 4000
description: "List of current preview features"
---
Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in before they can be used. The opt-in is specified in Dapr's configuration. See [How-To: Enable preview features]({{<ref preview-features>}}) for more information.
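As a sketch, the opt-in is a `features` entry in a Configuration resource; the feature name below is one of the settings listed in the table that follows, and the resource name is a placeholder:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig        # placeholder configuration name
spec:
  features:
    - name: Actor.Reentrancy # setting name from the table below
      enabled: true
```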
## Current preview features
| Description | Setting | Documentation |
|-------------|---------|---------------|
| Preview feature that enables Actors to be called multiple times in the same call chain allowing call backs between actors. | Actor.Reentrancy | [Actor reentrancy]({{<ref actor-reentrancy>}}) |
| Preview feature that allows Actor reminders to be partitioned across multiple keys in the underlying statestore in order to improve scale and performance. | Actor.TypeMetadata | [How-To: Partition Actor Reminders]({{< ref "howto-actors.md#partitioning-reminders" >}}) |
| Preview feature that enables you to call endpoints using service invocation on gRPC services through Dapr via gRPC proxying, without requiring the use of Dapr SDKs. | proxy.grpc | [How-To: Invoke services using gRPC]({{<ref howto-invoke-services-grpc>}}) |

View File

@ -33,27 +33,29 @@ The table below shows the versions of Dapr releases that have been tested togeth
|--------------------|:--------:|:--------|---------|---------|---------|
| Feb 17th 2021 | 1.0.0</br>| 1.0.0 | Java 1.0.0 </br>Go 1.0.0 </br>PHP 1.0.0 </br>Python 1.0.0 </br>.NET 1.0.0 | 0.6.0 | Unsupported |
| Mar 4th 2021 | 1.0.1</br>| 1.0.1 | Java 1.0.2 </br>Go 1.0.0 </br>PHP 1.0.0 </br>Python 1.0.0 </br>.NET 1.0.0 | 0.6.0 | Unsupported |
| Apr 1st 2021 | 1.1.0</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Supported |
| Apr 6th 2021 | 1.1.1</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Supported |
| Apr 16th 2021 | 1.1.2</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Supported |
| May 26th 2021 | 1.2.0</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Supported (current) |
| Apr 1st 2021 | 1.1.0</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Unsupported |
| Apr 6th 2021 | 1.1.1</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Unsupported |
| Apr 16th 2021 | 1.1.2</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Unsupported |
| May 26th 2021 | 1.2.0</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Supported |
| June 16th 2021 | 1.2.1</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Supported |
| June 16th 2021 | 1.2.2</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Supported |
| July 26th 2021 | 1.3</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Supported (current) |
## Upgrade paths
After the 1.0 release of the runtime, there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example, an upgrade from v1.0 to v1.2 may need to pass through v1.1.
The table below shows the tested upgrade paths for the Dapr runtime. For example you are able to upgrade from 1.0-rc4 to the 1.0 release. Any other combinations of upgrades have not been tested.
The table below shows the tested upgrade paths for the Dapr runtime. Any other combinations of upgrades have not been tested.
General guidance on upgrading can be found for [self hosted mode]({{<ref self-hosted-upgrade>}}) and [Kubernetes]({{<ref kubernetes-upgrade>}}) deployments. It is best to review the target version release notes for specific guidance.
| Current Runtime version | Must upgrade through | Target Runtime version |
|--------------------------|-----------------------|------------------------- |
| 0.11 | N/A | 1.0.1 |
| | 1.0.1 | 1.1.2 |
| 1.0-rc1 to 1.0-rc4 | N/A | 1.0.1 |
| 1.0.0 or 1.0.1 | N/A | 1.1.2 |
| 1.1.0 or 1.1.1 | N/A | 1.1.2 |
| 1.0.0 or 1.0.1 | 1.1.2 | 1.2.0 |
| 1.1.0 to 1.1.2 | N/A | 1.2.0 |
| 1.0.0 or 1.0.1 | N/A | 1.1.2 |
| | 1.1.2 | 1.2.2 |
| | 1.2.2 | 1.3.0 |
| 1.1.0 to 1.1.2 | N/A | 1.2.2 |
| | 1.2.2 | 1.3.0 |
| 1.2.0 to 1.2.2 | N/A | 1.3.0 |
## Feature and deprecations
There is a process for announcing feature deprecations. Deprecations are applied two (2) releases after the release in which they were announced. For example, Feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.

View File

@ -91,7 +91,7 @@ The most common cause of this failure is that a component (such as a state store
To diagnose the root cause:
- Significantly increase the liveness probe delay - [link]({{< ref "kubernetes-annotations.md" >}})
- Significantly increase the liveness probe delay - [link]({{< ref "arguments-annotations-overview.md" >}})
- Set the log level of the sidecar to debug - [link]({{< ref "logs-troubleshooting.md#setting-the-sidecar-log-level" >}})
- Watch the logs for meaningful information - [link]({{< ref "logs-troubleshooting.md#viewing-logs-on-kubernetes" >}})
@ -223,6 +223,6 @@ In order for mDNS to function properly, ensure `Micorosft Content Filter` is ina
- Type `mdatp system-extension network-filter disable` and hit enter.
- Enter your account password.
Microsoft Content Filter is disbaled when the output is "Success".
Microsoft Content Filter is disabled when the output is "Success".
> Some organizations will re-enable the filter from time to time. If you repeatedly encounter app-id values missing, first check to see if the filter has been re-enabled before doing more extensive troubleshooting.
> Some organizations will re-enable the filter from time to time. If you repeatedly encounter app-id values missing, first check to see if the filter has been re-enabled before doing more extensive troubleshooting.

View File

@ -106,7 +106,7 @@ This section will guide you on how to view logs for Dapr system components as we
#### Sidecar Logs
When deployed in Kubernetes, the Dapr sidecar injector will inject an Dapr container named `daprd` into your annotated pod.
When deployed in Kubernetes, the Dapr sidecar injector will inject a Dapr container named `daprd` into your annotated pod.
In order to view logs for the sidecar, simply find the pod in question by running `kubectl get pods`:
```bash

View File

@ -21,7 +21,7 @@ To enable profiling in Standalone mode, pass the `--enable-profiling` and the `-
Note that `profile-port` is not required, and if not provided Dapr will pick an available port.
```bash
dapr run --enable-profiling true --profile-port 7777 python myapp.py
dapr run --enable-profiling --profile-port 7777 python myapp.py
```
### Kubernetes

View File

@ -178,7 +178,16 @@ Creates a persistent reminder for an actor.
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Body:
#### Request Body
A JSON object with the following fields:
| Field | Description |
|-------|--------------|
| dueTime | Specifies the time after which the reminder is invoked. Its format should be the [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format |
| period | Specifies the period between different invocations. Its format should be either the [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format or the ISO 8601 duration format with optional recurrence |
The `period` field supports the `time.Duration` format and the ISO 8601 duration format (with some limitations). Only the ISO 8601 duration format `Rn/PnYnMnWnDTnHnMnS` is supported for `period`. Here `Rn/` specifies that the reminder will be invoked `n` times; it should be a positive integer greater than zero. If certain values are zero, the `period` can be shortened; for example, 10 seconds can be specified in ISO 8601 duration as `PT10S`. If `Rn/` is not specified, the reminder will run an infinite number of times until it is deleted.
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
```json

View File

@ -0,0 +1,54 @@
---
type: docs
title: "Dapr arguments and annotations for daprd, CLI, and Kubernetes"
linkTitle: "Arguments and annotations"
description: "The arguments and annotations available when configuring Dapr in different environments"
weight: 300
aliases:
- "/operations/hosting/kubernetes/kubernetes-annotations/"
---
This table is meant to help users understand the equivalent options for running Dapr sidecars in different contexts: via the [CLI]({{< ref cli-overview.md >}}) directly, via daprd, or on [Kubernetes]({{< ref kubernetes-overview.md >}}) via annotations.
| daprd | Dapr CLI | CLI shorthand | Kubernetes annotations | Description
|----- | ------- | -----------| ----------| ------------ |
| `--allowed-origins` | not supported | | not supported | Allowed HTTP origins (default "*") |
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--app-ssl` | `--app-ssl` | | `dapr.io/app-ssl` | Sets the URI scheme of the app to https and attempts an SSL connection |
| `--components-path` | `--components-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration CRD to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | Increases the maximum size of the request body, in MB, for the Dapr HTTP and gRPC servers to handle uploading of large files. Default is `4` MB |
| not supported | `--image` | | not supported
| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
| `--enable-metrics` | not supported | | configuration spec | Enable Prometheus metrics (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | Enable profiling |
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime mode for Dapr (default "standalone") |
| `--placement-address` | `--placement-address` | | not supported | Addresses for Dapr Actor Placement servers |
| `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default "7777") |
| `--app-protocol` | `--app-protocol` | `-P` | `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http` |
| `--sentry-address` | `--sentry-address` | | not supported | Address for the Sentry CA service |
| `--version` | `--version` | `-v` | not supported | Prints the runtime version |
| not supported | not supported | | `dapr.io/enabled` | Setting this parameter to true injects the Dapr sidecar into the pod |
| not supported | not supported | | `dapr.io/api-token-secret` | Tells Dapr which Kubernetes secret to use for token based API authentication. By default this is not set |
| not supported | not supported | | `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| not supported | not supported | | `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| not supported | not supported | | `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| not supported | not supported | | `dapr.io/sidecar-memory-request` | Amount of Memory that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| not supported | not supported | | `dapr.io/sidecar-liveness-probe-delay-seconds` | Number of seconds after the sidecar container has started before liveness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| not supported | not supported | | `dapr.io/sidecar-liveness-probe-timeout-seconds` | Number of seconds after which the sidecar liveness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| not supported | not supported | | `dapr.io/sidecar-liveness-probe-period-seconds` | How often (in seconds) to perform the sidecar liveness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`
| not supported | not supported | | `dapr.io/sidecar-liveness-probe-threshold` | When the sidecar liveness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unhealthy. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-delay-seconds` | Number of seconds after the sidecar container has started before readiness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-timeout-seconds` | Number of seconds after which the sidecar readiness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| not supported | not supported | | `dapr.io/env` | List of environment variables to be injected into the sidecar. Specified as a string of key=value pairs separated by commas.
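For example, a sketch of how a few of the annotations above map onto a Kubernetes deployment (the app name, port, and image are placeholders):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"    # inject the Dapr sidecar into this pod
        dapr.io/app-id: "myapp"    # equivalent to the --app-id argument
        dapr.io/app-port: "3000"   # equivalent to the --app-port argument
        dapr.io/log-level: "info"  # equivalent to the --log-level argument
    spec:
      containers:
      - name: myapp
        image: myregistry/myapp:latest  # placeholder image
```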

View File

@ -24,6 +24,7 @@ Usage:
dapr [command]
Available Commands:
build-info Print build info of Dapr CLI and runtime
completion Generates shell completion scripts
components List all Dapr components. Supported platforms: Kubernetes
configurations List all Dapr configurations. Supported platforms: Kubernetes
@ -37,13 +38,14 @@ Available Commands:
publish Publish a pub-sub event. Supported platforms: Self-hosted
run Run Dapr and (optionally) your application side by side. Supported platforms: Self-hosted
status Show the health status of Dapr services. Supported platforms: Kubernetes
stop Stop Dapr instances and their associated apps. . Supported platforms: Self-hosted
stop Stop Dapr instances and their associated apps. Supported platforms: Self-hosted
uninstall Uninstall Dapr runtime. Supported platforms: Kubernetes and self-hosted
upgrade Upgrades a Dapr control plane installation in a cluster. Supported platforms: Kubernetes
Flags:
-h, --help help for dapr
-v, --version version for dapr
-h, --help help for dapr
--log-as-json Log output in JSON format
-v, --version version for dapr
Use "dapr [command] --help" for more information about a command.
```
@ -52,6 +54,7 @@ Use "dapr [command] --help" for more information about a command.
You can learn more about each Dapr command from the links below.
- [`dapr build-info`]({{< ref dapr-build-info.md >}})
- [`dapr completion`]({{< ref dapr-completion.md >}})
- [`dapr components`]({{< ref dapr-components.md >}})
- [`dapr configurations`]({{< ref dapr-configurations.md >}})

View File

@ -23,6 +23,7 @@ dapr dashboard [flags]
| Name | Environment Variable | Default | Description |
|------|----------------------|---------|-------------|
| `--address`, `-a` | | `localhost` | Address to listen on. Only accepts IP address or localhost as a value |
| `--help`, `-h` | | | Prints this help message |
| `--kubernetes`, `-k` | | `false` | Opens Dapr dashboard in local browser via local proxy to Kubernetes cluster |
| `--namespace`, `-n` | | `dapr-system` | The namespace where Dapr dashboard is running |
@ -46,6 +47,11 @@ dapr dashboard -p 9999
dapr dashboard -k
```
### Port forward to dashboard service running in Kubernetes on all addresses on a specified port
```bash
dapr dashboard -k -p 9999 --address 0.0.0.0
```
### Port forward to dashboard service running in Kubernetes on a specified port
```bash
dapr dashboard -k -p 9999

View File

@ -26,7 +26,7 @@ dapr invoke [flags]
| `--help`, `-h` | | | Print this help message |
| `--method`, `-m` | | | The method to invoke |
| `--data`, `-d` | | | The JSON serialized data string (optional) |
| `--data-file`, `-f` | | | A file containing the JSON serialized data (optional)
| `--data-file`, `-f` | | | A file containing the JSON serialized data (optional)
| `--verb`, `-v` | | `POST` | The HTTP verb to use |
## Examples

View File

@ -7,7 +7,7 @@ description: "Detailed information on the run CLI command"
## Description
Run Dapr and (optionally) your application side by side.
Run Dapr and (optionally) your application side by side. A full list comparing daprd arguments, CLI arguments, and Kubernetes annotations can be found [here]({{< ref arguments-annotations-overview.md >}}).
## Supported platforms

View File

@ -7,7 +7,13 @@ description: "Detailed information on the upgrade CLI command"
## Description
Upgrade Dapr on supported hosting platforms.
Upgrade or downgrade Dapr on supported hosting platforms.
{{% alert title="Warning" color="warning" %}}
Version steps should be done incrementally, including minor versions, as you upgrade or downgrade.
Prior to downgrading, confirm that components are backwards compatible and that application code does not utilize APIs that are not supported in previous versions of Dapr.
{{% /alert %}}
## Supported platforms
@ -23,8 +29,8 @@ dapr upgrade [flags]
| Name | Environment Variable | Default | Description
| --- | --- | --- | --- |
| `--help`, `-h` | | | Print this help message |
| `--kubernetes`, `-k` | | `false` | Upgrade Dapr in a Kubernetes cluster |
| `--runtime-version` | | `latest` | The version of the Dapr runtime to upgrade to, for example: `1.0.0` |
| `--kubernetes`, `-k` | | `false` | Upgrade/Downgrade Dapr in a Kubernetes cluster |
| `--runtime-version` | | `latest` | The version of the Dapr runtime to upgrade/downgrade to, for example: `1.0.0` |
| `--set` | | | Set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2) |
## Examples
@ -34,15 +40,16 @@ dapr upgrade [flags]
dapr upgrade -k
```
### Upgrade specified version of Dapr runtime in Kubernetes
### Upgrade or downgrade to a specified version of Dapr runtime in Kubernetes
```bash
dapr upgrade -k --runtime-version 1.2
```
### Upgrade specified version of Dapr runtime in Kubernetes with value set
### Upgrade or downgrade to a specified version of Dapr runtime in Kubernetes with value set
```bash
dapr upgrade -k --runtime-version 1.2 --set global.logAsJson=true
```
# Related links
- [Upgrade Dapr on a Kubernetes cluster]({{< ref kubernetes-upgrade.md >}})
- [Upgrade Dapr on a Kubernetes cluster]({{< ref kubernetes-upgrade.md >}})

View File

@ -57,6 +57,7 @@ Table captions:
|------|:----------------:|:-----------------:|--------| ------ |----------|
| [AWS DynamoDB]({{< ref dynamodb.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [AWS S3]({{< ref s3.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [AWS SES]({{< ref ses.md >}}) | | ✅ | Alpha | v1 | 1.4 |
| [AWS SNS]({{< ref sns.md >}}) | | ✅ | Alpha | v1 | 1.0 |
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
@ -82,7 +83,7 @@ Table captions:
### Zeebe (Camunda Cloud)
| Name | Input<br>Binding | Output<br>Binding | Status | Component version | Since |
| Name | Input<br>Binding | Output<br>Binding | Status | Component version | Since |
|------|:----------------:|:-----------------:|--------| --------- | ---------- |
| [Zeebe Command]({{< ref zeebe-command.md >}}) | | ✅ | Alpha | v1 | 1.2 |
| [Zeebe Job Worker]({{< ref zeebe-jobworker.md >}}) | ✅ | | Alpha | v1 | 1.2 |

View File

@ -70,6 +70,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Binding support
This component supports both **input and output** binding interfaces.
This component supports **output binding** with the following operations:
- `create`

View File

@ -22,7 +22,7 @@ spec:
version: v1
metadata:
- name: endpoint
value: http://localhost:8080/v1/graphql
value: http://localhost:8080/v1/graphql
- name: header:x-hasura-access-key
value: adminkey
- name: header:Cache-Control

View File

@ -31,9 +31,9 @@ spec:
## Binding support
This component supports **output binding** with the folowing [HTTP methods/verbs](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html):
This component supports **output binding** with the following [HTTP methods/verbs](https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html):
- `create` : For backward compatability and treated like a post
- `create` : For backward compatibility and treated like a post
- `get` : Read data/records
- `head` : Identical to get except that the server does not return a response body
- `post` : Typically used to create records or send commands
@ -133,7 +133,7 @@ To send data to the HTTP endpoint, invoke the HTTP binding with a `POST`, `PUT`,
{{% alert title="Note" color="primary" %}}
Any metadata field that starts with a capital letter is passed as a request header.
For example, the default content type is `application/json; charset=utf-8`. This can be overriden be setting the `Content-Type` metadata field.
For example, the default content type is `application/json; charset=utf-8`. This can be overridden by setting the `Content-Type` metadata field.
{{% /alert %}}
```json

View File

@ -0,0 +1,105 @@
---
type: docs
title: "AWS SES binding spec"
linkTitle: "AWS SES"
description: "Detailed documentation on the AWS SES binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/ses/"
---
## Component format
To set up an AWS SES binding, create a component of type `bindings.aws.ses`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
See [Authenticating to AWS]({{< ref authenticating-aws.md >}}) for information about authentication-related attributes.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: ses
namespace: default
spec:
type: bindings.aws.ses
version: v1
metadata:
- name: accessKey
value: *****************
- name: secretKey
value: *****************
- name: region
value: "eu-west-1"
- name: sessionToken
value: mysession
- name: emailFrom
value: "sender@example.com"
- name: emailTo
value: "receiver@example.com"
- name: emailCc
value: "cc@example.com"
- name: emailBcc
value: "bcc@example.com"
- name: subject
value: "subject"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
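As a sketch of that recommendation, the AWS keys can be referenced from a secret store instead of being set as plain values (the secret store and secret names below are placeholders):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: ses
  namespace: default
spec:
  type: bindings.aws.ses
  version: v1
  metadata:
  - name: accessKey
    secretKeyRef:
      name: aws-ses-secret       # placeholder secret name
      key: accessKey
  - name: secretKey
    secretKeyRef:
      name: aws-ses-secret
      key: secretKey
  - name: region
    value: "eu-west-1"
auth:
  secretStore: my-secret-store   # placeholder secret store component name
```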
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| region | Y | Output | The specific AWS region | `"eu-west-1"` |
| accessKey | Y | Output | The AWS Access Key to access this resource | `"key"` |
| secretKey | Y | Output | The AWS Secret Access Key to access this resource | `"secretAccessKey"` |
| sessionToken | N | Output | The AWS session token to use | `"sessionToken"` |
| emailFrom | N | Output | If set, this specifies the email address of the sender. See [also](#example-request) | `"me@example.com"` |
| emailTo | N | Output | If set, this specifies the email address of the receiver. See [also](#example-request) | `"me@example.com"` |
| emailCc | N | Output | If set, this specifies the email address to CC in. See [also](#example-request) | `"me@example.com"` |
| emailBcc | N | Output | If set, this specifies the email address to BCC in. See [also](#example-request) | `"me@example.com"` |
| subject | N | Output | If set, this specifies the subject of the email message. See [also](#example-request) | `"subject of mail"` |
## Binding support
This component supports **output binding** with the following operations:
- `create`
## Example request
You can specify any of the following optional metadata properties with each request:
- `emailFrom`
- `emailTo`
- `emailCc`
- `emailBcc`
- `subject`
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the `emailFrom`, `emailTo`, `emailCc`, `emailBcc` and `subject` fields.
The `emailTo`, `emailCc` and `emailBcc` fields can contain multiple email addresses separated by a semicolon.
Example:
```json
{
"operation": "create",
"metadata": {
"emailTo": "dapr-smtp-binding@example.net",
"emailCc": "cc1@example.net",
"subject": "Email subject"
},
"data": "Testing Dapr SMTP Binding"
}
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -42,6 +42,8 @@ spec:
value: "bcc@example.com"
- name: subject
value: "subject"
- name: priority
value: "[value 1-5]"
```
{{% alert title="Warning" color="warning" %}}
@ -62,6 +64,7 @@ The example configuration shown above, contain a username and password as plain-
| emailCc | N | Output | If set, this specifies the email address to CC in. See [also](#example-request) | `"me@example.com"` |
| emailBcc | N | Output | If set, this specifies the email address to BCC in. See [also](#example-request) | `"me@example.com"` |
| subject | N | Output | If set, this specifies the subject of the email message. See [also](#example-request) | `"subject of mail"` |
| priority | N | Output | If set, this specifies the priority (X-Priority) of the email message, from 1 (lowest) to 5 (highest) (default value: 3). See [also](#example-request) | `"1"` |
## Binding support
@ -78,6 +81,7 @@ You can specify any of the following optional metadata properties with each requ
- `emailCC`
- `emailBCC`
- `subject`
- `priority`
When sending an email, the metadata in the configuration and in the request is combined. The combined set of metadata must contain at least the `emailFrom`, `emailTo` and `subject` fields.
@ -90,7 +94,8 @@ Example:
"metadata": {
"emailTo": "dapr-smtp-binding@example.net",
"emailCC": "cc1@example.net; cc2@example.net",
"subject": "Email subject"
"subject": "Email subject",
"priority: "1"
},
"data": "Testing Dapr SMTP Binding"
}

View File

@ -35,10 +35,10 @@ spec:
| Field | Required | Binding support | Details | Example |
|-------------------------|:--------:|------------|-----|---------|
| gatewayAddr | Y | Output | Zeebe gateway address | `localhost:26500` |
| gatewayKeepAlive | N | Output | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | `45s` |
| usePlainTextConnection | N | Output | Whether to use a plain text connection or not | `true,false` |
| caCertificatePath | N | Output | The path to the CA cert | `/path/to/ca-cert` |
| gatewayAddr | Y | Output | Zeebe gateway address | `localhost:26500` |
| gatewayKeepAlive | N | Output | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | `45s` |
| usePlainTextConnection | N | Output | Whether to use a plain text connection or not | `true,false` |
| caCertificatePath | N | Output | The path to the CA cert | `/path/to/ca-cert` |
## Binding support
@ -59,7 +59,7 @@ This component supports **output binding** with the following operations:
### Output binding
Zeebe uses gRPC under the hood for the Zeebe client we use in this binding. Please consult the [gRPC API reference](https://stage.docs.zeebe.io/reference/grpc.html) for more information.
Zeebe uses gRPC under the hood for the Zeebe client we use in this binding. Please consult the [gRPC API reference](https://stage.docs.zeebe.io/reference/grpc.html) for more information.
#### topology
@ -161,7 +161,7 @@ The response values are:
- `key` - the unique key identifying the deployment
- `processes` - a list of deployed processes
- `bpmnProcessId` - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific
- `bpmnProcessId` - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific
process definition
- `version` - the assigned process version
- `processDefinitionKey` - the assigned key, which acts as a unique identifier for this process
@ -169,7 +169,7 @@ The response values are:
#### create-instance
The `create-instance` operation creates and starts an instance of the specified process. The process definition to use to create the instance can be
The `create-instance` operation creates and starts an instance of the specified process. The process definition to use to create the instance can be
specified either using its unique key (as returned by the `deploy-process` operation), or using the BPMN process ID and a version.
Note that only processes with none start events can be started through this command.
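For illustration, a minimal `create-instance` request sketch is shown below; the `bpmnProcessId`, `version` and `variables` field names and values are assumptions derived from the description above, not the exact reference payload:

```json
{
  "data": {
    "bpmnProcessId": "products-process",
    "version": 1,
    "variables": {
      "productId": "some-product-id"
    }
  },
  "metadata": {},
  "operation": "create-instance"
}
```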
@ -296,7 +296,7 @@ To perform a `set-variables` operation, invoke the Zeebe command binding with a
The data parameters are:
- `elementInstanceKey` - the unique identifier of a particular element; can be the process instance key (as
- `elementInstanceKey` - the unique identifier of a particular element; can be the process instance key (as
obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
- `local` - (optional, default: `false`) if true, the variables will be merged strictly into the local scope (as indicated by
elementInstanceKey); this means the variables are not propagated to upper scopes.
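Putting these parameters together, a minimal request sketch could look like the following; the `variables` field carrying the key/value pairs to set is an assumed addition, and all values are placeholders:

```json
{
  "data": {
    "elementInstanceKey": 2251799813686183,
    "local": false,
    "variables": {
      "productId": "some-product-id"
    }
  },
  "metadata": {},
  "operation": "set-variables"
}
```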
@ -369,7 +369,7 @@ The data parameters are:
- `messageName` - the name of the message
- `correlationKey` - (optional) the correlation key of the message
- `timeToLive` - (optional) how long the message should be buffered on the broker
- `messageId` - (optional) the unique ID of the message; can be omitted; only useful to ensure that only one message with the given ID will ever
- `messageId` - (optional) the unique ID of the message; can be omitted; only useful to ensure that only one message with the given ID will ever
be published (during its lifetime)
- `variables` - (optional) the message variables as a JSON document; to be valid, the root of the document must be an object, e.g. { "a": "foo" }.
[ "foo" ] would not be valid
@ -390,7 +390,7 @@ The response values are:
#### activate-jobs
The `activate-jobs` operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to
The `activate-jobs` operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to
the client as they are activated.
To perform a `activate-jobs` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
@ -419,7 +419,7 @@ The data parameters are:
- `maxJobsToActivate` - the maximum jobs to activate by this request
- `timeout` - (optional, default: 5 minutes) a job returned after this call will not be activated by another call until the timeout has been reached
- `workerName` - (optional, default: `default`) the name of the worker activating the jobs, mostly used for logging purposes
- `fetchVariables` - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the
- `fetchVariables` - (optional) a list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the
scope of the job will be returned
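Combining these parameters, a request sketch could look like this; the `timeout` duration format and the list form of `fetchVariables` are assumptions, and all values are placeholders:

```json
{
  "data": {
    "jobType": "fetch-products",
    "maxJobsToActivate": 5,
    "timeout": "5m",
    "workerName": "products-worker",
    "fetchVariables": ["productId", "productName"]
  },
  "metadata": {},
  "operation": "activate-jobs"
}
```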
##### Response
@ -429,7 +429,7 @@ The binding returns a JSON with the following response:
```json
[
{
}
]
```
@ -482,8 +482,8 @@ The binding does not return a response body.
#### fail-job
The `fail-job` operation marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the
The `fail-job` operation marks the job as failed; if the retries argument is positive, then the job will be immediately activatable again, and a
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the
job will not be activatable until the incident is resolved.
To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
@ -493,7 +493,7 @@ To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST
"data": {
"jobKey": 2251799813685739,
"retries": 5,
"errorMessage": "some error occured"
"errorMessage": "some error occurred"
},
"metadata": {},
"operation": "fail-job"
@ -504,7 +504,7 @@ The data parameters are:
- `jobKey` - the unique job identifier, as obtained when activating the job
- `retries` - the amount of retries the job should have left
- `errorMessage` - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an
- `errorMessage` - (optional) a message describing why the job failed; this is particularly useful if a job runs out of retries and an
incident is raised, as this message can help explain why the incident was raised
##### Response
@ -513,7 +513,7 @@ The binding does not return a response body.
#### update-job-retries
The `update-job-retries` operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the
The `update-job-retries` operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the
underlying problem be solved.
To perform a `update-job-retries` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
@ -540,7 +540,7 @@ The binding does not return a response body.
#### throw-error
The `throw-error` operation throws an error to indicate that a business error occurred while processing the job. The error is identified
The `throw-error` operation throws an error to indicate that a business error occurred while processing the job. The error is identified
by an error code and is handled by an error catch event in the process with the same error code.
To perform a `throw-error` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
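A minimal sketch of such a body, following the pattern of the `fail-job` example above; the `errorCode` field name and all values are assumptions for illustration:

```json
{
  "data": {
    "jobKey": 2251799813685739,
    "errorCode": "product-not-found",
    "errorMessage": "The product was not found"
  },
  "metadata": {},
  "operation": "throw-error"
}
```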

View File

@ -53,19 +53,19 @@ spec:
| Field | Required | Binding support | Details | Example |
|-------------------------|:--------:|------------|-----|---------|
| gatewayAddr | Y | Input | Zeebe gateway address | `localhost:26500` |
| gatewayKeepAlive | N | Input | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | `45s` |
| usePlainTextConnection | N | Input | Whether to use a plain text connection or not | `true,false` |
| caCertificatePath | N | Input | The path to the CA cert | `/path/to/ca-cert` |
| workerName | N | Input | The name of the worker activating the jobs, mostly used for logging purposes | `products-worker` |
| workerTimeout | N | Input | A job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes | `5m` |
| requestTimeout | N | Input | The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds | `30s` |
| jobType | Y | Input | the job type, as defined in the BPMN process (e.g. `<zeebe:taskDefinition type="fetch-products" />`) | `fetch-products` |
| maxJobsActive | N | Input | Set the maximum number of jobs which will be activated for this worker at the same time. Defaults to 32 | `32` |
| concurrency | N | Input | The maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4 | `4` |
| pollInterval | N | Input | Set the maximal interval between polling for new jobs. Defaults to 100 milliseconds | `100ms` |
| pollThreshold | N | Input | Set the threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3 | `0.3` |
| fetchVariables | N | Input | A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned | `productId, productName, productKey` |
| gatewayAddr | Y | Input | Zeebe gateway address | `localhost:26500` |
| gatewayKeepAlive | N | Input | Sets how often keep alive messages should be sent to the gateway. Defaults to 45 seconds | `45s` |
| usePlainTextConnection | N | Input | Whether to use a plain text connection or not | `true,false` |
| caCertificatePath | N | Input | The path to the CA cert | `/path/to/ca-cert` |
| workerName | N | Input | The name of the worker activating the jobs, mostly used for logging purposes | `products-worker` |
| workerTimeout | N | Input | A job returned after this call will not be activated by another call until the timeout has been reached; defaults to 5 minutes | `5m` |
| requestTimeout | N | Input | The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, a default timeout is used. If the requestTimeout < 0, long polling is disabled and the request is completed immediately, even when no job is activated. Defaults to 10 seconds | `30s` |
| jobType | Y | Input | the job type, as defined in the BPMN process (e.g. `<zeebe:taskDefinition type="fetch-products" />`) | `fetch-products` |
| maxJobsActive | N | Input | Set the maximum number of jobs which will be activated for this worker at the same time. Defaults to 32 | `32` |
| concurrency | N | Input | The maximum number of concurrent spawned goroutines to complete jobs. Defaults to 4 | `4` |
| pollInterval | N | Input | Set the maximal interval between polling for new jobs. Defaults to 100 milliseconds | `100ms` |
| pollThreshold | N | Input | Set the threshold of buffered activated jobs before polling for new jobs, i.e. threshold * maxJobsActive. Defaults to 0.3 | `0.3` |
| fetchVariables | N | Input | A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned | `productId, productName, productKey` |
## Binding support
@ -75,10 +75,10 @@ This component supports **input** binding interfaces.
#### Variables
The Zeebe process engine handles the process state as well as process variables, which can be passed
The Zeebe process engine handles the process state as well as process variables, which can be passed
on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
the `fetchVariables` metadata field. The process engine will then pass these variables with their current
the `fetchVariables` metadata field. The process engine will then pass these variables with their current
values to the job worker implementation.
If the binding registers the three variables `productId`, `productName` and `productKey`, then the worker will
@ -86,9 +86,9 @@ be called with the following JSON body:
```json
{
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
"productId": "some-product-id",
"productName": "some-product-name",
"productKey": "some-product-key"
}
```

View File

@ -61,7 +61,7 @@ spec:
my_claim := jwt.payload["my-claim"]
}
jwt = { "payload": payload } {
auth_header := input.request.headers["authorization"]
auth_header := input.request.headers["Authorization"]
[_, jwt] := split(auth_header, " ")
[_, payload, _] := io.jwt.decode(jwt)
}
@ -75,7 +75,7 @@ You can prototype and experiment with policies using the [official opa playgroun
|--------|---------|---------|
| rego | The Rego policy language | See above |
| defaultStatus | The status code to return for denied responses | `"403"` |
| includedHeaders | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
| includedHeaders | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
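For illustration, if `includedHeaders` were set to `"authorization, x-my-custom-header"`, those request headers would appear in the policy input. A sketch of such an input is shown below; the `method` and `path` fields are plausible request attributes included only for illustration, and all values are placeholders:

```json
{
  "request": {
    "method": "GET",
    "path": "/api/orders",
    "headers": {
      "Authorization": "Bearer <token>",
      "x-my-custom-header": "some-value"
    }
  }
}
```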
## Dapr configuration
@ -99,7 +99,7 @@ This middleware supplies a [`HTTPRequest`](#httprequest) as input.
### HTTPRequest
The `HTTPRequest` input contains all the revelant information about an incoming HTTP Request except it's body.
The `HTTPRequest` input contains all the relevant information about an incoming HTTP request except its body.
```go
type Input struct {

View File

@ -97,7 +97,7 @@ spec:
meta:
DAPR_METRICS_PORT: "${DAPR_METRICS_PORT}"
DAPR_PROFILE_PORT: "${DAPR_PROFILE_PORT}"
daprPortMetaKey: "DAPR_PORT"
daprPortMetaKey: "DAPR_PORT"
queryOptions:
useCache: true
filter: "Checks.ServiceTags contains dapr"

View File

@ -37,6 +37,12 @@ spec:
value: "0"
- name: concurrencyMode
value: parallel
- name: backOffPolicy
value: "exponential"
- name: backOffInitialInterval
value: "100"
- name: backOffMaxRetries
value: "16"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -51,10 +57,24 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html). Defaults to `"true"` | `"true"`, `"false"`
| autoAck | N | Whether or not the queue consumer should [auto-ack](https://www.rabbitmq.com/confirms.html) messages. Defaults to `"false"` | `"true"`, `"false"`
| deliveryMode | N | Persistence mode when publishing messages. Defaults to `"0"`. RabbitMQ treats `"2"` as persistent, all other numbers as non-persistent | `"0"`, `"2"`
| requeueInFailure | N | Whether or not to requeue when sending a [negative acknolwedgement](https://www.rabbitmq.com/nack.html) in case of a failure. Defaults to `"false"` | `"true"`, `"false"`
| requeueInFailure | N | Whether or not to requeue when sending a [negative acknowledgement](https://www.rabbitmq.com/nack.html) in case of a failure. Defaults to `"false"` | `"true"`, `"false"`
| prefetchCount | N | Number of messages to [prefetch](https://www.rabbitmq.com/consumer-prefetch.html). Consider changing this to a non-zero value for production environments. Defaults to `"0"`, which means that all available messages will be pre-fetched. | `"2"`
| reconnectWait | N | How long to wait (in seconds) before reconnecting if a connection failure occurs | `"0"`
| concurrencyMode | N | `parallel` is the default, and allows processing multiple messages in parallel (limited by the `app-max-concurrency` annotation, if configured). Set to `single` to disable parallel processing. In most situations there's no reason to change this. | `parallel`, `single`
| backOffPolicy | N | Retry policy. `"constant"` is a backoff policy that always returns the same backoff delay. `"exponential"` is a backoff policy that increases the backoff period for each retry attempt using a randomization function that grows exponentially. Defaults to `"constant"`. | `constant`, `exponential` |
| backOffDuration | N | The fixed backoff interval; only takes effect when the policy is constant. There are two valid formats: a duration with a unit suffix, or a plain number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"5s"`. | `"5s"`, `"5000"` |
| backOffInitialInterval | N | The initial backoff interval on retry. Only takes effect when the policy is exponential. There are two valid formats: a duration with a unit suffix, or a plain number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"500"` | `"50"` |
| backOffMaxInterval | N | The maximum backoff interval on retry. Only takes effect when the policy is exponential. There are two valid formats: a duration with a unit suffix, or a plain number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"60s"` | `"60000"` |
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means the component will not retry processing the message. `"-1"` will retry indefinitely until the message is processed or the application is shut down. Any positive number is treated as the maximum retry count. | `"3"` |
| backOffRandomizationFactor | N | Randomization factor, between 0 and 1 (including 0 but excluding 1). Randomized interval = RetryInterval * (1 ± backOffRandomizationFactor). Defaults to `"0.5"`. | `"0.5"` |
| backOffMultiplier | N | Backoff multiplier for the policy. Increments the interval by multiplying it with the multiplier. Defaults to `"1.5"` | `"1.5"` |
| backOffMaxElapsedTime | N | The maximum elapsed time after which the exponential backoff stops retrying. There are two valid formats: a duration with a unit suffix, or a plain number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"15m"` | `"15m"` |
### Backoff policy introduction
The backoff retry strategy instructs the Dapr sidecar how to resend a message. By default, the retry strategy is turned off, which means the sidecar sends a message to the service once. When the service returns a result, the message is marked as consumed regardless of whether it was processed correctly. This assumes `autoAck` and `requeueInFailure` are both set to false (if `requeueInFailure` is set to true, the message will get a second chance).
In some cases, however, you may want Dapr to retry pushing the message with an (exponential or constant) backoff strategy until the message is processed normally or the number of retries is exhausted. This can be useful when your service breaks down abnormally but the sidecar is not stopped with it. Adding a backoff policy retries the message delivery during the service downtime, instead of marking these messages as consumed.
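For example, with the default `backOffInitialInterval` of 500 milliseconds, `backOffMultiplier` of 1.5 and `backOffRandomizationFactor` of 0.5, the base retry intervals grow roughly as 500 ms, 750 ms, 1125 ms and so on (capped by `backOffMaxInterval`), and each actual delay is randomized to between 50% and 150% of its base value.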
## Create a RabbitMQ server

View File

@ -30,6 +30,6 @@ spec:
```
## Related Links
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retreive a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})

View File

@ -31,6 +31,8 @@ spec:
value: [path to the JSON file]
- name: nestedSeparator
value: ":"
- name: multiValued
value: "false"
```
## Spec metadata fields
@ -38,11 +40,12 @@ spec:
| Field | Required | Details | Example |
|--------------------|:--------:|-------------------------------------------------------------------------|--------------------------|
| secretsFile | Y | The path to the file where secrets are stored | `"path/to/file.json"` |
| nestedSeparator | N | Used by the store when flattening the JSON hierarchy to a map. Defaults to `":"` | `":"` |
| nestedSeparator | N | Used by the store when flattening the JSON hierarchy to a map. Defaults to `":"` | `":"`
| multiValued | N | Allows one level of multi-valued key/value pairs before flattening JSON hierarchy. Defaults to `"false"` | `"true"` |
## Setup JSON file to hold the secrets
Given the following json:
Given the following JSON loaded from `secretsFile`:
```json
{
@ -54,7 +57,7 @@ Given the following json:
}
```
The store will load the file and create a map with the following key value pairs:
If `multiValued` is `"false"`, the store will load the file and create a map with the following key value pairs:
| flattened key | value |
| --- | --- |
@ -62,7 +65,49 @@ The store will load the file and create a map with the following key value pairs
|"connectionStrings:sql" | "your sql connection string" |
|"connectionStrings:mysql"| "your mysql connection string" |
Use the flattened key (`connectionStrings:sql`) to access the secret.
Use the flattened key (`connectionStrings:sql`) to access the secret. The following JSON map is returned:
```json
{
"connectionStrings:sql": "your sql connection string"
}
```
If `multiValued` is `"true"`, you would instead use the top level key. In this example, `connectionStrings` would return the following map:
```json
{
"sql": "your sql connection string",
"mysql": "your mysql connection string"
}
```
Nested structures after the top level will be flattened. In the following example, `connectionStrings` would return the flattened map shown in the response:
JSON from `secretsFile`:
```json
{
"redisPassword": "your redis password",
"connectionStrings": {
"mysql": {
"username": "your mysql username",
"password": "your mysql password"
}
}
}
```
Response:
```json
{
"mysql:username": "your mysql username",
"mysql:password": "your mysql password"
}
```
This is useful in order to mimic secret stores like Vault or Kubernetes that return multiple key/value pairs per secret key.
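As a sketch of how this looks to a caller, a request to the Dapr secrets API such as `GET http://localhost:3500/v1.0/secrets/local-secret-store/connectionStrings` (the store name `local-secret-store` is a placeholder) would, with `multiValued` set to `"true"` and the first JSON file above, return:

```json
{
  "sql": "your sql connection string",
  "mysql": "your mysql connection string"
}
```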
## Related links
- [Secrets building block]({{< ref secrets >}})

Some files were not shown because too many files have changed in this diff