diff --git a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
index a0ef21650..98a499b09 100644
--- a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
+++ b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
@@ -72,19 +72,19 @@ version: 1
common: # optional section for variables shared across apps
resourcesPath: ./app/components # any dapr resources to be shared across apps
env: # any environment variable shared across apps
- - DEBUG: true
+ DEBUG: true
apps:
- appID: webapp # optional
appDirPath: .dapr/webapp/ # REQUIRED
resourcesPath: .dapr/resources # (optional) can be default by convention
configFilePath: .dapr/config.yaml # (optional) can be default by convention too, ignore if file is not found.
- appProtocol: HTTP
+ appProtocol: http
appPort: 8080
appHealthCheckPath: "/healthz"
command: ["python3" "app.py"]
- appID: backend # optional
appDirPath: .dapr/backend/ # REQUIRED
- appProtocol: GRPC
+ appProtocol: grpc
appPort: 3000
unixDomainSocket: "/tmp/test-socket"
env:
@@ -112,7 +112,7 @@ The properties for the Multi-App Run template align with the `dapr run` CLI flag
| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
| `resourcesPath` | N | Path to your Dapr resources. Can be default by convention; ignore if directory isn't found | `./app/components`, `./webapp/components` |
| `configFilePath` | N | Path to your application's configuration file | `./webapp/config.yaml` |
-| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `HTTP`, `GRPC` |
+| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `http`, `grpc` |
| `appPort` | N | The port your application is listening on | `8080`, `3000` |
| `daprHTTPPort` | N | Dapr HTTP port | |
| `daprGRPCPort` | N | Dapr GRPC port | |
diff --git a/daprdocs/content/en/getting-started/install-dapr-cli.md b/daprdocs/content/en/getting-started/install-dapr-cli.md
index 123067e3b..82474d9ae 100644
--- a/daprdocs/content/en/getting-started/install-dapr-cli.md
+++ b/daprdocs/content/en/getting-started/install-dapr-cli.md
@@ -202,7 +202,7 @@ Each release of Dapr CLI includes various OSes and architectures. You can manual
Verify the CLI is installed by restarting your terminal/command prompt and running the following:
```bash
-dapr
+dapr -h
```
**Output:**
diff --git a/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
index 55c5472b0..fd8cb0d37 100644
--- a/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
@@ -90,7 +90,7 @@ dapr run --app-id batch-sdk --app-port 50051 --resources-path ../../../component
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
```python
# Triggered by Dapr input binding
@@ -295,7 +295,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --resources-path ../../../components -- node index.js
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
```javascript
async function start() {
@@ -498,7 +498,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 7002 --resources-path ../../../components -- dotnet run
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
```csharp
app.MapPost("/" + cronBindingName, async () => {
@@ -704,7 +704,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 8080 --resources-path ../../../components -- java -jar target/BatchProcessingService-0.0.1-SNAPSHOT.jar
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
```java
@PostMapping(path = cronBindingPath, consumes = MediaType.ALL_VALUE)
@@ -911,7 +911,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 6002 --dapr-http-port 3502 --dapr-grpc-port 60002 --resources-path ../../../components -- go run .
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
```go
// Triggered by Dapr input binding
diff --git a/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
index e457612a1..9efe06b58 100644
--- a/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
@@ -64,7 +64,7 @@ pip3 install -r requirements.txt
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
+dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@@ -90,7 +90,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
+dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@@ -187,7 +187,7 @@ npm install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
+dapr run --app-id order-processor --resources-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
The expected output:
@@ -209,7 +209,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
+dapr run --app-id order-processor --resources-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
The app will return the updated configuration values:
@@ -309,7 +309,7 @@ dotnet build
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
+dapr run --app-id order-processor-http --resources-path ../../../components/ --app-port 7001 -- dotnet run --project .
```
The expected output:
@@ -331,7 +331,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
+dapr run --app-id order-processor-http --resources-path ../../../components/ --app-port 7001 -- dotnet run --project .
```
The app will return the updated configuration values:
@@ -428,7 +428,7 @@ mvn clean install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
+dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The expected output:
@@ -450,7 +450,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
+dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The app will return the updated configuration values:
@@ -537,7 +537,7 @@ cd configuration/go/sdk/order-processor
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
+dapr run --app-id order-processor --app-port 6001 --resources-path ../../../components -- go run .
```
The expected output:
@@ -560,7 +560,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
+dapr run --app-id order-processor --app-port 6001 --resources-path ../../../components -- go run .
```
The app will return the updated configuration values:
@@ -636,4 +636,4 @@ Join the discussion in our [discord channel](https://discord.com/channels/778680
- [Go](https://github.com/dapr/quickstarts/tree/master/configuration/go/http)
- Learn more about [Configuration building block]({{< ref configuration-api-overview >}})
-{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
\ No newline at end of file
+{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
diff --git a/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
index 9c6460290..660282fe4 100644
--- a/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
@@ -56,7 +56,7 @@ pip3 install -r requirements.txt
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --resources-path ../../../components/ --app-port 5001 -- python3 app.py
+dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6002 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@@ -389,7 +389,7 @@ dotnet build
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --resources-path ../../../components --app-port 7002 -- dotnet run
+dapr run --app-id order-processor --resources-path ../../../components --app-port 7005 -- dotnet run
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
diff --git a/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
index 4f755ded0..ba5f56523 100644
--- a/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
@@ -298,7 +298,7 @@ Dapr invokes an application on any Dapr instance. In the code, the sidecar progr
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
+- [.NET SDK or .NET 7 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
diff --git a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
index d454edc50..f25808520 100644
--- a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
@@ -97,6 +97,12 @@ Expected output:
== APP == Workflow Status: Completed
```
+### (Optional) Step 4: View in Zipkin
+
+If you have Zipkin configured for Dapr locally on your machine, you can view the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
+
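+For reference, Dapr sends traces to Zipkin based on a tracing configuration similar to the following sketch (the configuration name and sampling rate here are illustrative; `dapr init` creates a comparable default configuration):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Configuration
+metadata:
+  name: appconfig
+spec:
+  tracing:
+    samplingRate: "1"
+    zipkin:
+      endpointAddress: "http://localhost:9411/api/v2/spans"
+```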
+

+
### What happened?
When you ran `dapr run --app-id order-processor dotnet run`:
diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md
index 51c09bd08..40eb09427 100644
--- a/daprdocs/content/en/operations/configuration/configuration-overview.md
+++ b/daprdocs/content/en/operations/configuration/configuration-overview.md
@@ -214,7 +214,7 @@ See the [preview features]({{< ref "preview-features.md" >}}) guide for informat
### Example sidecar configuration
-The following yaml shows an example configuration file that can be applied to an applications' Dapr sidecar.
+The following YAML shows an example configuration file that can be applied to an application's Dapr sidecar.
```yml
apiVersion: dapr.io/v1alpha1
@@ -266,15 +266,21 @@ There is a single configuration file called `daprsystem` installed with the Dapr
### Control-plane configuration settings
-A Dapr control plane configuration can configure the following settings:
+A Dapr control plane configuration contains the following sections:
+
+- [`mtls`](#mtls-mutual-tls) for mTLS (Mutual TLS)
+
+### mTLS (Mutual TLS)
+
+The `mtls` section contains properties for mTLS.
| Property | Type | Description |
|------------------|--------|-------------|
-| enabled | bool | Set mtls to be enabled or disabled
-| allowedClockSkew | string | The extra time to give for certificate expiry based on possible clock skew on a machine. Default is 15 minutes.
-| workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours
+| `enabled` | bool | If true, enables mTLS for communication between services and apps in the cluster.
+| `allowedClockSkew` | string | Allowed tolerance when checking the expiration of TLS certificates, to allow for clock skew. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `15m` (15 minutes).
+| `workloadCertTTL` | string | How long a TLS certificate issued by Dapr is valid for. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `24h` (24 hours).
-See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< ref "security-concept.md" >}}) for more information.
+See the [mTLS how-to]({{< ref "mtls.md" >}}) and [security concepts]({{< ref "security-concept.md" >}}) for more information.
### Example control plane configuration
@@ -282,7 +288,7 @@ See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< re
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
- name: default
+ name: daprsystem
namespace: default
spec:
mtls:
diff --git a/daprdocs/content/en/operations/monitoring/metrics/grafana.md b/daprdocs/content/en/operations/monitoring/metrics/grafana.md
index d0442b032..5d3949552 100644
--- a/daprdocs/content/en/operations/monitoring/metrics/grafana.md
+++ b/daprdocs/content/en/operations/monitoring/metrics/grafana.md
@@ -142,6 +142,8 @@ First you need to connect Prometheus as a data source to Grafana.
- Name: `Dapr`
- HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring`
- Default: On
+ - Skip TLS Verify: On
+     - Necessary to save and test the configuration

diff --git a/daprdocs/content/en/operations/monitoring/metrics/prometheus.md b/daprdocs/content/en/operations/monitoring/metrics/prometheus.md
index da29d0315..3c787602f 100644
--- a/daprdocs/content/en/operations/monitoring/metrics/prometheus.md
+++ b/daprdocs/content/en/operations/monitoring/metrics/prometheus.md
@@ -90,7 +90,7 @@ If you are Minikube user or want to disable persistent volume for development pu
```bash
helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring \
- --set alertmanager.persistentVolume.enable=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
+ --set alertmanager.persistence.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
```
3. Validation
@@ -119,4 +119,4 @@ dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0
## References
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
-* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
\ No newline at end of file
+* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
diff --git a/daprdocs/content/en/operations/monitoring/tracing/datadog.md b/daprdocs/content/en/operations/monitoring/tracing/datadog.md
new file mode 100644
index 000000000..3742cf408
--- /dev/null
+++ b/daprdocs/content/en/operations/monitoring/tracing/datadog.md
@@ -0,0 +1,55 @@
+---
+type: docs
+title: "How-To: Set up Datadog for distributed tracing"
+linkTitle: "Datadog"
+weight: 5000
+description: "Set up Datadog for distributed tracing"
+---
+
+Dapr captures metrics and traces that can be sent directly to Datadog through the OpenTelemetry Collector Datadog exporter.
+
+## Configure Dapr tracing with the OpenTelemetry Collector and Datadog
+
+Using the OpenTelemetry Collector Datadog exporter, you can configure Dapr to create traces for each application in your Kubernetes cluster and collect them in Datadog.
+
+> Before you begin, [set up the OpenTelemetry Collector]({{< ref "open-telemetry-collector.md#setting-opentelemetry-collector" >}}).
+
+1. Add your Datadog API key to the `./deploy/open-telemetry-collector-generic-datadog.yaml` file in the `datadog` exporter configuration section:
+ ```yaml
+ data:
+ otel-collector-config:
+ ...
+ exporters:
+ ...
+ datadog:
+ api:
+          key: <YOUR_API_KEY>
+ ```
+
+1. Apply the `opentelemetry-collector` configuration by running the following command.
+
+ ```sh
+ kubectl apply -f ./deploy/open-telemetry-collector-generic-datadog.yaml
+ ```
+
+1. Set up a Dapr configuration file that will turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector (a sketch of this file follows these steps).
+
+    ```sh
+    kubectl apply -f ./deploy/collector-config.yaml
+    ```
+
+1. Apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing.
+
+    ```yml
+    annotations:
+      dapr.io/config: "appconfig"
+    ```
+
+1. Create and configure the application. Once running, telemetry data is sent to Datadog and visible in Datadog APM.
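+
+For reference, here is a sketch of what `./deploy/collector-config.yaml` (applied in step 3 above) may contain: a Dapr Configuration that routes traces to the collector's Zipkin-compatible endpoint. The configuration name, service address, and sampling rate are assumptions.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Configuration
+metadata:
+  name: appconfig
+spec:
+  tracing:
+    samplingRate: "1"
+    zipkin:
+      endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v2/spans"
+```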
+
+
+
+
+## Related Links/References
+
+* [Complete example of setting up Dapr on a Kubernetes cluster](https://github.com/ericmustin/quickstarts/tree/master/hello-kubernetes)
+* [Datadog documentation about OpenTelemetry support](https://docs.datadoghq.com/opentelemetry/)
+* [Datadog Application Performance Monitoring](https://docs.datadoghq.com/tracing/)
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/resiliency/policies.md b/daprdocs/content/en/operations/resiliency/policies.md
index 515e030b0..56ab3cb91 100644
--- a/daprdocs/content/en/operations/resiliency/policies.md
+++ b/daprdocs/content/en/operations/resiliency/policies.md
@@ -12,12 +12,12 @@ Define timeouts, retries, and circuit breaker policies under `policies`. Each po
## Timeouts
-Timeouts can be used to early-terminate long-running operations. If you've exceeded a timeout duration:
+Timeouts are optional policies that can be used to early-terminate long-running operations. If you've exceeded a timeout duration:
- The operation in progress is terminated (if possible).
- An error is returned.
-Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`.
+Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`. Timeouts have no set maximum value.
Example:
@@ -31,6 +31,8 @@ spec:
largeResponse: 10s
```
+If you don't specify a timeout value, the policy does not enforce a timeout and defers to whatever timeout the request client sets.
+
## Retries
With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:
@@ -69,6 +71,8 @@ spec:
maxRetries: -1 # Retry indefinitely
```
+
+
## Circuit Breakers
Circuit Breaker (CB) policies are used when other applications/services/components are experiencing elevated failure rates. CBs monitor the requests and shut off all traffic to the impacted service when a certain criteria is met ("open" state). By doing this, CBs give the service time to recover from their outage instead of flooding it with events. The CB can also allow partial traffic through to see if the system has healed ("half-open" state). Once requests resume being successful, the CB gets into "closed" state and allows traffic to completely resume.
@@ -95,7 +99,7 @@ spec:
## Overriding default retries
-Dapr provides default retries for certain request failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
+Dapr provides default retries for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
> Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.
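+
+For illustration, a resiliency spec that overrides the built-in service-to-service retries with a more robust policy might look like the following sketch (the policy values are examples):
+
+```yaml
+spec:
+  policies:
+    retries:
+      DaprBuiltInServiceRetries: # overrides Dapr's default service-to-service retries
+        policy: constant
+        duration: 5s
+        maxRetries: 5
+```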
diff --git a/daprdocs/content/en/operations/resiliency/resiliency-overview.md b/daprdocs/content/en/operations/resiliency/resiliency-overview.md
index ba63fe137..bb6cdb502 100644
--- a/daprdocs/content/en/operations/resiliency/resiliency-overview.md
+++ b/daprdocs/content/en/operations/resiliency/resiliency-overview.md
@@ -163,14 +163,14 @@ spec:
Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184&v=7D6HOU3Ms6g&feature=youtu.be):
-
+
- - [Policies]({{< ref "policies.md" >}})
- - [Targets]({{< ref "targets.md" >}})
## Next steps
-
+Learn more about resiliency policies and targets:
+ - [Policies]({{< ref "policies.md" >}})
+ - [Targets]({{< ref "targets.md" >}})
Try out one of the Resiliency quickstarts:
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md
index c2677880b..9c3282411 100644
--- a/daprdocs/content/en/operations/support/support-release-policy.md
+++ b/daprdocs/content/en/operations/support/support-release-policy.md
@@ -34,7 +34,11 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|--------------------|:--------:|:--------|---------|---------|---------|
-| February 14 2023 | 1.10.0 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported (current) |
+| March 16 2023 | 1.10.4 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported (current) |
+| March 14 2023 | 1.10.3 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| February 24 2023 | 1.10.2 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| February 20 2023 | 1.10.1 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| February 14 2023   | 1.10.0  | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
| December 2nd 2022 | 1.9.5 | 1.9.1 | Java 1.7.0 Go 1.6.0 PHP 1.1.0 Python 1.8.3 .NET 1.9.0 JS 2.4.2 | 0.11.0 | Supported |
| November 17th 2022 | 1.9.4 | 1.9.1 | Java 1.7.0 Go 1.6.0 PHP 1.1.0 Python 1.8.3 .NET 1.9.0 JS 2.4.2 | 0.11.0 | Supported |
| November 4th 2022 | 1.9.3 | 1.9.1 | Java 1.7.0 Go 1.6.0 PHP 1.1.0 Python 1.8.3 .NET 1.9.0 JS 2.4.2 | 0.11.0 | Supported |
@@ -86,15 +90,18 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| | 1.6.0 | 1.6.2 |
| | 1.6.2 | 1.7.5 |
| | 1.7.5 | 1.8.6 |
-| | 1.8.6 | 1.9.5 |
+| | 1.8.6 | 1.9.6 |
+| | 1.9.6 | 1.10.4 |
| 1.6.0 to 1.6.2 | N/A | 1.7.5 |
| | 1.7.5 | 1.8.6 |
-| | 1.8.6 | 1.9.5 |
+| | 1.8.6 | 1.9.6 |
+| | 1.9.6 | 1.10.4 |
| 1.7.0 to 1.7.5 | N/A | 1.8.6 |
-| | 1.8.6 | 1.9.5 |
-| 1.8.0 to 1.8.6 | N/A | 1.9.5 |
-| 1.9.0 | N/A | 1.9.5 |
-| 1.10.0 | N/A | 1.10.0 |
+| | 1.8.6 | 1.9.6 |
+| | 1.9.6 | 1.10.4 |
+| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
+| 1.9.0 | N/A | 1.9.6 |
+| 1.10.0 | N/A | 1.10.4 |
## Breaking changes and deprecations
@@ -147,6 +154,7 @@ After announcing a future breaking change, the change will happen in 2 releases
| GET /v1.0/shutdown API (Users should use [POST API]({{< ref kubernetes-job.md >}}) instead) | 1.2.0 | 1.4.0 |
| Java domain builder classes deprecated (Users should use [setters](https://github.com/dapr/java-sdk/issues/587) instead) | Java SDK 1.3.0 | Java SDK 1.5.0 |
| Service invocation will no longer provide a default content type header of `application/json` when no content-type is specified. You must explicitly [set a content-type header]({{< ref "service_invocation_api.md#request-contents" >}}) for service invocation if your invoked apps rely on this header. | 1.7.0 | 1.9.0 |
+| gRPC service invocation using the `invoke` method is deprecated. Use proxy mode service invocation instead. See [How-To: Invoke services using gRPC]({{< ref howto-invoke-services-grpc.md >}}) to use the proxy mode. | 1.9.0 | 1.10.0 |
## Upgrade on Hosting platforms
diff --git a/daprdocs/content/en/reference/api/metadata_api.md b/daprdocs/content/en/reference/api/metadata_api.md
index e2adc08de..336711013 100644
--- a/daprdocs/content/en/reference/api/metadata_api.md
+++ b/daprdocs/content/en/reference/api/metadata_api.md
@@ -93,7 +93,8 @@ curl http://localhost:3500/v1.0/metadata
],
"extended": {
"cliPID":"1031040",
- "appCommand":"uvicorn --port 3000 demo_actor_service:app"
+ "appCommand":"uvicorn --port 3000 demo_actor_service:app",
+ "daprRuntimeVersion": "1.10.0"
},
"components":[
{
diff --git a/daprdocs/content/en/reference/api/workflow_api.md b/daprdocs/content/en/reference/api/workflow_api.md
index 4b6d8f259..ebf7ad2c7 100644
--- a/daprdocs/content/en/reference/api/workflow_api.md
+++ b/daprdocs/content/en/reference/api/workflow_api.md
@@ -5,39 +5,95 @@ linkTitle: "Workflow API"
description: "Detailed documentation on the workflow API"
weight: 900
---
-## Component format
-A Dapr `workflow.yaml` component file has the following structure:
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name:
-spec:
- type: workflow.
- version: v1.0-alpha1
- metadata:
- - name:
- value:
- ```
-| Setting | Description |
-| ------- | ----------- |
-| `metadata.name` | The name of the workflow component. |
-| `spec/metadata` | Additional metadata parameters specified by workflow component |
+Dapr provides the ability to interact with workflows through its built-in `dapr` workflow component.
+## Start workflow request
+Start a workflow instance with the given name and instance ID.
-## Supported workflow methods
-
-### POST start workflow request
```bash
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
```
-### POST terminate workflow request
+
+### URL parameters
+
+Parameter | Description
+--------- | -----------
+`workflowComponentName` | Current default is `dapr` for Dapr Workflows
+`workflowName` | Identify the workflow type
+`instanceId` | Unique value created for each run of a specific workflow
+
+### Request content
+
+In the request you can pass along relevant input information that will be passed to the workflow:
+
+```json
+{
+ "input": // argument(s) to pass to the workflow which can be any valid JSON data type (such as objects, strings, numbers, arrays, etc.)
+}
+```
+
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`202` | Accepted
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+The API call will provide a response similar to this:
+
+```json
+{
+ "WFInfo": {
+ "instance_id": "SampleWorkflow"
+ }
+}
+```
+
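+As an illustration, a start request using the built-in `dapr` component might look like this (the workflow name `OrderProcessingWorkflow`, instance ID `1234`, and input payload are hypothetical):
+
+```bash
+curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234/start" \
+  -H "Content-Type: application/json" \
+  -d '{"input": {"Name": "Paperclips", "Quantity": 1}}'
+```
+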
+## Terminate workflow request
+
+Terminate a running workflow instance with the given name and instance ID.
+
```bash
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
```
-### GET workflow request
+
+### URL parameters
+
+Parameter | Description
+--------- | -----------
+`workflowComponentName` | Current default is `dapr` for Dapr Workflows
+`workflowName` | Identify the workflow type
+`instanceId` | Unique value created for each run of a specific workflow
+
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`202` | Accepted
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+The API call will provide a response similar to this:
+
+```bash
+HTTP/1.1 202 Accepted
+Server: fasthttp
+Date: Thu, 12 Jan 2023 21:31:16 GMT
+Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
+Connection: close
+```
+
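+As an illustration, terminating the hypothetical instance `1234` started above:
+
+```bash
+curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/1234/terminate"
+```
+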
+## Get workflow request
+
+Get information about a given workflow instance.
+
```bash
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
```
@@ -50,15 +106,7 @@ Parameter | Description
`workflowName` | Identify the workflow type
`instanceId` | Unique value created for each run of a specific workflow
-
-### Headers
-
-As part of the start HTTP request, the caller can optionally include one or more `dapr-workflow-metadata` HTTP request headers. The format of the header value is a list of `{key}={value}` values, similar to the format for HTTP cookie request headers. These key/value pairs are saved in the workflow instance’s metadata and can be made available for search (in cases where the workflow implementation supports this kind of search).
-
-
-## HTTP responses
-
-### Response codes
+### HTTP response codes
Code | Description
---- | -----------
@@ -66,31 +114,9 @@ Code | Description
`400` | Request was malformed
`500` | Request formatted correctly, error in dapr code or underlying component
-### Examples of response body for each method
+### Response content
-#### POST start workflow response body
-
-```bash
- "WFInfo": {
- "instance_id": "SampleWorkflow"
- }
-```
-
-
-#### POST terminate workflow response body
-
-```bash
-HTTP/1.1 202 Accepted
-Server: fasthttp
-Date: Thu, 12 Jan 2023 21:31:16 GMT
-Content-Type: application/json
-Content-Length: 139
-Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
-Connection: close
-```
-
-
-### GET workflow response body
+The API call will provide a response similar to this:
```bash
HTTP/1.1 202 Accepted
@@ -113,8 +139,31 @@ Connection: close
}
```
+## Component format
+
+A Dapr `workflow.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: <NAME>
+spec:
+  type: workflow.<TYPE>
+  version: v1.0-alpha1
+  metadata:
+  - name: <NAME>
+    value: <VALUE>
+```
+
+| Setting | Description |
+| ------- | ----------- |
+| `metadata.name` | The name of the workflow component. |
+| `spec/metadata` | Additional metadata parameters specified by workflow component |
+
+However, Dapr comes with a built-in `dapr` workflow component built on Dapr Actors; no component file is required to use it.
## Next Steps
- [Workflow API overview]({{< ref workflow-overview.md >}})
-- [Route user to workflow patterns ](todo)
+- [Workflow patterns]({{< ref workflow-patterns.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md b/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md
index e95c6671a..8d19d0b19 100644
--- a/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md
+++ b/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md
@@ -11,10 +11,11 @@ WebAssembly is a way to safely run code compiled in other languages. Runtimes
execute WebAssembly Modules (Wasm), which are most often binaries with a `.wasm`
extension.
-The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to rewrite a
-request URI with custom logic compiled to a Wasm binary. In other words, you
-can extend Dapr using external files that are not pre-compiled into the `daprd`
-binary. Dapr embeds [wazero](https://wazero.io) to accomplish this without CGO.
+The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to manipulate
+an incoming request or serve a response with custom logic compiled to a Wasm
+binary. In other words, you can extend Dapr using external files that are not
+pre-compiled into the `daprd` binary. Dapr embeds [wazero](https://wazero.io)
+to accomplish this without CGO.
Wasm modules are loaded from a filesystem path. On Kubernetes, see [mounting
volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}}) to configure
@@ -28,27 +29,21 @@ kind: Component
metadata:
name: wasm
spec:
- type: middleware.http.wasm.basic
+ type: middleware.http.wasm
version: v1
metadata:
- name: path
- value: "./hello.wasm"
- - name: poolSize
- value: 1
+ value: "./router.wasm"
```
## Spec metadata fields
-Minimally, a user must specify a Wasm binary that contains the custom logic
-used to rewrite requests. An instance of the Wasm binary is not safe to use
-concurrently. The below configuration fields control both the binary to
-instantiate and how large an instance pool to use. A larger pool allows higher
-concurrency while consuming more memory.
+Minimally, a user must specify a Wasm binary that implements the [http-handler](https://http-wasm.io/http-handler/) Application Binary Interface (ABI).
+How to compile this is described later.
| Field | Details | Required | Example |
|----------|----------------------------------------------------------------|----------|----------------|
| path | A relative or absolute path to the Wasm binary to instantiate. | true | "./hello.wasm" |
-| poolSize | Number of concurrent instances of the Wasm binary. Default: 10 | false | 1 |
## Dapr configuration
@@ -64,7 +59,60 @@ spec:
httpPipeline:
handlers:
- name: wasm
- type: middleware.http.wasm.basic
+ type: middleware.http.wasm
+```
+
+*Note*: WebAssembly middleware uses more resources than native middleware, so the
+same logic hits resource constraints faster than it would in native code.
+Production usage should [Control max concurrency]({{< ref control-concurrency.md >}}).
+
+### Generating Wasm
+
+This component lets you manipulate an incoming request or serve a response with
+custom logic compiled using the [http-handler](https://http-wasm.io/http-handler/)
+Application Binary Interface (ABI). The `handle_request` function receives an
+incoming request and can manipulate it or serve a response as necessary.
+
+To compile your Wasm, you must compile source using an http-handler compliant
+guest SDK such as [TinyGo](https://github.com/http-wasm/http-wasm-guest-tinygo).
+
+Here's an example in TinyGo:
+
+```go
+package main
+
+import (
+ "strings"
+
+ "github.com/http-wasm/http-wasm-guest-tinygo/handler"
+ "github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
+)
+
+func main() {
+ handler.HandleRequestFn = handleRequest
+}
+
+// handleRequest implements a simple HTTP router.
+func handleRequest(req api.Request, resp api.Response) (next bool, reqCtx uint32) {
+ // If the URI starts with /host, trim it and dispatch to the next handler.
+ if uri := req.GetURI(); strings.HasPrefix(uri, "/host") {
+ req.SetURI(uri[5:])
+ next = true // proceed to the next handler on the host.
+ return
+ }
+
+ // Serve a static response
+ resp.Headers().Set("Content-Type", "text/plain")
+ resp.Body().WriteString("hello")
+ return // skip the next handler, as we wrote a response.
+}
+```
+
+If using TinyGo, compile as shown below and set the spec metadata field named
+"path" to the location of the output (for example, `router.wasm`):
+
+```bash
+tinygo build -o router.wasm -scheduler=none --no-debug -target=wasi router.go
```
### Generating Wasm
@@ -108,4 +156,4 @@ tinygo build -o example.wasm -scheduler=none --no-debug -target=wasi example.go
- [Middleware]({{< ref middleware.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})
-- [waPC protocol](https://wapc.io/docs/spec/)
+- [Control max concurrency]({{< ref control-concurrency.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
index 96c27bccd..c616f3251 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
@@ -82,7 +82,7 @@ The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref ku
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the `authRequired` field has
been deprecated from the v1.6 release and instead the `authType` field should be used. If `authRequired` is set to `true`, Dapr will attempt to configure `authType` correctly
-based on the value of `saslPassword`. There are four valid values for `authType`: `none`, `password`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
+based on the value of `saslPassword`. There are five valid values for `authType`: `none`, `password`, `certificate`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
#### None
@@ -275,17 +275,11 @@ spec:
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
- value: "password"
- - name: saslUsername # Required if authType is `password`.
- value: "adminuser"
+ value: "certificate"
- name: consumeRetryInterval # Optional.
value: 200ms
- name: version # Optional.
value: 0.10.2.0
- - name: saslPassword # Required if authRequired is `true`.
- secretKeyRef:
- name: kafka-secrets
- key: saslPasswordSecret
- name: maxMessageBytes # Optional.
value: 1024
- name: caCert # Certificate authority certificate.
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
index d225eae49..e57c2aa26 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
@@ -77,7 +77,7 @@ spec:
### Enabling message delivery retries
-The Pulsar pub/sub component has no built-in support for retry strategies. This means that sidecar sends a message to the service only once and is not retried in case of failures. To make Dapr use more spohisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the MQTT pub/sub component. Note that it will be the same Dapr sidecar retrying the redelivery the message to the same app instance and not other instances.
+The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once; the message is not retried in case of failure. To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying to redeliver the message to the same app instance, not other instances.
### Delay queue
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
index 4d715fe8b..9b80c9581 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
@@ -18,8 +18,16 @@ spec:
type: pubsub.rabbitmq
version: v1
metadata:
- - name: host
+ - name: connectionString
value: "amqp://localhost:5672"
+ - name: protocol
+ value: amqp
+ - name: hostname
+ value: localhost
+ - name: username
+ value: username
+ - name: password
+ value: password
- name: consumerID
value: myapp
- name: durable
@@ -48,6 +56,8 @@ spec:
value: 10485760
- name: exchangeKind
value: fanout
+ - name: saslExternal
+ value: false
```
{{% alert title="Warning" color="warning" %}}
@@ -58,7 +68,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| host | Y | Connection-string for the rabbitmq host | `amqp://user:pass@localhost:5672`
+| connectionString | Y* | The RabbitMQ connection string. *Mutually exclusive with the protocol, hostname, username, and password fields | `amqp://user:pass@localhost:5672` |
+| protocol | N* | The RabbitMQ protocol. *Mutually exclusive with the connectionString field | `amqp` |
+| hostname | N* | The RabbitMQ hostname. *Mutually exclusive with the connectionString field | `localhost` |
+| username | N* | The RabbitMQ username. *Mutually exclusive with the connectionString field | `username` |
+| password | N* | The RabbitMQ password. *Mutually exclusive with the connectionString field | `password` |
| consumerID | N | Consumer ID a.k.a consumer tag organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer, i.e. a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the dapr runtime will set it to the dapr application ID. |
| durable | N | Whether or not to use [durable](https://www.rabbitmq.com/queues.html#durability) queues. Defaults to `"false"` | `"true"`, `"false"`
| deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html) Defaults to `"true"` | `"true"`, `"false"`
@@ -73,6 +87,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| maxLen | N | The maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1000"` |
| maxLenBytes | N | Maximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1048576"` |
| exchangeKind | N | Exchange kind of the rabbitmq exchange. Defaults to `"fanout"`. | `"fanout"`,`"topic"` |
+| saslExternal | N | With TLS, should the username be taken from an additional field (e.g. CN.) See [RabbitMQ Authentication Mechanisms](https://www.rabbitmq.com/access-control.html#mechanisms). Defaults to `"false"`. | `"true"`, `"false"` |
| caCert | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n\n-----END RSA PRIVATE KEY-----"`
@@ -121,6 +136,8 @@ spec:
value: 10485760
- name: exchangeKind
value: fanout
+ - name: saslExternal
+ value: false
- name: caCert
value: ${{ myLoadedCACert }}
- name: clientCert
diff --git a/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md b/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md
index ce2e801f4..91ba14867 100644
--- a/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md
+++ b/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md
@@ -9,9 +9,10 @@ aliases:
## Component format
-To setup Azure Key Vault secret store create a component of type `secretstores.azure.keyvault`. See [this guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secretstore configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
-
-See also [configure the component](#configure-the-component) guide in this page.
+To set up the Azure Key Vault secret store, create a component of type `secretstores.azure.keyvault`.
+- See [the secret store components guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secret store configuration.
+- See [the guide on referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
+- See [the Configure the component section](#configure-the-component) below.
```yaml
apiVersion: dapr.io/v1alpha1
@@ -37,7 +38,10 @@ spec:
## Authenticating with Azure AD
-The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component, make sure you've read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document and created an Azure AD application (also called Service Principal). Alternatively, make sure you have created a managed identity for your application platform.
+The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component:
+1. Read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
+1. Create an Azure AD application (also called Service Principal).
+1. Alternatively, create a managed identity for your application platform.
## Spec metadata fields
@@ -49,20 +53,21 @@ The Azure Key Vault secret store component supports authentication with Azure AD
Additionally, you must provide the authentication fields as explained in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
-## Example: Create an Azure Key Vault and authorize a Service Principal
+## Example
### Prerequisites
- Azure Subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
-- The scripts below are optimized for a bash or zsh shell
+- You are using bash or zsh shell
+- You've created an Azure AD application (Service Principal) per the instructions in [Authenticating to Azure]({{< ref authenticating-azure.md >}}). You will need the following values:
-Make sure you have followed the steps in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document to create an Azure AD application (also called Service Principal). You will need the following values:
+ | Value | Description |
+ | ----- | ----------- |
+ | `SERVICE_PRINCIPAL_ID` | The ID of the Service Principal that you created for a given application |
-- `SERVICE_PRINCIPAL_ID`: the ID of the Service Principal that you created for a given application
-
-### Steps
+### Create an Azure Key Vault and authorize a Service Principal
1. Set a variable with the Service Principal that you created:
@@ -70,7 +75,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
```
-2. Set a variable with the location where to create all resources:
+1. Set a variable with the location in which to create all resources:
```sh
LOCATION="[your_location]"
@@ -78,7 +83,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
(You can get the full list of options with: `az account list-locations --output tsv`)
-3. Create a Resource Group, giving it any name you'd like:
+1. Create a Resource Group, giving it any name you'd like:
```sh
RG_NAME="[resource_group_name]"
@@ -88,7 +93,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
| jq -r .id)
```
-4. Create an Azure Key Vault (that uses Azure RBAC for authorization):
+1. Create an Azure Key Vault that uses Azure RBAC for authorization:
```sh
KEYVAULT_NAME="[key_vault_name]"
@@ -99,7 +104,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
--location "${LOCATION}"
```
-5. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
+1. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
In this case, assign the "Key Vault Secrets User" role, which has the "Get secrets" permission over Azure Key Vault.
```sh
@@ -109,15 +114,17 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
```
-Other less restrictive roles like "Key Vault Secrets Officer" and "Key Vault Administrator" can be used as well, depending on your application. For more information about Azure built-in roles for Key Vault see the [Microsoft docs](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
+Other less restrictive roles, like "Key Vault Secrets Officer" and "Key Vault Administrator", can be used, depending on your application. [See Microsoft Docs for more information about Azure built-in roles for Key Vault](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
-## Configure the component
+### Configure the component
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
-To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
+#### Using a client secret
+
+To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory. Use the following template, filling in [the Azure AD application you created]({{< ref authenticating-azure.md >}}):
```yaml
apiVersion: dapr.io/v1alpha1
@@ -138,7 +145,9 @@ spec:
value : "[your_client_secret]"
```
-If you want to use a **certificate** saved on the local disk, instead, use this template, filling in with details of the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
+#### Using a certificate
+
+If you want to use a **certificate** saved on the local disk instead, use the following template. Fill in the details of [the Azure AD application you created]({{< ref authenticating-azure.md >}}):
```yaml
apiVersion: dapr.io/v1alpha1
@@ -161,9 +170,9 @@ spec:
{{% /codetab %}}
{{% codetab %}}
-In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. You will need the details of the Azure AD application that was created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
+In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. Before you start, you need the details of [the Azure AD application you created]({{< ref authenticating-azure.md >}}).
-To use a **client secret**:
+#### Using a client secret
1. Create a Kubernetes secret using the following command:
@@ -176,7 +185,7 @@ To use a **client secret**:
- `[your_k8s_secret_key]` is secret key in the Kubernetes secret store
-2. Create an `azurekeyvault.yaml` component file.
+1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
@@ -203,13 +212,13 @@ To use a **client secret**:
secretStore: kubernetes
```
-3. Apply the `azurekeyvault.yaml` component:
+1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
-To use a **certificate**:
+#### Using a certificate
1. Create a Kubernetes secret using the following command:
@@ -221,7 +230,7 @@ To use a **certificate**:
- `[your_k8s_secret_name]` is secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is secret key in the Kubernetes secret store
-2. Create an `azurekeyvault.yaml` component file.
+1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
@@ -248,16 +257,16 @@ To use a **certificate**:
secretStore: kubernetes
```
-3. Apply the `azurekeyvault.yaml` component:
+1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
-To use **Azure managed identity**:
+#### Using Azure managed identity
1. Ensure your AKS cluster has managed identity enabled and follow the [guide for using managed identities](https://docs.microsoft.com/azure/aks/use-managed-identity).
-2. Create an `azurekeyvault.yaml` component file.
+1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to a particular KeyVault name. The managed identity you will use in a later step must be given read access to this particular KeyVault instance.
@@ -274,12 +283,23 @@ To use **Azure managed identity**:
value: "[your_keyvault_name]"
```
-3. Apply the `azurekeyvault.yaml` component:
+1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
-4. Create and use a managed identity / pod identity by following [this guide](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity). After creating an AKS pod identity, [give this identity read permissions on your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-cli#assign-the-access-policy), and finally in your application deployment inject the pod identity via a label annotation:
+1. Create and assign a managed identity at the pod level via either:
+ - [Azure AD workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) (preferred method)
+ - [Azure AD pod identity](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity)
+
+   **Important**: While both Azure AD pod identity and workload identity are in preview, Azure AD workload identity is planned for general availability (stable state).
+
+1. After creating a workload identity, [give it `read` permissions on your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-cli#assign-the-access-policy).
+1. In your application deployment, inject the identity both:
+   - Via a label annotation, as in the example below
+   - By specifying the Kubernetes service account associated with the desired workload identity, as in the sketch that follows it
```yaml
apiVersion: v1
@@ -290,6 +310,12 @@ To use **Azure managed identity**:
aadpodidbinding: $POD_IDENTITY_NAME
```
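+
+If you use **workload identity** instead of pod identity, the binding is expressed through the pod's service account rather than the `aadpodidbinding` label. A minimal sketch, assuming a hypothetical service account `my-keyvault-sa` that you have federated with your Azure identity (names and image below are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mydaprapp
+  labels:
+    # Opts this pod into Azure AD workload identity
+    azure.workload.identity/use: "true"
+spec:
+  # Service account federated with the Azure identity
+  serviceAccountName: my-keyvault-sa
+  containers:
+  - name: app
+    image: myregistry/myapp:latest
+```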
+#### Using Azure managed identity directly vs. via Azure AD workload identity
+
+When using **managed identity directly**, an app can have multiple identities associated with it, so you must set `azureClientId` to specify which identity to use.
+
+However, when using **managed identity via Azure AD workload identity**, `azureClientId` is not necessary and has no effect. Instead, the Azure identity is inferred from the service account, which is tied to that identity through a federated identity credential.
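+
+For example, a component metadata sketch that pins one specific identity when several are available (the client ID shown is a placeholder):
+
+```yaml
+spec:
+  type: secretstores.azure.keyvault
+  version: v1
+  metadata:
+  - name: vaultName
+    value: "[your_keyvault_name]"
+  # Selects which managed identity to use when several are associated with the app
+  - name: azureClientId
+    value: "[your_managed_identity_client_id]"
+```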
+
{{% /codetab %}}
{{< /tabs >}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md
index 5a6167d30..e0f6be7f3 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md
@@ -11,6 +11,7 @@ aliases:
Create a file called `cockroachdb.yaml`, paste the following, and replace the `` value with your connection string. CockroachDB connection strings follow the same standard as PostgreSQL connection strings. For example, `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`. See the CockroachDB [documentation on database connections](https://www.cockroachlabs.com/docs/stable/connect-to-the-database.html) for information on how to define a connection string.
+If you also want to configure CockroachDB to store actors, add the `actorStateStore` option as in the example below.
```yaml
apiVersion: dapr.io/v1alpha1
@@ -21,16 +22,44 @@ spec:
type: state.cockroachdb
version: v1
metadata:
+ # Connection string
- name: connectionString
value: ""
+ # Timeout for database operations, in seconds (optional)
+ #- name: timeoutInSeconds
+ # value: 20
+ # Name of the table where to store the state (optional)
+ #- name: tableName
+ # value: "state"
+ # Name of the table where to store metadata used by Dapr (optional)
+ #- name: metadataTableName
+ # value: "dapr_metadata"
+ # Cleanup interval in seconds, to remove expired rows (optional)
+ #- name: cleanupIntervalInSeconds
+ # value: 3600
+ # Max idle time for connections before they're closed (optional)
+ #- name: connectionMaxIdleTime
+ # value: 0
+ # Uncomment this if you wish to use CockroachDB as a state store for actors (optional)
+ #- name: actorStateStore
+ # value: "true"
```
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| connectionString | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
-| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
+| `connectionString` | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
+| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20` | `30`
+| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. Can optionally have the schema name as prefix, such as `public.state` | `"state"`, `"public.state"`
+| `metadataTableName` | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. Can optionally have the schema name as prefix, such as `public.dapr_metadata` | `"dapr_metadata"`, `"public.dapr_metadata"`
+| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
+| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
+| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
## Setup CockroachDB
@@ -62,6 +91,19 @@ The easiest way to install CockroachDB on Kubernetes is by using the [CockroachD
{{% /tabs %}}
+## Advanced
+
+### TTLs and cleanups
+
+This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
+
+Because CockroachDB doesn't have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
+
+You can set the interval for the deletion of expired records with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
+
+- Longer intervals mean less frequent scans for expired rows, but expired records may be stored longer, potentially using more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value - for example, `300` (300 seconds, or 5 minutes).
+- If you do not plan to use TTLs with Dapr and the CockroachDB state store, consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database, as in the sketch below.
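+
+For example, a metadata entry sketch that disables the periodic cleanup entirely:
+
+```yaml
+  # A value <= 0 disables the background garbage collector
+  - name: cleanupIntervalInSeconds
+    value: "-1"
+```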
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
index 007e7b6ad..3237b1092 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
@@ -34,7 +34,7 @@ spec:
value: # Optional
- name: maxRetryBackoff
value: # Optional
- - name: failover
+ - name: failover
value: # Optional
- name: sentinelMasterName
value: # Optional
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md
index f7de752d2..86aa92d91 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md
@@ -33,6 +33,10 @@ spec:
value: # Optional. defaults to "dbo"
- name: indexedProperties
value: # Optional. List of IndexedProperties.
+ - name: metadataTableName # Optional. Name of the table where to store metadata used by Dapr
+ value: "dapr_metadata"
+ - name: cleanupIntervalInSeconds # Optional. Cleanup interval in seconds, to remove expired rows
+ value: 300
```
@@ -58,6 +62,8 @@ If you wish to use SQL server as an [actor state store]({{< ref "state_api.md#co
| schema | N | The schema to use. Defaults to `"dbo"` | `"dapr"`,`"dbo"`
| indexedProperties | N | List of IndexedProperties. | `'[{"column": "transactionid", "property": "id", "type": "int"}, {"column": "customerid", "property": "customer", "type": "nvarchar(100)"}]'`
| actorStateStore | N | Indicates that Dapr should configure this component for the actor state store ([more information]({{< ref "state_api.md#configuring-state-store-for-actors" >}})). | `"true"`
+| metadataTableName | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. | `"dapr_metadata"`
+| cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
## Create Azure SQL instance
@@ -80,6 +86,23 @@ When connecting with a dedicated user (not `sa`), these authorizations are requi
- `CREATE TABLE`
- `CREATE TYPE`
+### TTLs and cleanups
+
+This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
+
+Because SQL Server doesn't have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
+
+You can set the interval for the deletion of expired records with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
+
+- Longer intervals mean less frequent scans for expired rows, but expired records may be stored longer, potentially using more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value - for example, `300` (300 seconds, or 5 minutes).
+- If you do not plan to use TTLs with Dapr and the SQL Server state store, consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database.
+
+The state store does not have an index on the `ExpireDate` column, which means each cleanup operation must perform a full table scan. If you intend to write to the table with a large number of records that use TTLs, consider creating an index on the `ExpireDate` column. An index makes queries faster, but uses more storage space and slightly slows down writes.
+
+```sql
+CREATE CLUSTERED INDEX expiredate_idx ON state(ExpireDate ASC)
+```
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
diff --git a/daprdocs/data/components/state_stores/generic.yaml b/daprdocs/data/components/state_stores/generic.yaml
index 32cce46b7..621f91fa5 100644
--- a/daprdocs/data/components/state_stores/generic.yaml
+++ b/daprdocs/data/components/state_stores/generic.yaml
@@ -29,7 +29,7 @@
crud: true
transactions: true
etag: true
- ttl: false
+ ttl: true
query: true
- component: Couchbase
link: setup-couchbase
diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html
index 1ac5196d0..8257e34f0 100644
--- a/daprdocs/layouts/shortcodes/dapr-latest-version.html
+++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html
@@ -1 +1 @@
-{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.0{{ else if .Get "cli" }}1.10.0{{ else }}1.9.5{{ end -}}
+{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.4{{ else if .Get "cli" }}1.10.0{{ else }}1.10.4{{ end -}}
diff --git a/daprdocs/static/images/building-block-pub-sub-example.png b/daprdocs/static/images/building-block-pub-sub-example.png
deleted file mode 100644
index 4ffe87ce4..000000000
Binary files a/daprdocs/static/images/building-block-pub-sub-example.png and /dev/null differ
diff --git a/daprdocs/static/images/concepts-components.png b/daprdocs/static/images/concepts-components.png
index 6c2977ddc..b9ab8c5fb 100644
Binary files a/daprdocs/static/images/concepts-components.png and b/daprdocs/static/images/concepts-components.png differ
diff --git a/daprdocs/static/images/datadog-traces.png b/daprdocs/static/images/datadog-traces.png
new file mode 100644
index 000000000..3db3461e7
Binary files /dev/null and b/daprdocs/static/images/datadog-traces.png differ
diff --git a/daprdocs/static/images/grafana-prometheus-dapr-server-url.png b/daprdocs/static/images/grafana-prometheus-dapr-server-url.png
index 1098b526f..2a65dd5a1 100644
Binary files a/daprdocs/static/images/grafana-prometheus-dapr-server-url.png and b/daprdocs/static/images/grafana-prometheus-dapr-server-url.png differ
diff --git a/daprdocs/static/images/pubsub-howto-overview.png b/daprdocs/static/images/pubsub-howto-overview.png
new file mode 100644
index 000000000..cb0cf1a29
Binary files /dev/null and b/daprdocs/static/images/pubsub-howto-overview.png differ
diff --git a/daprdocs/static/images/skip-tls-verify.png b/daprdocs/static/images/skip-tls-verify.png
new file mode 100644
index 000000000..2a65dd5a1
Binary files /dev/null and b/daprdocs/static/images/skip-tls-verify.png differ
diff --git a/daprdocs/static/images/workflow-trace-spans-zipkin.png b/daprdocs/static/images/workflow-trace-spans-zipkin.png
new file mode 100644
index 000000000..4b6e5daa2
Binary files /dev/null and b/daprdocs/static/images/workflow-trace-spans-zipkin.png differ
diff --git a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip b/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip
index bd76d31d6..21bc81f93 100644
Binary files a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip and b/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip differ
diff --git a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip
index 47a8aa326..6ef5a3b87 100644
Binary files a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip and b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip differ
diff --git a/sdkdocs/dotnet b/sdkdocs/dotnet
index 9dcae7b0e..f42b690f4 160000
--- a/sdkdocs/dotnet
+++ b/sdkdocs/dotnet
@@ -1 +1 @@
-Subproject commit 9dcae7b0e771d7328559bef1dd65df4c1a54b793
+Subproject commit f42b690f4c67e6bb4209932f660c46a96d0b0457