diff --git a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
index a0ef21650..98a499b09 100644
--- a/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
+++ b/daprdocs/content/en/developing-applications/local-development/multi-app-dapr-run/multi-app-template.md
@@ -72,19 +72,19 @@ version: 1
common: # optional section for variables shared across apps
resourcesPath: ./app/components # any dapr resources to be shared across apps
env: # any environment variable shared across apps
- - DEBUG: true
+ DEBUG: true
apps:
- appID: webapp # optional
appDirPath: .dapr/webapp/ # REQUIRED
resourcesPath: .dapr/resources # (optional) can be default by convention
configFilePath: .dapr/config.yaml # (optional) can be default by convention too, ignore if file is not found.
- appProtocol: HTTP
+ appProtocol: http
appPort: 8080
appHealthCheckPath: "/healthz"
command: ["python3" "app.py"]
- appID: backend # optional
appDirPath: .dapr/backend/ # REQUIRED
- appProtocol: GRPC
+ appProtocol: grpc
appPort: 3000
unixDomainSocket: "/tmp/test-socket"
env:
@@ -112,7 +112,7 @@ The properties for the Multi-App Run template align with the `dapr run` CLI flag
| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
| `resourcesPath` | N | Path to your Dapr resources. Can be default by convention; ignore if directory isn't found | `./app/components`, `./webapp/components` |
| `configFilePath` | N | Path to your application's configuration file | `./webapp/config.yaml` |
-| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `HTTP`, `GRPC` |
+| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `http`, `grpc` |
| `appPort` | N | The port your application is listening on | `8080`, `3000` |
| `daprHTTPPort` | N | Dapr HTTP port | |
| `daprGRPCPort` | N | Dapr GRPC port | |
diff --git a/daprdocs/content/en/getting-started/install-dapr-cli.md b/daprdocs/content/en/getting-started/install-dapr-cli.md
index 123067e3b..82474d9ae 100644
--- a/daprdocs/content/en/getting-started/install-dapr-cli.md
+++ b/daprdocs/content/en/getting-started/install-dapr-cli.md
@@ -202,7 +202,7 @@ Each release of Dapr CLI includes various OSes and architectures. You can manual
Verify the CLI is installed by restarting your terminal/command prompt and running the following:
```bash
-dapr
+dapr -h
```
**Output:**
diff --git a/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
index 55c5472b0..fd8cb0d37 100644
--- a/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/bindings-quickstart.md
@@ -90,7 +90,7 @@ dapr run --app-id batch-sdk --app-port 50051 --resources-path ../../../component
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The Dapr sidecar triggers the binding by calling a matching route in your application via HTTP POST.
```python
# Triggered by Dapr input binding
@@ -295,7 +295,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --resources-path ../../../components -- node index.js
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The Dapr sidecar triggers the binding by calling a matching route in your application via HTTP POST.
```javascript
async function start() {
@@ -498,7 +498,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 7002 --resources-path ../../../components -- dotnet run
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The Dapr sidecar triggers the binding by calling a matching route in your application via HTTP POST.
```csharp
app.MapPost("/" + cronBindingName, async () => {
@@ -704,7 +704,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 8080 --resources-path ../../../components -- java -jar target/BatchProcessingService-0.0.1-SNAPSHOT.jar
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The Dapr sidecar triggers the binding by calling a matching route in your application via HTTP POST.
```java
@PostMapping(path = cronBindingPath, consumes = MediaType.ALL_VALUE)
@@ -911,7 +911,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
dapr run --app-id batch-sdk --app-port 6002 --dapr-http-port 3502 --dapr-grpc-port 60002 --resources-path ../../../components -- go run .
```
-The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
+The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The Dapr sidecar triggers the binding by calling a matching route in your application via HTTP POST.
```go
// Triggered by Dapr input binding
diff --git a/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
index e457612a1..9efe06b58 100644
--- a/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/configuration-quickstart.md
@@ -64,7 +64,7 @@ pip3 install -r requirements.txt
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
+dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@@ -90,7 +90,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
+dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@@ -187,7 +187,7 @@ npm install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
+dapr run --app-id order-processor --resources-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
The expected output:
@@ -209,7 +209,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
+dapr run --app-id order-processor --resources-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
The app will return the updated configuration values:
@@ -309,7 +309,7 @@ dotnet build
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
+dapr run --app-id order-processor-http --resources-path ../../../components/ --app-port 7001 -- dotnet run --project .
```
The expected output:
@@ -331,7 +331,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
+dapr run --app-id order-processor-http --resources-path ../../../components/ --app-port 7001 -- dotnet run --project .
```
The app will return the updated configuration values:
@@ -428,7 +428,7 @@ mvn clean install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
+dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The expected output:
@@ -450,7 +450,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
+dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The app will return the updated configuration values:
@@ -537,7 +537,7 @@ cd configuration/go/sdk/order-processor
Run the `order-processor` service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
+dapr run --app-id order-processor --app-port 6001 --resources-path ../../../components -- go run .
```
The expected output:
@@ -560,7 +560,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
Run the `order-processor` service again:
```bash
-dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
+dapr run --app-id order-processor --app-port 6001 --resources-path ../../../components -- go run .
```
The app will return the updated configuration values:
@@ -636,4 +636,4 @@ Join the discussion in our [discord channel](https://discord.com/channels/778680
- [Go](https://github.com/dapr/quickstarts/tree/master/configuration/go/http)
- Learn more about [Configuration building block]({{< ref configuration-api-overview >}})
-{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
\ No newline at end of file
+{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
diff --git a/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
index 9c6460290..cb6096c51 100644
--- a/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/pubsub-quickstart.md
@@ -56,7 +56,7 @@ pip3 install -r requirements.txt
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --resources-path ../../../components/ --app-port 5001 -- python3 app.py
+dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6002 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@@ -273,7 +273,7 @@ dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --resources
In the `checkout` publisher service, we're publishing the orderId message to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
```js
-const client = new DaprClient(DAPR_HOST, DAPR_HTTP_PORT);
+const client = new DaprClient();
await client.pubsub.publish(PUBSUB_NAME, PUBSUB_TOPIC, order);
console.log("Published data: " + JSON.stringify(order));
@@ -389,7 +389,7 @@ dotnet build
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
-dapr run --app-id order-processor --resources-path ../../../components --app-port 7002 -- dotnet run
+dapr run --app-id order-processor --resources-path ../../../components --app-port 7005 -- dotnet run
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
diff --git a/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
index 4f755ded0..ba5f56523 100644
--- a/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/serviceinvocation-quickstart.md
@@ -298,7 +298,7 @@ Dapr invokes an application on any Dapr instance. In the code, the sidecar progr
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
-- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
+- [.NET 6 SDK or .NET 7 SDK installed](https://dotnet.microsoft.com/download).
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
diff --git a/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md
index 69a0d65f6..ecc916cc6 100644
--- a/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/statemanagement-quickstart.md
@@ -177,29 +177,19 @@ dapr run --app-id order-processor --resources-path ../../../resources/ -- npm ru
The `order-processor` service writes, reads, and deletes an `orderId` key/value pair to the `statestore` instance [defined in the `statestore.yaml` component]({{< ref "#statestoreyaml-component-file" >}}). As soon as the service starts, it performs a loop.
```js
- const client = new DaprClient(DAPR_HOST, DAPR_HTTP_PORT);
+const client = new DaprClient()
- // Save state into the state store
- client.state.save(STATE_STORE_NAME, [
- {
- key: orderId.toString(),
- value: order
- }
- ]);
- console.log("Saving Order: ", order);
+// Save state into a state store
+await client.state.save(DAPR_STATE_STORE_NAME, state)
+console.log("Saving Order: ", order)
- // Get state from the state store
- var result = client.state.get(STATE_STORE_NAME, orderId.toString());
- result.then(function(val) {
- console.log("Getting Order: ", val);
- });
-
- // Delete state from the state store
- client.state.delete(STATE_STORE_NAME, orderId.toString());
- result.then(function(val) {
- console.log("Deleting Order: ", val);
- });
+// Get state from a state store
+const savedOrder = await client.state.get(DAPR_STATE_STORE_NAME, order.orderId)
+console.log("Getting Order: ", savedOrd)
+// Delete state from the state store
+await client.state.delete(DAPR_STATE_STORE_NAME, order.orderId)
+console.log("Deleting Order: ", order)
```
### Step 3: View the order-processor outputs
diff --git a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
index d454edc50..f25808520 100644
--- a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
+++ b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md
@@ -97,6 +97,12 @@ Expected output:
== APP == Workflow Status: Completed
```
+### (Optional) Step 4: View in Zipkin
+
+If you have Zipkin configured for Dapr locally on your machine, you can view the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
+
+

+
### What happened?
When you ran `dapr run --app-id order-processor dotnet run`:
diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md
index 51c09bd08..40eb09427 100644
--- a/daprdocs/content/en/operations/configuration/configuration-overview.md
+++ b/daprdocs/content/en/operations/configuration/configuration-overview.md
@@ -214,7 +214,7 @@ See the [preview features]({{< ref "preview-features.md" >}}) guide for informat
### Example sidecar configuration
-The following yaml shows an example configuration file that can be applied to an applications' Dapr sidecar.
+The following YAML shows an example configuration file that can be applied to an application's Dapr sidecar.
```yml
apiVersion: dapr.io/v1alpha1
@@ -266,15 +266,21 @@ There is a single configuration file called `daprsystem` installed with the Dapr
### Control-plane configuration settings
-A Dapr control plane configuration can configure the following settings:
+A Dapr control plane configuration contains the following sections:
+
+- [`mtls`](#mtls-mutual-tls) for mTLS (Mutual TLS)
+
+#### mTLS (Mutual TLS)
+
+The `mtls` section contains properties for mTLS.
| Property | Type | Description |
|------------------|--------|-------------|
-| enabled | bool | Set mtls to be enabled or disabled
-| allowedClockSkew | string | The extra time to give for certificate expiry based on possible clock skew on a machine. Default is 15 minutes.
-| workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours
+| `enabled` | bool | If true, enables mTLS for communication between services and apps in the cluster.
+| `allowedClockSkew` | string | Allowed tolerance when checking the expiration of TLS certificates, to allow for clock skew. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `15m` (15 minutes).
+| `workloadCertTTL` | string | How long a TLS certificate issued by Dapr is valid for. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `24h` (24 hours).
-See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< ref "security-concept.md" >}}) for more information.
+See the [mTLS how-to]({{< ref "mtls.md" >}}) and [security concepts]({{< ref "security-concept.md" >}}) for more information.
### Example control plane configuration
@@ -282,7 +288,7 @@ See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< re
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
- name: default
+ name: daprsystem
namespace: default
spec:
mtls:
diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md
index 698026b4a..b53aaf923 100644
--- a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md
+++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md
@@ -11,7 +11,7 @@ This article provides guidance on running Dapr with Podman on a Windows/Linux/ma
## Prerequisites
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
-- [Podman](https://podman.io/getting-started/installation.html)
+- [Podman](https://podman.io/docs/tutorials/installation)
## Initialize Dapr environment
diff --git a/daprdocs/content/en/operations/monitoring/metrics/grafana.md b/daprdocs/content/en/operations/monitoring/metrics/grafana.md
index d0442b032..5d3949552 100644
--- a/daprdocs/content/en/operations/monitoring/metrics/grafana.md
+++ b/daprdocs/content/en/operations/monitoring/metrics/grafana.md
@@ -142,6 +142,8 @@ First you need to connect Prometheus as a data source to Grafana.
- Name: `Dapr`
- HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring`
- Default: On
+ - Skip TLS Verify: On
+ - Necessary in order to save and test the configuration
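+
+If you prefer to provision the data source from a file instead of through the UI, a rough equivalent using Grafana's file-based provisioning format (a sketch, not part of the original walkthrough) is:
+
+```yaml
+apiVersion: 1
+datasources:
+  - name: Dapr
+    type: prometheus
+    url: http://dapr-prom-prometheus-server.dapr-monitoring
+    isDefault: true
+    jsonData:
+      tlsSkipVerify: true
+```
+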

diff --git a/daprdocs/content/en/operations/monitoring/metrics/prometheus.md b/daprdocs/content/en/operations/monitoring/metrics/prometheus.md
index da29d0315..3c787602f 100644
--- a/daprdocs/content/en/operations/monitoring/metrics/prometheus.md
+++ b/daprdocs/content/en/operations/monitoring/metrics/prometheus.md
@@ -90,7 +90,7 @@ If you are Minikube user or want to disable persistent volume for development pu
```bash
helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring
- --set alertmanager.persistentVolume.enable=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
+ --set alertmanager.persistence.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
```
3. Validation
@@ -119,4 +119,4 @@ dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0
## References
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
-* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
\ No newline at end of file
+* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
diff --git a/daprdocs/content/en/operations/monitoring/tracing/datadog.md b/daprdocs/content/en/operations/monitoring/tracing/datadog.md
new file mode 100644
index 000000000..3742cf408
--- /dev/null
+++ b/daprdocs/content/en/operations/monitoring/tracing/datadog.md
@@ -0,0 +1,55 @@
+---
+type: docs
+title: "How-To: Set up Datadog for distributed tracing"
+linkTitle: "Datadog"
+weight: 5000
+description: "Set up Datadog for distributed tracing"
+---
+
+Dapr captures metrics and traces that can be sent directly to Datadog through the OpenTelemetry Collector Datadog exporter.
+
+## Configure Dapr tracing with the OpenTelemetry Collector and Datadog
+
+Using the OpenTelemetry Collector Datadog exporter, you can configure Dapr to create traces for each application in your Kubernetes cluster and collect them in Datadog.
+
+> Before you begin, [set up the OpenTelemetry Collector]({{< ref "open-telemetry-collector.md#setting-opentelemetry-collector" >}}).
+
+1. Add your Datadog API key to the `./deploy/open-telemetry-collector-generic-datadog.yaml` file in the `datadog` exporter configuration section:
+ ```yaml
+ data:
+ otel-collector-config:
+ ...
+ exporters:
+ ...
+ datadog:
+ api:
+ key:
+ ```
+
+1. Apply the `opentelemetry-collector` configuration by running the following command.
+
+ ```sh
+ kubectl apply -f ./deploy/open-telemetry-collector-generic-datadog.yaml
+ ```
+
+1. Set up a Dapr configuration file that will turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.
+
+ ```sh
+ kubectl apply -f ./deploy/collector-config.yaml
+   ```
+
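+   A minimal sketch of what `collector-config.yaml` typically contains (names and values are illustrative; see the OpenTelemetry Collector guide linked above for the exact file):
+
+   ```yaml
+   apiVersion: dapr.io/v1alpha1
+   kind: Configuration
+   metadata:
+     name: appconfig
+   spec:
+     tracing:
+       samplingRate: "1"
+       zipkin:
+         endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v1/spans"
+   ```
+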
+1. Apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing.
+
+ ```yml
+ annotations:
+ dapr.io/config: "appconfig"
+   ```
+
+1. Create and configure the application. Once running, telemetry data is sent to Datadog and visible in Datadog APM.
+
+## Related links
+
+* [Complete example of setting up Dapr on a Kubernetes cluster](https://github.com/ericmustin/quickstarts/tree/master/hello-kubernetes)
+* [Datadog documentation about OpenTelemetry support](https://docs.datadoghq.com/opentelemetry/)
+* [Datadog Application Performance Monitoring](https://docs.datadoghq.com/tracing/)
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/resiliency/policies.md b/daprdocs/content/en/operations/resiliency/policies.md
index 515e030b0..56ab3cb91 100644
--- a/daprdocs/content/en/operations/resiliency/policies.md
+++ b/daprdocs/content/en/operations/resiliency/policies.md
@@ -12,12 +12,12 @@ Define timeouts, retries, and circuit breaker policies under `policies`. Each po
## Timeouts
-Timeouts can be used to early-terminate long-running operations. If you've exceeded a timeout duration:
+Timeouts are optional policies that can be used to terminate long-running operations early. If an operation exceeds the timeout duration:
- The operation in progress is terminated (if possible).
- An error is returned.
-Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`.
+Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`. Timeouts have no set maximum value.
Example:
@@ -31,6 +31,8 @@ spec:
largeResponse: 10s
```
+If you don't specify a timeout value, the policy does not enforce one; instead, whatever timeout the request client is configured with applies.
+
## Retries
With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:
@@ -69,6 +71,8 @@ spec:
maxRetries: -1 # Retry indefinitely
```
## Circuit Breakers
Circuit Breaker (CB) policies are used when other applications/services/components are experiencing elevated failure rates. CBs monitor the requests and shut off all traffic to the impacted service when a certain criteria is met ("open" state). By doing this, CBs give the service time to recover from their outage instead of flooding it with events. The CB can also allow partial traffic through to see if the system has healed ("half-open" state). Once requests resume being successful, the CB gets into "closed" state and allows traffic to completely resume.
@@ -95,7 +99,7 @@ spec:
## Overriding default retries
-Dapr provides default retries for certain request failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
+Dapr provides default retries for any unsuccessful request, including failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries` overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
> Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.
diff --git a/daprdocs/content/en/operations/resiliency/resiliency-overview.md b/daprdocs/content/en/operations/resiliency/resiliency-overview.md
index ba63fe137..bb6cdb502 100644
--- a/daprdocs/content/en/operations/resiliency/resiliency-overview.md
+++ b/daprdocs/content/en/operations/resiliency/resiliency-overview.md
@@ -163,14 +163,14 @@ spec:
Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184&v=7D6HOU3Ms6g&feature=youtu.be):
-
+
- - [Policies]({{< ref "policies.md" >}})
- - [Targets]({{< ref "targets.md" >}})
## Next steps
-
+Learn more about resiliency policies and targets:
+ - [Policies]({{< ref "policies.md" >}})
+ - [Targets]({{< ref "targets.md" >}})
Try out one of the Resiliency quickstarts:
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md
index c2677880b..829690f50 100644
--- a/daprdocs/content/en/operations/support/support-release-policy.md
+++ b/daprdocs/content/en/operations/support/support-release-policy.md
@@ -34,7 +34,12 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|--------------------|:--------:|:--------|---------|---------|---------|
-| February 14 2023 | 1.10.0 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported (current) |
+| April 13 2023 | 1.10.5 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported (current) |
+| March 16 2023 | 1.10.4 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| March 14 2023 | 1.10.3 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| February 24 2023 | 1.10.2 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| February 20 2023 | 1.10.1 | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
+| February 14 2023   | 1.10.0   | 1.10.0 | Java 1.8.0 Go 1.6.0 PHP 1.1.0 Python 1.9.0 .NET 1.10.0 JS 2.5.0 | 0.11.0 | Supported |
| December 2nd 2022 | 1.9.5 | 1.9.1 | Java 1.7.0 Go 1.6.0 PHP 1.1.0 Python 1.8.3 .NET 1.9.0 JS 2.4.2 | 0.11.0 | Supported |
| November 17th 2022 | 1.9.4 | 1.9.1 | Java 1.7.0 Go 1.6.0 PHP 1.1.0 Python 1.8.3 .NET 1.9.0 JS 2.4.2 | 0.11.0 | Supported |
| November 4th 2022 | 1.9.3 | 1.9.1 | Java 1.7.0 Go 1.6.0 PHP 1.1.0 Python 1.8.3 .NET 1.9.0 JS 2.4.2 | 0.11.0 | Supported |
@@ -86,15 +91,18 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| | 1.6.0 | 1.6.2 |
| | 1.6.2 | 1.7.5 |
| | 1.7.5 | 1.8.6 |
-| | 1.8.6 | 1.9.5 |
+| | 1.8.6 | 1.9.6 |
+| | 1.9.6 | 1.10.5 |
| 1.6.0 to 1.6.2 | N/A | 1.7.5 |
| | 1.7.5 | 1.8.6 |
-| | 1.8.6 | 1.9.5 |
+| | 1.8.6 | 1.9.6 |
+| | 1.9.6 | 1.10.5 |
| 1.7.0 to 1.7.5 | N/A | 1.8.6 |
-| | 1.8.6 | 1.9.5 |
-| 1.8.0 to 1.8.6 | N/A | 1.9.5 |
-| 1.9.0 | N/A | 1.9.5 |
-| 1.10.0 | N/A | 1.10.0 |
+| | 1.8.6 | 1.9.6 |
+| | 1.9.6 | 1.10.5 |
+| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
+| 1.9.0 | N/A | 1.9.6 |
+| 1.10.0 | N/A | 1.10.5 |
## Breaking changes and deprecations
@@ -147,6 +155,7 @@ After announcing a future breaking change, the change will happen in 2 releases
| GET /v1.0/shutdown API (Users should use [POST API]({{< ref kubernetes-job.md >}}) instead) | 1.2.0 | 1.4.0 |
| Java domain builder classes deprecated (Users should use [setters](https://github.com/dapr/java-sdk/issues/587) instead) | Java SDK 1.3.0 | Java SDK 1.5.0 |
| Service invocation will no longer provide a default content type header of `application/json` when no content-type is specified. You must explicitly [set a content-type header]({{< ref "service_invocation_api.md#request-contents" >}}) for service invocation if your invoked apps rely on this header. | 1.7.0 | 1.9.0 |
+| gRPC service invocation using the `invoke` method is deprecated. Use proxy mode service invocation instead. See [How-To: Invoke services using gRPC]({{< ref howto-invoke-services-grpc.md >}}) to use the proxy mode. | 1.9.0 | 1.10.0 |
## Upgrade on Hosting platforms
diff --git a/daprdocs/content/en/reference/api/metadata_api.md b/daprdocs/content/en/reference/api/metadata_api.md
index e2adc08de..336711013 100644
--- a/daprdocs/content/en/reference/api/metadata_api.md
+++ b/daprdocs/content/en/reference/api/metadata_api.md
@@ -93,7 +93,8 @@ curl http://localhost:3500/v1.0/metadata
],
"extended": {
"cliPID":"1031040",
- "appCommand":"uvicorn --port 3000 demo_actor_service:app"
+ "appCommand":"uvicorn --port 3000 demo_actor_service:app",
+ "daprRuntimeVersion": "1.10.0"
},
"components":[
{
diff --git a/daprdocs/content/en/reference/api/pubsub_api.md b/daprdocs/content/en/reference/api/pubsub_api.md
index a421e89a5..68619a753 100644
--- a/daprdocs/content/en/reference/api/pubsub_api.md
+++ b/daprdocs/content/en/reference/api/pubsub_api.md
@@ -262,10 +262,17 @@ A JSON-encoded payload body with the processing status against each entry needs
```json
{
- "statuses": {
- "entryId": "",
+ "statuses":
+ [
+ {
+ "entryId": "",
"status": ""
- }
+ },
+ {
+ "entryId": "",
+ "status": ""
+ }
+ ]
}
```
diff --git a/daprdocs/content/en/reference/api/workflow_api.md b/daprdocs/content/en/reference/api/workflow_api.md
index 4b6d8f259..442b206ad 100644
--- a/daprdocs/content/en/reference/api/workflow_api.md
+++ b/daprdocs/content/en/reference/api/workflow_api.md
@@ -5,9 +5,120 @@ linkTitle: "Workflow API"
description: "Detailed documentation on the workflow API"
weight: 900
---
+
+Dapr provides the ability to interact with workflows through its workflow API, and comes with a built-in `dapr` workflow component.
+
+## Start workflow request
+
+Start a workflow instance with the given name and optionally, an instance ID.
+
+```bash
+POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/start[?instanceId=<instanceId>]
+```
+
+Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
+
+### URL parameters
+
+Parameter | Description
+--------- | -----------
+`workflowComponentName` | Current default is `dapr` for Dapr Workflows
+`workflowName` | Identify the workflow type
+`instanceId` | (Optional) Unique value created for each run of a specific workflow
+
+### Request content
+
+Any request content will be passed to the workflow as input. The Dapr API passes the content as-is without attempting to interpret it.
+
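+For example, a start request for a hypothetical workflow type named `OrderProcessingWorkflow`, using the built-in `dapr` component (names and IDs are illustrative), might look like:
+
+```bash
+curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceId=12345678" \
+  -H "Content-Type: application/json" \
+  -d '{"input": "my-order-payload"}'
+```
+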
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`202` | Accepted
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+The API call will provide a response similar to this:
+
+```json
+{
+ "instanceID": "12345678"
+}
+```
+
+## Terminate workflow request
+
+Terminate a running workflow instance with the given name and instance ID.
+
+```bash
+POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
+```
+
+### URL parameters
+
+Parameter | Description
+--------- | -----------
+`workflowComponentName` | Current default is `dapr` for Dapr Workflows
+`instanceId` | Unique value created for each run of a specific workflow
+
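+For example, terminating the hypothetical instance started above:
+
+```bash
+curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate"
+```
+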
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`202` | Accepted
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+This API does not return any content.
+
+## Get workflow request
+
+Get information about a given workflow instance.
+
+```bash
+GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>
+```
+
+### URL parameters
+
+Parameter | Description
+--------- | -----------
+`workflowComponentName` | Current default is `dapr` for Dapr Workflows
+`instanceId` | Unique value created for each run of a specific workflow
+
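+For example, fetching the status of the hypothetical instance above:
+
+```bash
+curl -X GET "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678"
+```
+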
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`200` | OK
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+The API call will provide a JSON response similar to this:
+
+```json
+{
+ "createdAt": "2023-01-12T21:31:13Z",
+ "instanceID": "12345678",
+ "lastUpdatedAt": "2023-01-12T21:31:13Z",
+ "properties": {
+ "property1": "value1",
+    "property2": "value2"
+  },
+  "runtimeStatus": "RUNNING"
+}
+```
+
## Component format
A Dapr `workflow.yaml` component file has the following structure:
+
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
@@ -20,101 +131,15 @@ spec:
- name:
value:
```
+
| Setting | Description |
| ------- | ----------- |
| `metadata.name` | The name of the workflow component. |
| `spec/metadata` | Additional metadata parameters specified by workflow component |
-
-
-## Supported workflow methods
-
-### POST start workflow request
-```bash
-POST http://localhost:3500/v1.0-alpha1/workflows////start
-```
-### POST terminate workflow request
-```bash
-POST http://localhost:3500/v1.0-alpha1/workflows///terminate
-```
-### GET workflow request
-```bash
-GET http://localhost:3500/v1.0-alpha1/workflows///
-```
-
-### URL parameters
-
-Parameter | Description
---------- | -----------
-`workflowComponentName` | Current default is `dapr` for Dapr Workflows
-`workflowName` | Identify the workflow type
-`instanceId` | Unique value created for each run of a specific workflow
-
-
-### Headers
-
-As part of the start HTTP request, the caller can optionally include one or more `dapr-workflow-metadata` HTTP request headers. The format of the header value is a list of `{key}={value}` values, similar to the format for HTTP cookie request headers. These key/value pairs are saved in the workflow instance’s metadata and can be made available for search (in cases where the workflow implementation supports this kind of search).
-
-
-## HTTP responses
-
-### Response codes
-
-Code | Description
----- | -----------
-`202` | Accepted
-`400` | Request was malformed
-`500` | Request formatted correctly, error in dapr code or underlying component
-
-### Examples of response body for each method
-
-#### POST start workflow response body
-
-```bash
- "WFInfo": {
- "instance_id": "SampleWorkflow"
- }
-```
-
-
-#### POST terminate workflow response body
-
-```bash
-HTTP/1.1 202 Accepted
-Server: fasthttp
-Date: Thu, 12 Jan 2023 21:31:16 GMT
-Content-Type: application/json
-Content-Length: 139
-Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
-Connection: close
-```
-
-
-### GET workflow response body
-
-```bash
-HTTP/1.1 202 Accepted
-Server: fasthttp
-Date: Thu, 12 Jan 2023 21:31:16 GMT
-Content-Type: application/json
-Content-Length: 139
-Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
-Connection: close
-
-{
- "WFInfo": {
- "instance_id": "SampleWorkflow"
- },
- "start_time": "2023-01-12T21:31:13Z",
- "metadata": {
- "status": "Running",
- "task_queue": "WorkflowSampleQueue"
- }
- }
-```
-
+However, Dapr comes with a built-in `dapr` workflow component, built on Dapr Actors; no component file is required to use it.
+
## Next Steps
- [Workflow API overview]({{< ref workflow-overview.md >}})
-- [Route user to workflow patterns ](todo)
+- [Workflow patterns]({{< ref workflow-patterns.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md b/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md
index c4e49a1f5..ace35d104 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/cron.md
@@ -69,7 +69,7 @@ app.post('/scheduled', async function(req, res){
});
```
-When running this code, note that the `/scheduled` endpoint is called every five minutes by the Dapr sidecar.
+When running this code, note that the `/scheduled` endpoint is called every fifteen minutes by the Dapr sidecar.
## Binding support
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/graghql.md b/daprdocs/content/en/reference/components-reference/supported-bindings/graghql.md
index 7c202a297..9c7894e04 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/graghql.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/graghql.md
@@ -39,6 +39,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|--------------------|:--------:|------------|-----|---------|
| endpoint | Y | Output | GraphQL endpoint string See [here](#url-format) for more details | `"http://localhost:4000/graphql/graphql"` |
| header:[HEADERKEY] | N | Output | GraphQL header. Specify the header key in the `name`, and the header value in the `value`. | `"no-cache"` (see above) |
+| variable:[VARIABLEKEY] | N | Output | GraphQL query variable. Specify the variable name in the `name`, and the variable value in the `value`. | `"123"` (see below) |
### Endpoint and Header format
@@ -65,6 +66,18 @@ Metadata: map[string]string{ "query": `query { users { name } }`},
}
```
+To use a `query` that requires [query variables](https://graphql.org/learn/queries/#variables), add a key-value pair to the `metadata` map for each variable, where the key is the variable name prefixed with `variable:` and the value is the variable's value:
+
+```golang
+in := &dapr.InvokeBindingRequest{
+  Name:      "example.bindings.graphql",
+  Operation: "query",
+  Metadata: map[string]string{
+    "query":            `query HeroNameAndFriends($episode: String!) { hero(episode: $episode) { name } }`,
+    "variable:episode": "JEDI",
+  },
+}
+```
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
index bb5e1f619..4d50e3447 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
@@ -70,6 +70,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
{{% /alert %}}
+
+### S3 Bucket Creation
+{{< tabs "Minio" "LocalStack" "AWS" >}}
+
+{{% codetab %}}
### Using with Minio
[Minio](https://min.io/) is a service that exposes local storage as S3-compatible block storage, and it's a popular alternative to S3 especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:
@@ -78,6 +83,70 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet
3. The value for `region` is not important; you can set it to `us-east-1`.
4. Depending on your environment, you may need to set `disableSSL` to `true` if you're connecting to Minio using a non-secure connection (using the `http://` protocol). If you are using a secure connection (`https://` protocol) but with a self-signed certificate, you may need to set `insecureSSL` to `true`.
+{{% /codetab %}}
+
+{{% codetab %}}
+For local development, the [LocalStack project](https://github.com/localstack/localstack) is used to integrate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run LocalStack.
+
+To run LocalStack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following:
+
+```yaml
+version: "3.8"
+
+services:
+ localstack:
+ container_name: "cont-aws-s3"
+ image: localstack/localstack:1.4.0
+ ports:
+ - "127.0.0.1:4566:4566"
+ environment:
+ - DEBUG=1
+ - DOCKER_HOST=unix:///var/run/docker.sock
+ volumes:
+ - "/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh" # init hook
+ - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+ - "/var/run/docker.sock:/var/run/docker.sock"
+```
+
+To use the S3 component, you need to use an existing bucket. The example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) to set up the bucket, as sketched below.
+
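+A minimal `init-aws.sh` sketch (assuming the bucket name matches the component's `bucket` metadata below):
+
+```sh
+#!/bin/bash
+# Create the bucket used by the S3 binding once LocalStack is ready
+awslocal s3 mb s3://conformance-test-docker
+```
+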
+To use LocalStack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS.
+
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: aws-s3
+ namespace: default
+spec:
+ type: bindings.aws.s3
+ version: v1
+ metadata:
+ - name: bucket
+ value: conformance-test-docker
+ - name: endpoint
+ value: "http://localhost:4566"
+ - name: accessKey
+ value: "my-access"
+ - name: secretKey
+ value: "my-secret"
+ - name: region
+ value: "us-east-1"
+```
+
+{{% /codetab %}}
+
+{{% codetab %}}
+
+To use the S3 component, you need to use an existing bucket. Follow the [AWS documentation for creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).
+
+{{% /codetab %}}
+
+
+
+{{< /tabs >}}
+
## Binding support
This component supports **output binding** with the following operations:
diff --git a/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md b/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md
index e95c6671a..8d19d0b19 100644
--- a/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md
+++ b/daprdocs/content/en/reference/components-reference/supported-middleware/middleware-wasm.md
@@ -11,10 +11,11 @@ WebAssembly is a way to safely run code compiled in other languages. Runtimes
execute WebAssembly Modules (Wasm), which are most often binaries with a `.wasm`
extension.
-The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to rewrite a
-request URI with custom logic compiled to a Wasm binary. In other words, you
-can extend Dapr using external files that are not pre-compiled into the `daprd`
-binary. Dapr embeds [wazero](https://wazero.io) to accomplish this without CGO.
+The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to manipulate
+an incoming request or serve a response with custom logic compiled to a Wasm
+binary. In other words, you can extend Dapr using external files that are not
+pre-compiled into the `daprd` binary. Dapr embeds [wazero](https://wazero.io)
+to accomplish this without CGO.
Wasm modules are loaded from a filesystem path. On Kubernetes, see [mounting
volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}}) to configure
@@ -28,27 +29,21 @@ kind: Component
metadata:
name: wasm
spec:
- type: middleware.http.wasm.basic
+ type: middleware.http.wasm
version: v1
metadata:
- name: path
- value: "./hello.wasm"
- - name: poolSize
- value: 1
+ value: "./router.wasm"
```
## Spec metadata fields
-Minimally, a user must specify a Wasm binary that contains the custom logic
-used to rewrite requests. An instance of the Wasm binary is not safe to use
-concurrently. The below configuration fields control both the binary to
-instantiate and how large an instance pool to use. A larger pool allows higher
-concurrency while consuming more memory.
+Minimally, a user must specify a Wasm binary that implements the
+[http-handler](https://http-wasm.io/http-handler/) protocol. How to compile one is described later.
| Field | Details | Required | Example |
|----------|----------------------------------------------------------------|----------|----------------|
| path | A relative or absolute path to the Wasm binary to instantiate. | true | "./hello.wasm" |
-| poolSize | Number of concurrent instances of the Wasm binary. Default: 10 | false | 1 |
## Dapr configuration
@@ -64,7 +59,60 @@ spec:
httpPipeline:
handlers:
- name: wasm
- type: middleware.http.wasm.basic
+ type: middleware.http.wasm
+```
+
+*Note*: WebAssembly middleware uses more resources than native middleware, so you
+can hit a resource constraint faster than with the same logic in native code.
+Production usage should [control max concurrency]({{< ref control-concurrency.md >}}).
+
+### Generating Wasm
+
+This component lets you manipulate an incoming request or serve a response with
+custom logic compiled using the [http-handler](https://http-wasm.io/http-handler/)
+Application Binary Interface (ABI). The `handle_request` function receives an
+incoming request and can manipulate it or serve a response as necessary.
+
+To compile your Wasm, you must compile source using a http-handler compliant
+guest SDK such as [TinyGo](https://github.com/http-wasm/http-wasm-guest-tinygo).
+
+Here's an example in TinyGo:
+
+```go
+package main
+
+import (
+ "strings"
+
+ "github.com/http-wasm/http-wasm-guest-tinygo/handler"
+ "github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
+)
+
+func main() {
+ handler.HandleRequestFn = handleRequest
+}
+
+// handleRequest implements a simple HTTP router.
+func handleRequest(req api.Request, resp api.Response) (next bool, reqCtx uint32) {
+ // If the URI starts with /host, trim it and dispatch to the next handler.
+ if uri := req.GetURI(); strings.HasPrefix(uri, "/host") {
+ req.SetURI(uri[5:])
+ next = true // proceed to the next handler on the host.
+ return
+ }
+
+ // Serve a static response
+ resp.Headers().Set("Content-Type", "text/plain")
+ resp.Body().WriteString("hello")
+ return // skip the next handler, as we wrote a response.
+}
+```
+
+If using TinyGo, compile as shown below and set the spec metadata field named
+"path" to the location of the output (ex "router.wasm"):
+
+```bash
+tinygo build -o router.wasm -scheduler=none --no-debug -target=wasi router.go
```
### Generating Wasm
@@ -108,4 +156,4 @@ tinygo build -o example.wasm -scheduler=none --no-debug -target=wasi example.go
- [Middleware]({{< ref middleware.md >}})
- [Configuration concept]({{< ref configuration-concept.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})
-- [waPC protocol](https://wapc.io/docs/spec/)
+- [Control max concurrency]({{< ref control-concurrency.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
index 96c27bccd..c616f3251 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
@@ -82,7 +82,7 @@ The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref ku
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the `authRequired` field has
been deprecated from the v1.6 release and instead the `authType` field should be used. If `authRequired` is set to `true`, Dapr will attempt to configure `authType` correctly
-based on the value of `saslPassword`. There are four valid values for `authType`: `none`, `password`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
+based on the value of `saslPassword`. There are five valid values for `authType`: `none`, `password`, `certificate`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
#### None
@@ -275,17 +275,11 @@ spec:
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
value: "my-dapr-app-id"
- name: authType # Required.
- value: "password"
- - name: saslUsername # Required if authType is `password`.
- value: "adminuser"
+ value: "certificate"
- name: consumeRetryInterval # Optional.
value: 200ms
- name: version # Optional.
value: 0.10.2.0
- - name: saslPassword # Required if authRequired is `true`.
- secretKeyRef:
- name: kafka-secrets
- key: saslPasswordSecret
- name: maxMessageBytes # Optional.
value: 1024
- name: caCert # Certificate authority certificate.
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md
index 4f06517af..69e98e64b 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md
@@ -25,6 +25,10 @@ spec:
value: service_account
- name: projectId
value: # replace
+ - name: endpoint # Optional.
+ value: "http://localhost:8085"
+ - name: consumerID # Optional - defaults to the app's own ID
+ value:
- name: identityProjectId
value: # replace
- name: privateKeyId
@@ -46,11 +50,17 @@ spec:
- name: disableEntityManagement
value: "false"
- name: enableMessageOrdering
- value: "false"
+ value: "false"
+ - name: orderingKey # Optional
+ value:
- name: maxReconnectionAttempts # Optional
value: 30
- name: connectionRecoveryInSec # Optional
value: 2
+ - name: deadLetterTopic # Optional
+ value:
+ - name: maxDeliveryAttempts # Optional
+ value: 5
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@@ -60,8 +70,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| type | N | GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`
| projectId | Y | GCP project id| `myproject-123`
+| endpoint | N | GCP endpoint for the component to use. Only used for local development, for example, with the [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"`
+| consumerID | N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID |
| identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"`
| privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"`
| privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`
@@ -73,18 +84,78 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com`
| disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| enableMessageOrdering | N | When set to `"true"`, subscribed messages will be received in order, depending on publishing and permissions configuration. | `"true"`, `"false"`
+| orderingKey | N | The key provided in the request. It's used when `enableMessageOrdering` is set to `true` to order messages based on that key. | `"my-orderingkey"`
| maxReconnectionAttempts | N |Defines the maximum number of reconnect attempts. Default: `30` | `30`
| connectionRecoveryInSec | N |Time in seconds to wait between connection recovery attempts. Default: `2` | `2`
+| deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic **must** exist before using this component. | `"myapp-dlq"`
+| maxDeliveryAttempts | N | Maximum number of attempts to deliver the message. If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximum number of attempts for failed processing of messages. Once that number is reached, the message will be moved to the dead-letter topic. Default: `5` | `5`
+| type | N | **DEPRECATED** GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`
+
{{% alert title="Warning" color="warning" %}}
If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly.
{{% /alert %}}
+## GCP Credentials
+
+Since the GCP Pub/Sub component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
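+
+For example, a common way to supply Application Default Credentials during local development is to point the `GOOGLE_APPLICATION_CREDENTIALS` environment variable at a service account key file before starting the Dapr sidecar. A minimal sketch, where the key file path and app details are illustrative:
+
+```bash
+# Use a service account key file for Application Default Credentials
+export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"
+
+# Start the app with a Dapr sidecar; the GCP client libraries pick up the credentials automatically
+dapr run --app-id myapp --resources-path ./components -- python3 app.py
+```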
+
## Create a GCP Pub/Sub
+
+{{< tabs "Self-Hosted" "GCP" >}}
+
+{{% codetab %}}
+For local development, you can use the [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator) to test the GCP Pub/Sub component. Follow [these instructions](https://cloud.google.com/pubsub/docs/emulator#start) to run it.
+
+To run the GCP Pub/Sub Emulator locally using Docker, use the following `docker-compose.yaml`:
+
+```yaml
+version: '3'
+services:
+ pubsub:
+ image: gcr.io/google.com/cloudsdktool/cloud-sdk:422.0.0-emulators
+ ports:
+ - "8085:8085"
+ container_name: gcp-pubsub
+ entrypoint: gcloud beta emulators pubsub start --project local-test-prj --host-port 0.0.0.0:8085
+```
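+
+To bring up the emulator with this file, a usage sketch (assuming Docker Compose v2 and the file saved as `docker-compose.yaml` in the current directory):
+
+```bash
+# Start the emulator in the background and check its logs
+docker compose up -d
+docker logs gcp-pubsub
+```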
+
+In order to use the GCP Pub/Sub Emulator with your pub/sub component, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against the GCP production API.
+
+The **projectId** attribute must match the `--project` used in either the `docker-compose.yaml` or Docker command.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: gcp-pubsub
+spec:
+ type: pubsub.gcp.pubsub
+ version: v1
+ metadata:
+ - name: projectId
+ value: "local-test-prj"
+ - name: consumerID
+ value: "testConsumer"
+ - name: endpoint
+ value: "localhost:8085"
+```
+
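+Once the component is loaded, you can sanity-check the emulator setup by publishing a message through the Dapr HTTP API. A sketch, where the topic name `orders` and the default Dapr HTTP port are illustrative:
+
+```bash
+curl -X POST http://localhost:3500/v1.0/publish/gcp-pubsub/orders \
+  -H "Content-Type: application/json" \
+  -d '{"orderId": "100"}'
+```
+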
+{{% /codetab %}}
+
+
+{{% codetab %}}
+
You can use either "explicit" or "implicit" credentials to configure access to your GCP pubsub instance. If using explicit, most fields are required. Implicit relies on dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) which has the necessary permissions to access pubsub. In implicit mode, only the `projectId` attribute is needed, all other are optional.
Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up Google Cloud Pub/Sub system.
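+
+For illustration, a minimal component using implicit credentials could look like the following. This is a sketch that assumes the sidecar runs under a KSA mapped to a GSA with the necessary Pub/Sub permissions; the project ID is a placeholder:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: gcp-pubsub
+spec:
+  type: pubsub.gcp.pubsub
+  version: v1
+  metadata:
+  - name: projectId
+    value: "myproject-123"
+```
+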
+{{% /codetab %}}
+
+{{< /tabs >}}
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
index d225eae49..e57c2aa26 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
@@ -77,7 +77,7 @@ spec:
### Enabling message delivery retries
-The Pulsar pub/sub component has no built-in support for retry strategies. This means that sidecar sends a message to the service only once and is not retried in case of failures. To make Dapr use more spohisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the MQTT pub/sub component. Note that it will be the same Dapr sidecar retrying the redelivery the message to the same app instance and not other instances.
+The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once; it is not retried in case of failures. To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying the redelivery of the message to the same app instance, not other instances.
### Delay queue
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
index 4d715fe8b..9b80c9581 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-rabbitmq.md
@@ -18,8 +18,16 @@ spec:
type: pubsub.rabbitmq
version: v1
metadata:
- - name: host
+ - name: connectionString
value: "amqp://localhost:5672"
+  # The fields below are mutually exclusive with connectionString;
+  # use them instead of connectionString to configure the connection field-by-field
+  #- name: protocol
+  #  value: amqp
+  #- name: hostname
+  #  value: localhost
+  #- name: username
+  #  value: username
+  #- name: password
+  #  value: password
- name: consumerID
value: myapp
- name: durable
@@ -48,6 +56,8 @@ spec:
value: 10485760
- name: exchangeKind
value: fanout
+ - name: saslExternal
+ value: false
```
{{% alert title="Warning" color="warning" %}}
@@ -58,7 +68,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| host | Y | Connection-string for the rabbitmq host | `amqp://user:pass@localhost:5672`
+| connectionString | Y* | The RabbitMQ connection string. *Mutually exclusive with the protocol, hostname, username, and password fields | `amqp://user:pass@localhost:5672` |
+| protocol | N* | The RabbitMQ protocol. *Mutually exclusive with the connectionString field | `amqp` |
+| hostname | N* | The RabbitMQ hostname. *Mutually exclusive with the connectionString field | `localhost` |
+| username | N* | The RabbitMQ username. *Mutually exclusive with the connectionString field | `username` |
+| password | N* | The RabbitMQ password. *Mutually exclusive with the connectionString field | `password` |
| consumerID | N | Consumer ID a.k.a consumer tag organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer, i.e. a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the dapr runtime will set it to the dapr application ID. |
| durable | N | Whether or not to use [durable](https://www.rabbitmq.com/queues.html#durability) queues. Defaults to `"false"` | `"true"`, `"false"`
| deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html) Defaults to `"true"` | `"true"`, `"false"`
@@ -73,6 +87,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| maxLen | N | The maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1000"` |
| maxLenBytes | N | Maximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1048576"` |
| exchangeKind | N | Exchange kind of the rabbitmq exchange. Defaults to `"fanout"`. | `"fanout"`,`"topic"` |
+| saslExternal | N | With TLS, should the username be taken from an additional field (for example, CN). See [RabbitMQ Authentication Mechanisms](https://www.rabbitmq.com/access-control.html#mechanisms). Defaults to `"false"`. | `"true"`, `"false"` |
| caCert | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n\n-----END RSA PRIVATE KEY-----"`
@@ -121,6 +136,8 @@ spec:
value: 10485760
- name: exchangeKind
value: fanout
+ - name: saslExternal
+ value: false
- name: caCert
value: ${{ myLoadedCACert }}
- name: clientCert
diff --git a/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md b/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md
index ce2e801f4..91ba14867 100644
--- a/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md
+++ b/daprdocs/content/en/reference/components-reference/supported-secret-stores/azure-keyvault.md
@@ -9,9 +9,10 @@ aliases:
## Component format
-To setup Azure Key Vault secret store create a component of type `secretstores.azure.keyvault`. See [this guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secretstore configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
-
-See also [configure the component](#configure-the-component) guide in this page.
+To set up the Azure Key Vault secret store, create a component of type `secretstores.azure.keyvault`.
+- See [the secret store components guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secret store configuration.
+- See [the guide on referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
+- See [the Configure the component section](#configure-the-component) below.
```yaml
apiVersion: dapr.io/v1alpha1
@@ -37,7 +38,10 @@ spec:
## Authenticating with Azure AD
-The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component, make sure you've read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document and created an Azure AD application (also called Service Principal). Alternatively, make sure you have created a managed identity for your application platform.
+The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component:
+1. Read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
+1. Create an Azure AD application (also called Service Principal).
+1. Alternatively, create a managed identity for your application platform.
## Spec metadata fields
@@ -49,20 +53,21 @@ The Azure Key Vault secret store component supports authentication with Azure AD
Additionally, you must provide the authentication fields as explained in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
-## Example: Create an Azure Key Vault and authorize a Service Principal
+## Example
### Prerequisites
- Azure Subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
-- The scripts below are optimized for a bash or zsh shell
+- You are using a bash or zsh shell
+- You've created an Azure AD application (Service Principal) per the instructions in [Authenticating to Azure]({{< ref authenticating-azure.md >}}). You will need the following values:
-Make sure you have followed the steps in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document to create an Azure AD application (also called Service Principal). You will need the following values:
+ | Value | Description |
+ | ----- | ----------- |
+ | `SERVICE_PRINCIPAL_ID` | The ID of the Service Principal that you created for a given application |
-- `SERVICE_PRINCIPAL_ID`: the ID of the Service Principal that you created for a given application
-
-### Steps
+### Create an Azure Key Vault and authorize a Service Principal
1. Set a variable with the Service Principal that you created:
@@ -70,7 +75,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
```
-2. Set a variable with the location where to create all resources:
+1. Set a variable with the location in which to create all resources:
```sh
LOCATION="[your_location]"
@@ -78,7 +83,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
(You can get the full list of options with: `az account list-locations --output tsv`)
-3. Create a Resource Group, giving it any name you'd like:
+1. Create a Resource Group, giving it any name you'd like:
```sh
RG_NAME="[resource_group_name]"
@@ -88,7 +93,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
| jq -r .id)
```
-4. Create an Azure Key Vault (that uses Azure RBAC for authorization):
+1. Create an Azure Key Vault that uses Azure RBAC for authorization:
```sh
KEYVAULT_NAME="[key_vault_name]"
@@ -99,7 +104,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
--location "${LOCATION}"
```
-5. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
+1. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
In this case, assign the "Key Vault Secrets User" role, which has the "Get secrets" permission over Azure Key Vault.
```sh
@@ -109,15 +114,17 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
```
-Other less restrictive roles like "Key Vault Secrets Officer" and "Key Vault Administrator" can be used as well, depending on your application. For more information about Azure built-in roles for Key Vault see the [Microsoft docs](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
+Other less restrictive roles, like "Key Vault Secrets Officer" and "Key Vault Administrator", can be used, depending on your application. [See Microsoft Docs for more information about Azure built-in roles for Key Vault](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
-## Configure the component
+### Configure the component
{{< tabs "Self-Hosted" "Kubernetes">}}
{{% codetab %}}
-To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
+#### Using a client secret
+
+To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory. Use the following template, filling in [the Azure AD application you created]({{< ref authenticating-azure.md >}}):
```yaml
apiVersion: dapr.io/v1alpha1
@@ -138,7 +145,9 @@ spec:
value : "[your_client_secret]"
```
-If you want to use a **certificate** saved on the local disk, instead, use this template, filling in with details of the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
+#### Using a certificate
+
+If you want to use a **certificate** saved on the local disk instead, use the following template. Fill in the details of [the Azure AD application you created]({{< ref authenticating-azure.md >}}):
```yaml
apiVersion: dapr.io/v1alpha1
@@ -161,9 +170,9 @@ spec:
{{% /codetab %}}
{{% codetab %}}
-In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. You will need the details of the Azure AD application that was created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
+In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. Before you start, you need the details of [the Azure AD application you created]({{< ref authenticating-azure.md >}}).
-To use a **client secret**:
+#### Using a client secret
1. Create a Kubernetes secret using the following command:
@@ -176,7 +185,7 @@ To use a **client secret**:
- `[your_k8s_secret_key]` is secret key in the Kubernetes secret store
-2. Create an `azurekeyvault.yaml` component file.
+1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
@@ -203,13 +212,13 @@ To use a **client secret**:
secretStore: kubernetes
```
-3. Apply the `azurekeyvault.yaml` component:
+1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
-To use a **certificate**:
+#### Using a certificate
1. Create a Kubernetes secret using the following command:
@@ -221,7 +230,7 @@ To use a **certificate**:
- `[your_k8s_secret_name]` is secret name in the Kubernetes secret store
- `[your_k8s_secret_key]` is secret key in the Kubernetes secret store
-2. Create an `azurekeyvault.yaml` component file.
+1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
@@ -248,16 +257,16 @@ To use a **certificate**:
secretStore: kubernetes
```
-3. Apply the `azurekeyvault.yaml` component:
+1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
-To use **Azure managed identity**:
+#### Using Azure managed identity
1. Ensure your AKS cluster has managed identity enabled and follow the [guide for using managed identities](https://docs.microsoft.com/azure/aks/use-managed-identity).
-2. Create an `azurekeyvault.yaml` component file.
+1. Create an `azurekeyvault.yaml` component file.
The component yaml refers to a particular KeyVault name. The managed identity you will use in a later step must be given read access to this particular KeyVault instance.
@@ -274,12 +283,23 @@ To use **Azure managed identity**:
value: "[your_keyvault_name]"
```
-3. Apply the `azurekeyvault.yaml` component:
+1. Apply the `azurekeyvault.yaml` component:
```bash
kubectl apply -f azurekeyvault.yaml
```
-4. Create and use a managed identity / pod identity by following [this guide](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity). After creating an AKS pod identity, [give this identity read permissions on your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-cli#assign-the-access-policy), and finally in your application deployment inject the pod identity via a label annotation:
+1. Create and assign a managed identity at the pod-level via either:
+ - [Azure AD workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) (preferred method)
+ - [Azure AD pod identity](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity)
+
+
+   **Important**: While both Azure AD pod identity and workload identity are in preview, Azure AD workload identity is planned for general availability (stable state).
+
+1. After creating a workload identity, give it `read` permissions:
+ - [On your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-cli#assign-the-access-policy)
+ - In your application deployment. Inject the pod identity both:
+ - Via a label annotation
+ - By specifying the Kubernetes service account associated with the desired workload identity
```yaml
apiVersion: v1
@@ -290,6 +310,12 @@ To use **Azure managed identity**:
aadpodidbinding: $POD_IDENTITY_NAME
```
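+
+For reference, a minimal sketch of the workload identity wiring described above; the names and client ID are placeholders, and the [Azure AD workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) documentation remains the authoritative source:
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: myapp-sa
+  annotations:
+    # Client ID of the user-assigned managed identity federated with this service account
+    azure.workload.identity/client-id: "[your_managed_identity_client_id]"
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: myapp
+  labels:
+    # Opts the pod into the workload identity webhook
+    azure.workload.identity/use: "true"
+spec:
+  serviceAccountName: myapp-sa
+  containers:
+  - name: myapp
+    image: myregistry/myapp:latest
+```
+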
+#### Using Azure managed identity directly vs. via Azure AD workload identity
+
+When using **managed identity directly**, you can have multiple identities associated with an app, requiring `azureClientId` to specify which identity should be used.
+
+However, when using **managed identity via Azure AD workload identity**, `azureClientId` is not necessary and has no effect. The Azure identity to be used is inferred from the service account tied to an Azure identity via the Azure federated identity.
+
{{% /codetab %}}
{{< /tabs >}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md
index 5a6167d30..e0f6be7f3 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-cockroachdb.md
@@ -11,6 +11,7 @@ aliases:
Create a file called `cockroachdb.yaml`, paste the following and replace the `` value with your connection string. The connection string for CockroachDB follow the same standard for PostgreSQL connection string. For example, `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`. See the CockroachDB [documentation on database connections](https://www.cockroachlabs.com/docs/stable/connect-to-the-database.html) for information on how to define a connection string.
+If you also want to configure CockroachDB to store actors, add the `actorStateStore` option as in the example below.
```yaml
apiVersion: dapr.io/v1alpha1
@@ -21,16 +22,44 @@ spec:
type: state.cockroachdb
version: v1
metadata:
+ # Connection string
- name: connectionString
value: ""
+ # Timeout for database operations, in seconds (optional)
+ #- name: timeoutInSeconds
+ # value: 20
+ # Name of the table where to store the state (optional)
+ #- name: tableName
+ # value: "state"
+ # Name of the table where to store metadata used by Dapr (optional)
+ #- name: metadataTableName
+ # value: "dapr_metadata"
+ # Cleanup interval in seconds, to remove expired rows (optional)
+ #- name: cleanupIntervalInSeconds
+ # value: 3600
+ # Max idle time for connections before they're closed (optional)
+ #- name: connectionMaxIdleTime
+ # value: 0
+ # Uncomment this if you wish to use CockroachDB as a state store for actors (optional)
+ #- name: actorStateStore
+ # value: "true"
```
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| connectionString | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
-| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
+| `connectionString` | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
+| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20` | `30`
+| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. Can optionally have the schema name as prefix, such as `public.state` | `"state"`, `"public.state"`
+| `metadataTableName` | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. Can optionally have the schema name as prefix, such as `public.dapr_metadata` | `"dapr_metadata"`, `"public.dapr_metadata"`
+| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
+| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
+| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
## Setup CockroachDB
@@ -62,6 +91,19 @@ The easiest way to install CockroachDB on Kubernetes is by using the [CockroachD
{{% /tabs %}}
+## Advanced
+
+### TTLs and cleanups
+
+This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
+
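+For example, a TTL can be set per record when saving state through the Dapr state API. A sketch, where the store name `cockroachdb-store` and the default Dapr HTTP port are illustrative:
+
+```bash
+curl -X POST http://localhost:3500/v1.0/state/cockroachdb-store \
+  -H "Content-Type: application/json" \
+  -d '[
+        {
+          "key": "order-1",
+          "value": { "status": "pending" },
+          "metadata": { "ttlInSeconds": "120" }
+        }
+      ]'
+```
+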
+Because CockroachDB doesn't have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
+
+You can set the interval for the deletion of expired records with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
+
+- Longer intervals require less frequent scans for expired rows, but mean that expired records can remain in the database for longer, potentially using more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value - for example, `300` (300 seconds, or 5 minutes).
+- If you do not plan to use TTLs with Dapr and the CockroachDB state store, you should consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database.
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md
index 9c489bbf1..6062b05e2 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md
@@ -21,30 +21,32 @@ spec:
type: state.gcp.firestore
version: v1
metadata:
- - name: type
- value: # Required. Example: "serviceaccount"
- name: project_id
value: # Required.
+ - name: endpoint # Optional.
+ value: "http://localhost:8432"
- name: private_key_id
- value: # Required.
+ value: # Optional.
- name: private_key
- value: # Required.
+    value: # Optional, but required if `private_key_id` is specified.
- name: client_email
- value: # Required.
+    value: # Optional, but required if `private_key_id` is specified.
- name: client_id
- value: # Required.
+    value: # Optional, but required if `private_key_id` is specified.
- name: auth_uri
- value: # Required.
+ value: # Optional.
- name: token_uri
- value: # Required.
+ value: # Optional.
- name: auth_provider_x509_cert_url
- value: # Required.
+ value: # Optional.
- name: client_x509_cert_url
- value: # Required.
+ value: # Optional.
- name: entity_kind
value: # Optional. default: "DaprState"
- name: noindex
value: # Optional. default: "false"
+ - name: type
+ value: # Deprecated.
```
{{% alert title="Warning" color="warning" %}}
@@ -55,17 +57,23 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| type | Y | The credentials type | `"serviceaccount"`
| project_id | Y | The ID of the GCP project to use | `"project-id"`
-| private_key_id | Y | The ID of the prvate key to use | `"private-key-id"`
-| client_email | Y | The email address for the client | `"eample@example.com"`
-| client_id | Y | The client id value to use for authentication | `"client-id"`
-| auth_uri | Y | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"`
-| token_uri | Y | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"`
-| auth_provider_x509_cert_url | Y | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
-| client_x509_cert_url | Y | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
+| endpoint | N | GCP endpoint for the component to use. Only used for local development, for example with the [GCP Datastore Emulator](https://cloud.google.com/datastore/docs/tools/datastore-emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"localhost:8432"`
+| private_key_id | N | The ID of the private key to use | `"private-key-id"`
+| private_key | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`
+| client_email | N | The email address for the client | `"example@example.com"`
+| client_id | N | The client id value to use for authentication | `"client-id"`
+| auth_uri | N | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"`
+| token_uri | N | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"`
+| auth_provider_x509_cert_url | N | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
+| client_x509_cert_url | N | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
| entity_kind | N | The entity name in Filestore. Defaults to `"DaprState"` | `"DaprState"`
| noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to `"false"` | `"true"`
+| type | N | **DEPRECATED** The credentials type | `"serviceaccount"`
+
+## GCP Credentials
+
+Since the GCP Firestore component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
## Setup GCP Firestore
@@ -74,7 +82,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
{{% codetab %}}
You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator).
-You can then interact with the server using `localhost:8081`.
+You can then interact with the server using `http://localhost:8432`.
{{% /codetab %}}
{{% codetab %}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
index 007e7b6ad..3237b1092 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
@@ -34,7 +34,7 @@ spec:
value: # Optional
- name: maxRetryBackoff
value: # Optional
- - name: failover
+ - name: failover
value: # Optional
- name: sentinelMasterName
value: # Optional
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md
index f7de752d2..86aa92d91 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-sqlserver.md
@@ -33,6 +33,10 @@ spec:
value: # Optional. defaults to "dbo"
- name: indexedProperties
value: # Optional. List of IndexedProperties.
+ - name: metadataTableName # Optional. Name of the table where to store metadata used by Dapr
+ value: "dapr_metadata"
+ - name: cleanupIntervalInSeconds # Optional. Cleanup interval in seconds, to remove expired rows
+ value: 300
```
@@ -58,6 +62,8 @@ If you wish to use SQL server as an [actor state store]({{< ref "state_api.md#co
| schema | N | The schema to use. Defaults to `"dbo"` | `"dapr"`,`"dbo"`
| indexedProperties | N | List of IndexedProperties. | `'[{"column": "transactionid", "property": "id", "type": "int"}, {"column": "customerid", "property": "customer", "type": "nvarchar(100)"}]'`
| actorStateStore | N | Indicates that Dapr should configure this component for the actor state store ([more information]({{< ref "state_api.md#configuring-state-store-for-actors" >}})). | `"true"`
+| metadataTableName | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. | `"dapr_metadata"`
+| cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
## Create Azure SQL instance
@@ -80,6 +86,23 @@ When connecting with a dedicated user (not `sa`), these authorizations are requi
- `CREATE TABLE`
- `CREATE TYPE`
+### TTLs and cleanups
+
+This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
+
+Because SQL Server doesn't have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
+
+You can set the interval for the deletion of expired records with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
+
+- Longer intervals require less frequent scans for expired rows, but mean that expired records can remain in the database for longer, potentially using more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value - for example, `300` (300 seconds, or 5 minutes).
+- If you do not plan to use TTLs with Dapr and the SQL Server state store, you should consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database.
+
+The state store does not have an index on the `ExpireDate` column, which means that each cleanup operation must perform a full table scan. If you intend to write a large number of records that use TTLs to the table, you should consider creating an index on the `ExpireDate` column. An index makes queries faster, but uses more storage space and slightly slows down writes.
+
+```sql
+CREATE CLUSTERED INDEX expiredate_idx ON state(ExpireDate ASC)
+```
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
diff --git a/daprdocs/data/components/bindings/aws.yaml b/daprdocs/data/components/bindings/aws.yaml
index 50b5a3607..e75089445 100644
--- a/daprdocs/data/components/bindings/aws.yaml
+++ b/daprdocs/data/components/bindings/aws.yaml
@@ -8,7 +8,7 @@
output: true
- component: AWS S3
link: s3
- state: Alpha
+ state: Stable
version: v1
since: "1.0"
features:
diff --git a/daprdocs/data/components/pubsub/gcp.yaml b/daprdocs/data/components/pubsub/gcp.yaml
index 815ced19b..ce654f136 100644
--- a/daprdocs/data/components/pubsub/gcp.yaml
+++ b/daprdocs/data/components/pubsub/gcp.yaml
@@ -1,6 +1,6 @@
- component: GCP Pub/Sub
link: setup-gcp-pubsub
- state: Alpha
+ state: Stable
version: v1
since: "1.0"
features:
diff --git a/daprdocs/data/components/state_stores/aws.yaml b/daprdocs/data/components/state_stores/aws.yaml
index e8af47bc1..1d5be544f 100644
--- a/daprdocs/data/components/state_stores/aws.yaml
+++ b/daprdocs/data/components/state_stores/aws.yaml
@@ -5,7 +5,7 @@
since: "1.10"
features:
crud: true
- transactions: false
+ transactions: true
etag: true
ttl: true
query: false
diff --git a/daprdocs/data/components/state_stores/gcp.yaml b/daprdocs/data/components/state_stores/gcp.yaml
index bd8fdc9bd..c129ebbf7 100644
--- a/daprdocs/data/components/state_stores/gcp.yaml
+++ b/daprdocs/data/components/state_stores/gcp.yaml
@@ -1,6 +1,6 @@
- component: GCP Firestore
link: setup-firestore
- state: Alpha
+ state: Stable
version: v1
since: "1.0"
features:
diff --git a/daprdocs/data/components/state_stores/generic.yaml b/daprdocs/data/components/state_stores/generic.yaml
index 32cce46b7..621f91fa5 100644
--- a/daprdocs/data/components/state_stores/generic.yaml
+++ b/daprdocs/data/components/state_stores/generic.yaml
@@ -29,7 +29,7 @@
crud: true
transactions: true
etag: true
- ttl: false
+ ttl: true
query: true
- component: Couchbase
link: setup-couchbase
diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html
index 1ac5196d0..be35d4e00 100644
--- a/daprdocs/layouts/shortcodes/dapr-latest-version.html
+++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html
@@ -1 +1 @@
-{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.0{{ else if .Get "cli" }}1.10.0{{ else }}1.9.5{{ end -}}
+{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.5{{ else if .Get "cli" }}1.10.0{{ else }}1.10.5{{ end -}}
diff --git a/daprdocs/static/images/actors-calling-method.png b/daprdocs/static/images/actors-calling-method.png
new file mode 100644
index 000000000..014dd5d83
Binary files /dev/null and b/daprdocs/static/images/actors-calling-method.png differ
diff --git a/daprdocs/static/images/building-block-pub-sub-example.png b/daprdocs/static/images/building-block-pub-sub-example.png
deleted file mode 100644
index 4ffe87ce4..000000000
Binary files a/daprdocs/static/images/building-block-pub-sub-example.png and /dev/null differ
diff --git a/daprdocs/static/images/concepts-components.png b/daprdocs/static/images/concepts-components.png
index 6c2977ddc..b9ab8c5fb 100644
Binary files a/daprdocs/static/images/concepts-components.png and b/daprdocs/static/images/concepts-components.png differ
diff --git a/daprdocs/static/images/datadog-traces.png b/daprdocs/static/images/datadog-traces.png
new file mode 100644
index 000000000..3db3461e7
Binary files /dev/null and b/daprdocs/static/images/datadog-traces.png differ
diff --git a/daprdocs/static/images/grafana-prometheus-dapr-server-url.png b/daprdocs/static/images/grafana-prometheus-dapr-server-url.png
index 1098b526f..2a65dd5a1 100644
Binary files a/daprdocs/static/images/grafana-prometheus-dapr-server-url.png and b/daprdocs/static/images/grafana-prometheus-dapr-server-url.png differ
diff --git a/daprdocs/static/images/pubsub-howto-overview.png b/daprdocs/static/images/pubsub-howto-overview.png
new file mode 100644
index 000000000..cb0cf1a29
Binary files /dev/null and b/daprdocs/static/images/pubsub-howto-overview.png differ
diff --git a/daprdocs/static/images/skip-tls-verify.png b/daprdocs/static/images/skip-tls-verify.png
new file mode 100644
index 000000000..2a65dd5a1
Binary files /dev/null and b/daprdocs/static/images/skip-tls-verify.png differ
diff --git a/daprdocs/static/images/workflow-trace-spans-zipkin.png b/daprdocs/static/images/workflow-trace-spans-zipkin.png
new file mode 100644
index 000000000..4b6e5daa2
Binary files /dev/null and b/daprdocs/static/images/workflow-trace-spans-zipkin.png differ
diff --git a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip b/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip
index bd76d31d6..21bc81f93 100644
Binary files a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip and b/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip differ
diff --git a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip
index 47a8aa326..6ef5a3b87 100644
Binary files a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip and b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip differ
diff --git a/sdkdocs/dotnet b/sdkdocs/dotnet
index 9dcae7b0e..f42b690f4 160000
--- a/sdkdocs/dotnet
+++ b/sdkdocs/dotnet
@@ -1 +1 @@
-Subproject commit 9dcae7b0e771d7328559bef1dd65df4c1a54b793
+Subproject commit f42b690f4c67e6bb4209932f660c46a96d0b0457
diff --git a/sdkdocs/pluggable-components/go b/sdkdocs/pluggable-components/go
new file mode 160000
index 000000000..dbb1a9526
--- /dev/null
+++ b/sdkdocs/pluggable-components/go
@@ -0,0 +1 @@
+Subproject commit dbb1a9526875e8df6af1823e09dae11216221444