diff --git a/daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md b/daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md
index 765dddc77..d0e72b6d9 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/observability/tracing.md
@@ -29,7 +29,7 @@ Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W
-Dapr uses [probabilistic sampling](https://opencensus.io/tracing/sampling/probabilistic/) as defined by OpenCensus. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The deafault sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
+Dapr uses [probabilistic sampling](https://opencensus.io/tracing/sampling/probabilistic/) as defined by OpenCensus. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
-To change the default tracing behavior, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled):
+To change the default tracing behavior, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled), and sends traces using the Zipkin protocol to the Zipkin server at `http://zipkin.default.svc.cluster.local`:
```yaml
apiVersion: dapr.io/v1alpha1
@@ -40,30 +40,14 @@ metadata:
spec:
tracing:
samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
-Similarly, changing `samplingRate` to 0 will disable tracing altogether.
+Changing `samplingRate` to 0 will disable tracing altogether.
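+
+For example, in Kubernetes mode this configuration object can be applied with `kubectl` (a sketch, assuming it is saved as `tracing.yaml`):
+
+```bash
+kubectl apply -f tracing.yaml
+```
+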
See the [References](#references) section for more details on how to configure tracing on local environment and Kubernetes environment.
-Dapr supports pluggable exporters, defined by configuration files (in self hosted mode) or a Kubernetes custom resource object (in Kubernetes mode). For example, the following manifest defines a Zipkin exporter:
-
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: zipkin
- namespace: default
-spec:
- type: exporters.zipkin
- version: v1
- metadata:
- - name: enabled
- value: "true"
- - name: exporterAddress
- value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
-```
-
## References
- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md
index 054a34bbd..2023cf6ba 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/howto-publish-subscribe.md
@@ -308,7 +308,7 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status":
{{< /tabs >}}
-Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope.
+Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
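+
+For example, a minimal sketch of publishing with an explicit `Content-Type` header through the Dapr HTTP API (the `pubsub` component name, `deathStarStatus` topic, and port `3500` are assumed here for illustration):
+
+```bash
+curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus \
+  -H "Content-Type: application/json" \
+  -d '{"status": "completed"}'
+```
+
+In this case Dapr sets `datacontenttype` to `application/json` in the generated envelope.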
## Step 4: ACK-ing a message
diff --git a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md
index 78f98e01f..b297fe3e5 100644
--- a/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md
+++ b/daprdocs/content/en/developing-applications/building-blocks/pubsub/pubsub-overview.md
@@ -33,7 +33,7 @@ When multiple instances of the same application ID subscribe to a topic, Dapr wi
### Cloud events
-Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope.
+Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
The following fields from the Cloud Events spec are implemented with Dapr:
- `id`
@@ -65,4 +65,4 @@ Limit which topics applications are able to publish/subscibe to in order to limi
## Next steps
- Read the How-To guide on [publishing and subscribing]({{< ref howto-publish-subscribe.md >}})
-- Learn about [Pub/Sub scopes]({{< ref pubsub-scopes.md >}})
\ No newline at end of file
+- Learn about [Pub/Sub scopes]({{< ref pubsub-scopes.md >}})
diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md
index 159f82042..cc339776e 100644
--- a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md
+++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/_index.md
@@ -20,14 +20,14 @@ Every binding has its own unique set of properties. Click the name link to see t
| [Kubernetes Events]({{< ref "kubernetes-binding.md" >}}) | ✅ | | Experimental |
| [MQTT]({{< ref mqtt.md >}}) | ✅ | ✅ | Experimental |
| [PostgreSql]({{< ref postgres.md >}}) | | ✅ | Experimental |
+| [Postmark]({{< ref postmark.md >}}) | | ✅ | Experimental |
| [RabbitMQ]({{< ref rabbitmq.md >}}) | ✅ | ✅ | Experimental |
| [Redis]({{< ref redis.md >}}) | | ✅ | Experimental |
| [Twilio]({{< ref twilio.md >}}) | | ✅ | Experimental |
| [Twitter]({{< ref twitter.md >}}) | ✅ | ✅ | Experimental |
| [SendGrid]({{< ref sendgrid.md >}}) | | ✅ | Experimental |
-
-### Amazon Web Service (AWS)
+### Amazon Web Services (AWS)
| Name | Input Binding | Output Binding | Status |
|------|:----------------:|:-----------------:|--------|
@@ -37,7 +37,6 @@ Every binding has its own unique set of properties. Click the name link to see t
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Experimental |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Experimental |
-
### Google Cloud Platform (GCP)
| Name | Input Binding | Output Binding | Status |
@@ -55,4 +54,4 @@ Every binding has its own unique set of properties. Click the name link to see t
| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Experimental |
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Experimental |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Experimental |
-| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Experimental |
\ No newline at end of file
+| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Experimental |
diff --git a/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/postmark.md b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/postmark.md
new file mode 100644
index 000000000..1c01d9d7b
--- /dev/null
+++ b/daprdocs/content/en/operations/components/setup-bindings/supported-bindings/postmark.md
@@ -0,0 +1,69 @@
+---
+type: docs
+title: "Postmark binding spec"
+linkTitle: "Postmark"
+description: "Detailed documentation on the Postmark binding component"
+---
+
+## Setup Dapr component
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: postmark
+ namespace: default
+spec:
+ type: bindings.postmark
+ metadata:
+ - name: accountToken
+ value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
+ - name: serverToken
+ value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
+ - name: emailFrom
+ value: "testapp@dapr.io" # optional
+ - name: emailTo
+ value: "dave@dapr.io" # optional
+ - name: subject
+ value: "Hello!" # optional
+```
+
+- `accountToken` is your Postmark account token; this should be considered a secret value. Required.
+- `serverToken` is your Postmark server token; this should be considered a secret value. Required.
+- `emailFrom` If set, this specifies the 'from' email address of the email message. Optional field, see below.
+- `emailTo` If set, this specifies the 'to' email address of the email message. Optional field, see below.
+- `emailCc` If set, this specifies the 'cc' email address of the email message. Optional field, see below.
+- `emailBcc` If set, this specifies the 'bcc' email address of the email message. Optional field, see below.
+- `subject` If set, this specifies the subject of the email message. Optional field, see below.
+
+You can specify any of the optional metadata properties on the output binding request as well (e.g. `emailFrom`, `emailTo`, `subject`).
+
+Combined, the optional metadata properties in the component configuration and the request payload should at least contain the `emailFrom`, `emailTo` and `subject` fields, as these are required to send an email successfully.
+
+Example request payload:
+
+```json
+{
+ "operation": "create",
+ "metadata": {
+ "emailTo": "changeme@example.net",
+ "subject": "An email from Dapr Postmark binding"
+ },
+ "data": "
Testing Dapr Bindings
This is a test. Bye!"
+}
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Output Binding Supported Operations
+
+- `create`
+
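+As a sketch, the output binding above can be invoked through the Dapr HTTP API, assuming the component is named `postmark` and the Dapr HTTP port is `3500`:
+
+```bash
+curl -X POST http://localhost:3500/v1.0/bindings/postmark \
+  -H "Content-Type: application/json" \
+  -d '{ "operation": "create", "metadata": { "emailTo": "changeme@example.net", "subject": "An email from Dapr Postmark binding" }, "data": "This is a test. Bye!" }'
+```
+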
+## Related links
+
+- [Bindings building block]({{< ref bindings >}})
+- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
+- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
+- [Bindings API reference]({{< ref bindings_api.md >}})
diff --git a/daprdocs/content/en/operations/monitoring/grafana.md b/daprdocs/content/en/operations/monitoring/grafana.md
index 7cbc62176..ff2e46853 100644
--- a/daprdocs/content/en/operations/monitoring/grafana.md
+++ b/daprdocs/content/en/operations/monitoring/grafana.md
@@ -1,11 +1,35 @@
---
type: docs
title: "How-To: Observe metrics with Grafana"
-linkTitle: "Grafana"
+linkTitle: "Metrics dashboards with Grafana"
weight: 5000
description: "How to view Dapr metrics in a Grafana dashboard."
---
+## Available dashboards
+
+{{< tabs "System Service" "Sidecars" "Actors" >}}
+
+{{% codetab %}}
+The `grafana-system-services-dashboard.json` template shows the status of the Dapr system services (dapr-operator, dapr-sidecar-injector, dapr-sentry, and dapr-placement):
+
+
+{{% /codetab %}}
+
+{{% codetab %}}
+The `grafana-sidecar-dashboard.json` template shows Dapr sidecar status, including sidecar health/resources, throughput/latency of HTTP and gRPC, Actor, mTLS, etc.:
+
+
+{{% /codetab %}}
+
+{{% codetab %}}
+The `grafana-actor-dashboard.json` template shows Dapr sidecar status, actor invocation throughput/latency, timer/reminder triggers, and turn-based concurrency:
+
+
+{{% /codetab %}}
+
+{{< /tabs >}}
+
## Pre-requisites
- [Setup Prometheus]({{< ref prometheus.md >}})
@@ -14,40 +38,36 @@ description: "How to view Dapr metrics in a Grafana dashboard."
### Install Grafana
-1. Install Grafana
-
- Add the Grafana Helm repo:
+1. Add the Grafana Helm repo:
```bash
helm repo add grafana https://grafana.github.io/helm-charts
```
- Install the chart:
+1. Install the chart:
```bash
helm install grafana grafana/grafana -n dapr-monitoring
```
- If you are Minikube user or want to disable persistent volume for development purpose, you can disable it by using the following command:
+ {{% alert title="Note" color="primary" %}}
+   If you are a Minikube user or want to disable the persistent volume for development purposes, you can disable it by using the following command instead:
```bash
helm install grafana grafana/grafana -n dapr-monitoring --set persistence.enabled=false
```
+ {{% /alert %}}
+
-2. Retrieve the admin password for Grafana login
+1. Retrieve the admin password for Grafana login:
```bash
kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
- cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%
```
- {{% alert title="Note" color="info" %}}
- Remove the `%` character from the password that this command returns. For example, the admin password is `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1`.
- {{% /alert %}}
+ You will get a password similar to `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%`. Remove the `%` character from the password to get `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1` as the admin password.
-3. Validation
-
- Ensure Grafana is running in your cluster (see last line below)
+1. Validate that Grafana is running in your cluster:
```bash
kubectl get pods -n dapr-monitoring
@@ -66,31 +86,37 @@ description: "How to view Dapr metrics in a Grafana dashboard."
### Configure Prometheus as data source
First you need to connect Prometheus as a data source to Grafana.
-1. Port-forward to svc/grafana
+1. Port-forward to svc/grafana:
```bash
- $ kubectl port-forward svc/grafana 8080:80 -n dapr-monitoring
+ kubectl port-forward svc/grafana 8080:80 -n dapr-monitoring
+
Forwarding from 127.0.0.1:8080 -> 3000
Forwarding from [::1]:8080 -> 3000
Handling connection for 8080
Handling connection for 8080
```
-2. Browse `http://localhost:8080`
+1. Open a browser to `http://localhost:8080`
-3. Login with admin and password
+1. Log in to Grafana:
+   - Username = `admin`
+   - Password = the password retrieved above
-4. Click Configuration Settings -> Data Sources
+1. Select `Configuration` and `Data Sources`
- 
+
-5. Add Prometheus as a data source.
- 
+1. Add Prometheus as a data source.
-6. Enter Promethesus server address in your cluster.
+
- You can get the Prometheus server address by running the following command.
+1. Get your Prometheus HTTP URL
+
+   The Prometheus HTTP URL follows the format `http://<prometheus-service-name>.<namespace>`
+
+ Start by getting the Prometheus server endpoint by running the following command:
```bash
kubectl get svc -n dapr-monitoring
@@ -108,40 +134,43 @@ First you need to connect Prometheus as a data source to Grafana.
```
- In this Howto, the server is `dapr-prom-prometheus-server`.
+ In this guide the server name is `dapr-prom-prometheus-server` and the namespace is `dapr-monitoring`, so the HTTP URL will be `http://dapr-prom-prometheus-server.dapr-monitoring`.
- You now need to set up Prometheus data source with the following settings:
+1. Fill in the following settings:
- Name: `Dapr`
- HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring`
- Default: On
-
- 
-7. Click `Save & Test` button to verify that the connection succeeded.
+
+
+1. Click the `Save & Test` button to verify that the connection succeeded.
## Import dashboards in Grafana
-Next you import the Dapr dashboards into Grafana.
-In the upper left, click the "+" then "Import".
+1. In the upper left corner of the Grafana home screen, click the "+" option, then "Import".
-You can now import built-in [Grafana dashboard templates](https://github.com/dapr/dapr/tree/master/grafana).
+ You can now import [Grafana dashboard templates](https://github.com/dapr/dapr/tree/master/grafana) from [release assets](https://github.com/dapr/dapr/releases) for your Dapr version:
-The Grafana dashboards are part of [release assets](https://github.com/dapr/dapr/releases) with this URL https://github.com/dapr/dapr/releases/
-You can find `grafana-actor-dashboard.json`, `grafana-sidecar-dashboard.json` and `grafana-system-services-dashboard.json` in release assets location.
+
-
+1. Find the dashboard that you imported and enjoy!
-8. Find the dashboard that you imported and enjoy!
+
-
+ {{% alert title="Tip" color="primary" %}}
+   Hover your mouse over the `i` in the corner to see the description of each chart:
+
+
+ {{% /alert %}}
## References
-* [Set up Prometheus and Grafana]({{< ref grafana.md >}})
+* [Dapr Observability]({{}})
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus)
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
+* [Supported Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md)
## Example
-
\ No newline at end of file
+
diff --git a/daprdocs/content/en/operations/monitoring/jaeger.md b/daprdocs/content/en/operations/monitoring/jaeger.md
index bafd88dea..092bc8514 100644
--- a/daprdocs/content/en/operations/monitoring/jaeger.md
+++ b/daprdocs/content/en/operations/monitoring/jaeger.md
@@ -28,30 +28,29 @@ docker run -d --name jaeger \
Next, create the following YAML files locally:
-* **jaeger.yaml**: Note that because we are using the Zipkin protocol to talk to Jaeger,
-the type of the exporter in the YAML file below is `exporter.zipkin`,
-while the `exporterAddress` is the address of the Jaeger instance.
+* **config.yaml**: Note that because we are using the Zipkin protocol
+to talk to Jaeger, we specify the `zipkin` section of the tracing
+configuration and set the `endpointAddress` to the address of the Jaeger
+instance.
```yaml
apiVersion: dapr.io/v1alpha1
-kind: Component
+kind: Configuration
metadata:
- name: zipkin
+ name: tracing
+ namespace: default
spec:
- type: exporters.zipkin
- metadata:
- - name: enabled
- value: "true"
- - name: exporterAddress
- value: "http://localhost:9412/api/v2/spans"
+ tracing:
+ samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://localhost:9412/api/v2/spans"
```
To launch the application referring to the new YAML file, you can use
-`--components-path`. Assuming that, the **jaeger.yaml** file is in the
-current directory, you can use
+the `--config` option:
```bash
-dapr run --app-id mynode --app-port 3000 node app.js --components-path .
+dapr run --app-id mynode --app-port 3000 node app.js --config config.yaml
```
### Viewing Traces
@@ -92,26 +91,7 @@ kubectl apply -f jaeger-operator.yaml
kubectl wait deploy --selector app.kubernetes.io/name=jaeger --for=condition=available
```
-Next, create the following YAML files locally:
-
-* **jaeger.yaml**: Note that because we are using the Zipkin protocol to talk to Jaeger,
-the type of the exporter in the YAML file below is `exporter.zipkin`,
-while the `exporterAddress` is the address of the Jaeger instance.
-
-
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: zipkin
-spec:
- type: exporters.zipkin
- metadata:
- - name: enabled
- value: "true"
- - name: exporterAddress
- value: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans"
-```
+Next, create the following YAML file locally:
* **tracing.yaml**
@@ -124,13 +104,14 @@ metadata:
spec:
tracing:
samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans"
```
-Finally, deploy the the Dapr component and configuration files:
+Finally, deploy the Dapr configuration file:
```bash
kubectl apply -f tracing.yaml
-kubectl apply -f jaeger.yaml
```
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:
diff --git a/daprdocs/content/en/operations/monitoring/newrelic.md b/daprdocs/content/en/operations/monitoring/newrelic.md
index d6e67c4f7..4d44295d2 100644
--- a/daprdocs/content/en/operations/monitoring/newrelic.md
+++ b/daprdocs/content/en/operations/monitoring/newrelic.md
@@ -10,25 +10,23 @@ description: "Set-up New Relic for Dapr observability"
- Perpetually [free New Relic account](https://newrelic.com/signup), 100 GB/month of free data ingest, 1 free full access user, unlimited free basic users
-## Configure Zipkin Exporter
+## Configure Dapr tracing
-Dapr natively captures metrics and traces that can be send directly to New Relic. The easiest way to export these is by providing a Zipkin exporter configured to send the traces to [New Relic's Trace API](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/trace-api/report-zipkin-format-traces-trace-api#existing-zipkin).
+Dapr natively captures metrics and traces that can be sent directly to New Relic. The easiest way to export these is by configuring Dapr to send the traces to [New Relic's Trace API](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/trace-api/report-zipkin-format-traces-trace-api#existing-zipkin) using the Zipkin trace format.
In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key).
```yaml
apiVersion: dapr.io/v1alpha1
-kind: Component
+kind: Configuration
metadata:
- name: zipkin
+ name: appconfig
namespace: default
spec:
- type: exporters.zipkin
- metadata:
- - name: enabled
- value: "true"
- - name: exporterAddress
- value: "https://trace-api.newrelic.com/trace/v1?Api-Key=&Data-Format=zipkin&Data-Format-Version=2"
+ tracing:
+ samplingRate: "1"
+ zipkin:
+ endpointAddress: "https://trace-api.newrelic.com/trace/v1?Api-Key=&Data-Format=zipkin&Data-Format-Version=2"
```
### Viewing Traces
@@ -114,4 +112,4 @@ All the data that is collected from Dapr, Kubernetes or any services that run on
* [New Relic Metric API](https://docs.newrelic.com/docs/telemetry-data-platform/get-data/apis/introduction-metric-api)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys)
* [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
-* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)
\ No newline at end of file
+* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)
diff --git a/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md b/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md
index 9645bd474..eb8136a80 100644
--- a/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md
+++ b/daprdocs/content/en/operations/monitoring/open-telemetry-collector.md
@@ -6,7 +6,7 @@ weight: 1000
description: "How to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector."
---
-Dapr can integrate with [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) using the OpenCensus API. This guide walks through an example to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector.
+Dapr can integrate with [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) using the Zipkin API. This guide walks through an example to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector.
## Requirements
@@ -29,15 +29,15 @@ export APP_INSIGHTS_KEY=
Next, install the OpenTelemetry Collector to your Kubernetes cluster to push events to your Application Insights instance
-1. Check out the file [open-telemetry-collector.yaml](/docs/open-telemetry-collector/open-telemetry-collector.yaml) and replace the `` placeholder with your `APP_INSIGHTS_KEY`.
+1. Check out the file [open-telemetry-collector.yaml](/docs/open-telemetry-collector/open-telemetry-collector.yaml) and replace the `` placeholder with your `APP_INSIGHTS_KEY`.
2. Apply the configuration with `kubectl apply -f open-telemetry-collector.yaml`.
Next, set up both a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.
-1. Create a collector-component.yaml file with this [content](/docs/open-telemetry-collector/collector-component.yaml)
+1. Create a collector-config.yaml file with this [content](/docs/open-telemetry-collector/collector-config.yaml)
-2. Apply the configuration with `kubectl apply -f collector-component.yaml`.
+2. Apply the configuration with `kubectl apply -f collector-config.yaml`.
### Deploy your app with tracing
diff --git a/daprdocs/content/en/operations/monitoring/zipkin.md b/daprdocs/content/en/operations/monitoring/zipkin.md
index 85caac02d..5939661e5 100644
--- a/daprdocs/content/en/operations/monitoring/zipkin.md
+++ b/daprdocs/content/en/operations/monitoring/zipkin.md
@@ -9,28 +9,9 @@ type: docs
## Configure self hosted mode
-For self hosted mode, on running `dapr init` the following YAML files are created by default and they are referenced by default on `dapr run` calls unless otherwise overridden.
+For self hosted mode, on running `dapr init`:
-1. The following file in `$HOME/dapr/components/zipkin.yaml` or `%USERPROFILE%\dapr\components\zipkin.yaml`:
-
-* zipkin.yaml
-
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: zipkin
- namespace: default
-spec:
- type: exporters.zipkin
- version: v1
- metadata:
- - name: enabled
- value: "true"
- - name: exporterAddress
- value: "http://localhost:9411/api/v2/spans"
-```
-2. The following file in `$HOME/dapr/config.yaml` or `%USERPROFILE%\dapr\config.yaml`:
+1. The following YAML file is created by default in `$HOME/dapr/config.yaml` (on Linux/Mac) or `%USERPROFILE%\dapr\config.yaml` (on Windows) and it is referenced by default on `dapr run` calls unless otherwise overridden:
* config.yaml
@@ -43,9 +24,11 @@ metadata:
spec:
tracing:
samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://localhost:9411/api/v2/spans"
```
-3. The [openzipkin/zipkin](https://hub.docker.com/r/openzipkin/zipkin/) docker container is launched on running `dapr init` or it can be launched with the following code.
+2. The [openzipkin/zipkin](https://hub.docker.com/r/openzipkin/zipkin/) Docker container is launched on running `dapr init`, or it can be launched with the following command.
Launch Zipkin using Docker:
@@ -53,7 +36,7 @@ Launch Zipkin using Docker:
docker run -d -p 9411:9411 openzipkin/zipkin
```
-4. The applications launched with `dapr run` will by default reference the config file in `$HOME/dapr/config.yaml` or `%USERPROFILE%\dapr\config.yaml` and can be overridden with the Dapr CLI using the `--config` param:
+3. The applications launched with `dapr run` will by default reference the config file in `$HOME/dapr/config.yaml` or `%USERPROFILE%\dapr\config.yaml` and can be overridden with the Dapr CLI using the `--config` param:
```bash
dapr run --app-id mynode --app-port 3000 node app.js
@@ -79,25 +62,7 @@ Create a Kubernetes service for the Zipkin pod:
kubectl expose deployment zipkin --type ClusterIP --port 9411
```
-Next, create the following YAML files locally:
-
-* zipkin.yaml component
-
-```yaml
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: zipkin
- namespace: default
-spec:
- type: exporters.zipkin
- version: v1
- metadata:
- - name: enabled
- value: "true"
- - name: exporterAddress
- value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
-```
+Next, create the following YAML file locally:
* tracing.yaml configuration
@@ -110,13 +75,14 @@ metadata:
spec:
tracing:
samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
-Finally, deploy the the Dapr component and configuration files:
+Now, deploy the Dapr configuration file:
```bash
kubectl apply -f tracing.yaml
-kubectl apply -f zipkin.yaml
```
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:
diff --git a/daprdocs/content/en/operations/troubleshooting/setup-tracing.md b/daprdocs/content/en/operations/troubleshooting/setup-tracing.md
index afdbc8eb7..468c819b4 100644
--- a/daprdocs/content/en/operations/troubleshooting/setup-tracing.md
+++ b/daprdocs/content/en/operations/troubleshooting/setup-tracing.md
@@ -6,8 +6,6 @@ weight: 3000
description: "Configure Dapr to send distributed tracing data"
---
-Dapr integrates with Open Census for telemetry and tracing.
-
It is recommended to run Dapr with tracing enabled for any production scenario.
-Since Dapr uses Open Census, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises.
+Since Dapr uses the Zipkin protocol for tracing, you can configure various backends for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises.
@@ -17,22 +15,18 @@ The `tracing` section under the `Configuration` spec contains the following prop
```yml
tracing:
- enabled: true
- exporterType: zipkin
- exporterAddress: ""
- expandParams: true
- includeBody: true
+  samplingRate: "1"
+  zipkin:
+    endpointAddress: "https://..."
```
-The following table lists the different properties.
+The following table lists the properties for tracing:
-| Property | Type | Description |
-|----------|------|-------------|
-| enabled | bool | Set tracing to be enabled or disabled
-| exporterType | string | Name of the Open Census exporter to use. For example: Zipkin, Azure Monitor, etc
-| exporterAddress | string | URL of the exporter
-| expandParams | bool | When true, expands parameters passed to HTTP endpoints
-| includeBody | bool | When true, includes the request body in the tracing event
+| Property | Type | Description |
+|--------------|--------|-------------|
+| `samplingRate` | string | Set the sampling rate for tracing; a value of 0 disables tracing. |
+| `zipkin.endpointAddress` | string | Set the Zipkin server address. |
## Zipkin in stand-alone mode
@@ -51,11 +45,9 @@ For Standalone mode, create a Dapr configuration file locally and reference it w
namespace: default
spec:
tracing:
- enabled: true
- exporterType: zipkin
- exporterAddress: "http://localhost:9411/api/v2/spans"
- expandParams: true
- includeBody: true
+ samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://localhost:9411/api/v2/spans"
```
2. Launch Zipkin using Docker:
@@ -99,11 +91,9 @@ metadata:
namespace: default
spec:
tracing:
- enabled: true
- exporterType: zipkin
- exporterAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
- expandParams: true
- includeBody: true
+ samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
Finally, deploy the Dapr configuration:
diff --git a/daprdocs/content/en/reference/api/error_codes.md b/daprdocs/content/en/reference/api/error_codes.md
index d5053b763..819268f63 100644
--- a/daprdocs/content/en/reference/api/error_codes.md
+++ b/daprdocs/content/en/reference/api/error_codes.md
@@ -46,3 +46,4 @@ Following table lists the error codes returned by Dapr runtime:
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured.
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found.
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready.
+| ERR_METADATA_GET | Error parsing the Metadata information.
diff --git a/daprdocs/content/en/reference/api/health_api.md b/daprdocs/content/en/reference/api/health_api.md
index 849fa3d0a..98c6fcb44 100644
--- a/daprdocs/content/en/reference/api/health_api.md
+++ b/daprdocs/content/en/reference/api/health_api.md
@@ -3,7 +3,7 @@ type: docs
title: "Health API reference"
linkTitle: "Health API"
description: "Detailed documentation on the health API"
-weight: 900
+weight: 700
---
Dapr provides health checking probes that can be used as readiness or liveness of Dapr.
diff --git a/daprdocs/content/en/reference/api/metadata_api.md b/daprdocs/content/en/reference/api/metadata_api.md
new file mode 100644
index 000000000..d77140f8b
--- /dev/null
+++ b/daprdocs/content/en/reference/api/metadata_api.md
@@ -0,0 +1,181 @@
+---
+type: docs
+title: "Metadata API reference"
+linkTitle: "Metadata API"
+description: "Detailed documentation on the Metadata API"
+weight: 800
+---
+
+Dapr has a metadata API that returns information about the sidecar, allowing runtime discoverability. The metadata endpoint returns, among other things, a list of the components loaded and the activated actors (if present).
+
+The Dapr metadata API also allows you to store additional information in the form of key-value pairs.
+
+Note: The Dapr metadata endpoint is, for instance, used by the Dapr CLI when running Dapr in standalone mode to store the PID of the process hosting the sidecar and the command used to run the application.
+
+## Get the Dapr sidecar information
+
+Gets the Dapr sidecar information provided by the Metadata Endpoint.
+
+### HTTP Request
+
+```http
+GET http://localhost:<daprPort>/v1.0/metadata
+```
+
+### URL Parameters
+
+Parameter | Description
+--------- | -----------
+daprPort | The Dapr port.
+
+### HTTP Response Codes
+
+Code | Description
+---- | -----------
+200 | Metadata information returned
+500 | Dapr could not return the metadata information
+
+### HTTP Response Body
+
+**Metadata API Response Object**
+
+Name | Type | Description
+---- | ---- | -----------
+id | string | Application ID
+actors | [Metadata API Response Registered Actor](#metadataapiresponseactor)[] | A json encoded array of Registered Actors metadata.
+extended.attributeName | string | List of custom attributes as key-value pairs, where key is the attribute name.
+components | [Metadata API Response Component](#metadataapiresponsecomponent)[] | A json encoded array of loaded components metadata.
+
+**Metadata API Response Registered Actor**
+
+Name | Type | Description
+---- | ---- | -----------
+type | string | The registered actor type.
+count | integer | Number of actors running.
+
+**Metadata API Response Component**
+
+Name | Type | Description
+---- | ---- | -----------
+name | string | Name of the component.
+type | string | Component type.
+version | string | Component version.
+
+### Examples
+
+Note: This example is based on the Actor sample provided in the [Dapr SDK for Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor).
+
+```shell
+curl http://localhost:3500/v1.0/metadata
+```
+
+```json
+{
+ "id":"demo-actor",
+ "actors":[
+ {
+ "type":"DemoActor",
+ "count":1
+ }
+ ],
+ "extended": {
+ "cliPID":"1031040",
+ "appCommand":"uvicorn --port 3000 demo_actor_service:app"
+ },
+ "components":[
+ {
+ "name":"pubsub",
+ "type":"pubsub.redis",
+ "version":""
+ },
+ {
+ "name":"statestore",
+ "type":"state.redis",
+ "version":""
+ }
+ ]
+}
+```
+
+## Add a custom attribute to the Dapr sidecar information
+
+Adds a custom attribute to the Dapr sidecar information stored by the Metadata Endpoint.
+
+### HTTP Request
+
+```http
+PUT http://localhost:<daprPort>/v1.0/metadata/attributeName
+```
+
+### URL Parameters
+
+Parameter | Description
+--------- | -----------
+daprPort | The Dapr port.
+attributeName | Custom attribute name. This is the key name in the key-value pair.
+
+### HTTP Request Body
+
+In the request you need to pass the custom attribute value as RAW data and set the following header:
+
+```json
+{
+ "Content-Type": "text/plain"
+}
+```
+
+Within the body of the request place the custom attribute value you want to store:
+
+```
+attributeValue
+```
+
+### HTTP Response Codes
+
+Code | Description
+---- | -----------
+204 | Custom attribute added to the metadata information
+
+### Examples
+
+Note: This example is based on the Actor sample provided in the [Dapr SDK for Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor).
+
+Add a custom attribute to the metadata endpoint:
+
+```shell
+curl -X PUT -H "Content-Type: text/plain" --data "myDemoAttributeValue" http://localhost:3500/v1.0/metadata/myDemoAttribute
+```
+
+Get the metadata information to confirm your custom attribute was added:
+
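+For example, reusing the same call as above (assuming the Dapr HTTP port is `3500`):
+
+```shell
+curl http://localhost:3500/v1.0/metadata
+```
+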
+```json
+{
+ "id":"demo-actor",
+ "actors":[
+ {
+ "type":"DemoActor",
+ "count":1
+ }
+ ],
+ "extended": {
+ "myDemoAttribute": "myDemoAttributeValue",
+ "cliPID":"1031040",
+ "appCommand":"uvicorn --port 3000 demo_actor_service:app"
+ },
+ "components":[
+ {
+ "name":"pubsub",
+ "type":"pubsub.redis",
+ "version":""
+ },
+ {
+ "name":"statestore",
+ "type":"state.redis",
+ "version":""
+ }
+ ]
+}
+```
diff --git a/daprdocs/content/en/reference/api/secrets_api.md b/daprdocs/content/en/reference/api/secrets_api.md
index a5b5e81d2..628e6ddaa 100644
--- a/daprdocs/content/en/reference/api/secrets_api.md
+++ b/daprdocs/content/en/reference/api/secrets_api.md
@@ -3,7 +3,7 @@ type: docs
title: "Secrets API reference"
linkTitle: "Secrets API"
description: "Detailed documentation on the secrets API"
-weight: 700
+weight: 600
---
## Get Secret
diff --git a/daprdocs/content/en/reference/cli/dapr-configurations.md b/daprdocs/content/en/reference/cli/dapr-configurations.md
index ed0ca85b0..b8cc30bde 100644
--- a/daprdocs/content/en/reference/cli/dapr-configurations.md
+++ b/daprdocs/content/en/reference/cli/dapr-configurations.md
@@ -26,5 +26,8 @@ dapr components -k
| Name | Environment Variable | Default | Description
| --- | --- | --- | --- |
+| `--kubernetes`, `-k` | | `false` | List all Dapr configurations in a Kubernetes cluster |
+| `--name`, `-n` | | | The configuration name to be printed (optional) |
+| `--output`, `-o` | | `list` | Output format (options: json or yaml or list) |
| `--help`, `-h` | | | Print this help message |
-| `--kubernetes`, `-k` | | `false` | List all Dapr configurations in a Kubernetes cluster |
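+
+For example, to print a specific configuration in a Kubernetes cluster as YAML (a sketch; `appconfig` is an assumed configuration name):
+
+```bash
+dapr configurations -k --name appconfig --output yaml
+```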
diff --git a/daprdocs/static/docs/open-telemetry-collector/collector-component.yaml b/daprdocs/static/docs/open-telemetry-collector/collector-component.yaml
deleted file mode 100644
index e157a4c9d..000000000
--- a/daprdocs/static/docs/open-telemetry-collector/collector-component.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-apiVersion: dapr.io/v1alpha1
-kind: Configuration
-metadata:
- name: appconfig
- namespace: default
-spec:
- tracing:
- samplingRate: "1"
----
-apiVersion: dapr.io/v1alpha1
-kind: Component
-metadata:
- name: native
- namespace: default
-spec:
- type: exporters.native
- version: v1
- metadata:
- - name: enabled
- value: "true"
- - name: agentEndpoint
- value: "otel-collector.default.svc.cluster.local:55678"
diff --git a/daprdocs/static/docs/open-telemetry-collector/collector-config.yaml b/daprdocs/static/docs/open-telemetry-collector/collector-config.yaml
new file mode 100644
index 000000000..78b37a928
--- /dev/null
+++ b/daprdocs/static/docs/open-telemetry-collector/collector-config.yaml
@@ -0,0 +1,10 @@
+apiVersion: dapr.io/v1alpha1
+kind: Configuration
+metadata:
+ name: appconfig
+ namespace: default
+spec:
+ tracing:
+ samplingRate: "1"
+ zipkin:
+ endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v2/spans"
diff --git a/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector.yaml b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector.yaml
index 3516ab364..bae9a336e 100644
--- a/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector.yaml
+++ b/daprdocs/static/docs/open-telemetry-collector/open-telemetry-collector.yaml
@@ -8,8 +8,8 @@ metadata:
data:
otel-collector-config: |
receivers:
- opencensus:
- endpoint: 0.0.0.0:55678
+ zipkin:
+ endpoint: 0.0.0.0:9411
processors:
queued_retry:
batch:
@@ -20,8 +20,9 @@ data:
zpages:
endpoint: :55679
exporters:
+ logging:
+ loglevel: debug
azuremonitor:
- azuremonitor/2:
endpoint: "https://dc.services.visualstudio.com/v2/track"
instrumentation_key: ""
# maxbatchsize is the maximum number of items that can be
@@ -34,8 +35,8 @@ data:
extensions: [pprof, zpages, health_check]
pipelines:
traces:
- receivers: [opencensus]
- exporters: [azuremonitor/2]
+ receivers: [zipkin]
+ exporters: [azuremonitor,logging]
processors: [batch, queued_retry]
---
apiVersion: v1
@@ -47,10 +48,10 @@ metadata:
component: otel-collector
spec:
ports:
- - name: opencensus # Default endpoint for Opencensus receiver.
- port: 55678
+ - name: zipkin # Default endpoint for Zipkin receiver.
+ port: 9411
protocol: TCP
- targetPort: 55678
+ targetPort: 9411
selector:
component: otel-collector
---
@@ -86,7 +87,7 @@ spec:
cpu: 200m
memory: 400Mi
ports:
- - containerPort: 55678 # Default endpoint for Opencensus receiver.
+ - containerPort: 9411 # Default endpoint for Zipkin receiver.
volumeMounts:
- name: otel-collector-config-vol
mountPath: /conf
diff --git a/daprdocs/static/images/grafana-actor-dashboard.png b/daprdocs/static/images/grafana-actor-dashboard.png
new file mode 100644
index 000000000..371f398ee
Binary files /dev/null and b/daprdocs/static/images/grafana-actor-dashboard.png differ
diff --git a/daprdocs/static/images/grafana-sidecar-dashboard.png b/daprdocs/static/images/grafana-sidecar-dashboard.png
new file mode 100644
index 000000000..e923a8168
Binary files /dev/null and b/daprdocs/static/images/grafana-sidecar-dashboard.png differ
diff --git a/daprdocs/static/images/grafana-system-service-dashboard.png b/daprdocs/static/images/grafana-system-service-dashboard.png
new file mode 100644
index 000000000..f713caf4d
Binary files /dev/null and b/daprdocs/static/images/grafana-system-service-dashboard.png differ
diff --git a/daprdocs/static/images/grafana-tooltip.png b/daprdocs/static/images/grafana-tooltip.png
new file mode 100644
index 000000000..368b31cf3
Binary files /dev/null and b/daprdocs/static/images/grafana-tooltip.png differ