Merge branch 'master' into update-cli-ref

Mukundan Sundararajan 2020-12-11 13:31:33 -08:00
commit 9f890c6c01
23 changed files with 409 additions and 219 deletions

View File

@ -29,7 +29,7 @@ Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W
Dapr uses [probabilistic sampling](https://opencensus.io/tracing/sampling/probabilistic/) as defined by OpenCensus. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
To change the default tracing behavior, use a configuration file (in self-hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled) and sends traces, using the Zipkin protocol, to the Zipkin server at http://zipkin.default.svc.cluster.local:
```yaml
apiVersion: dapr.io/v1alpha1
@ -40,30 +40,14 @@ metadata:
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
Changing `samplingRate` to 0 will disable tracing altogether.
See the [References](#references) section for more details on how to configure tracing in local and Kubernetes environments.
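In Kubernetes mode, a minimal way to load the configuration object above is to apply it with kubectl (a sketch; the filename `appconfig.yaml` is illustrative):

```bash
kubectl apply -f appconfig.yaml
```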
## References

- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})

View File

@ -308,7 +308,7 @@ Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"status":
{{< /tabs >}}
Dapr automatically wraps the user payload in a Cloud Events v1.0 compliant envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
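For illustration, assuming a pubsub component named `pubsub` and a topic named `deathStarStatus` (names illustrative, following the steps above), a publish request with an explicit `Content-Type` could look like:

```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub/deathStarStatus \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'
```

Here the `Content-Type` value `application/json` becomes the `datacontenttype` attribute of the generated envelope.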
## Step 4: ACK-ing a message

View File

@ -33,7 +33,7 @@ When multiple instances of the same application ID subscribe to a topic, Dapr wi
### Cloud events
Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope, using the `Content-Type` header value for the `datacontenttype` attribute.
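For example, a JSON payload published with `Content-Type: application/json` might be delivered wrapped along these lines (a sketch; field values are illustrative):

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "myapp",
  "id": "0aac4b5e-9f7d-4b61-a3c1-6f2d1dba9db4",
  "datacontenttype": "application/json",
  "data": { "status": "completed" }
}
```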
The following fields from the Cloud Events spec are implemented with Dapr:

- `id`
@ -65,4 +65,4 @@ Limit which topics applications are able to publish/subscribe to in order to limi
## Next steps

- Read the How-To guide on [publishing and subscribing]({{< ref howto-publish-subscribe.md >}})
- Learn about [Pub/Sub scopes]({{< ref pubsub-scopes.md >}})

View File

@ -20,14 +20,14 @@ Every binding has its own unique set of properties. Click the name link to see t
| [Kubernetes Events]({{< ref "kubernetes-binding.md" >}}) | ✅ | | Experimental |
| [MQTT]({{< ref mqtt.md >}}) | ✅ | ✅ | Experimental |
| [PostgreSql]({{< ref postgres.md >}}) | | ✅ | Experimental |
| [Postmark]({{< ref postmark.md >}}) | | ✅ | Experimental |
| [RabbitMQ]({{< ref rabbitmq.md >}}) | ✅ | ✅ | Experimental |
| [Redis]({{< ref redis.md >}}) | | ✅ | Experimental |
| [Twilio]({{< ref twilio.md >}}) | | ✅ | Experimental |
| [Twitter]({{< ref twitter.md >}}) | ✅ | ✅ | Experimental |
| [SendGrid]({{< ref sendgrid.md >}}) | | ✅ | Experimental |
### Amazon Web Services (AWS)
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
@ -37,7 +37,6 @@ Every binding has its own unique set of properties. Click the name link to see t
| [AWS SQS]({{< ref sqs.md >}}) | ✅ | ✅ | Experimental |
| [AWS Kinesis]({{< ref kinesis.md >}}) | ✅ | ✅ | Experimental |

### Google Cloud Platform (GCP)

| Name | Input<br>Binding | Output<br>Binding | Status |
@ -55,4 +54,4 @@ Every binding has its own unique set of properties. Click the name link to see t
| [Azure Service Bus Queues]({{< ref servicebusqueues.md >}}) | ✅ | ✅ | Experimental |
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Experimental |
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | Experimental |
| [Azure Event Grid]({{< ref eventgrid.md >}}) | ✅ | ✅ | Experimental |

View File

@ -0,0 +1,69 @@
---
type: docs
title: "Postmark binding spec"
linkTitle: "Postmark"
description: "Detailed documentation on the Postmark binding component"
---
## Setup Dapr component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: postmark
  namespace: default
spec:
  type: bindings.postmark
  metadata:
  - name: accountToken
    value: "YOUR_ACCOUNT_TOKEN" # required, this is your Postmark account token
  - name: serverToken
    value: "YOUR_SERVER_TOKEN" # required, this is your Postmark server token
  - name: emailFrom
    value: "testapp@dapr.io" # optional
  - name: emailTo
    value: "dave@dapr.io" # optional
  - name: subject
    value: "Hello!" # optional
```
- `accountToken` is your Postmark account token; it should be considered a secret value. Required.
- `serverToken` is your Postmark server token; it should be considered a secret value. Required.
- `emailFrom` If set, this specifies the 'from' email address of the email message. Optional field, see below.
- `emailTo` If set, this specifies the 'to' email address of the email message. Optional field, see below.
- `emailCc` If set, this specifies the 'cc' email address of the email message. Optional field, see below.
- `emailBcc` If set, this specifies the 'bcc' email address of the email message. Optional field, see below.
- `subject` If set, this specifies the subject of the email message. Optional field, see below.

You can also specify any of the optional metadata properties on the output binding request (e.g. `emailFrom`, `emailTo`, `subject`, etc.). Combined, the optional metadata properties in the component configuration and the request payload should at least contain the `emailFrom`, `emailTo` and `subject` fields, as these are required to successfully send an email.

Example request payload:
```json
{
  "operation": "create",
  "metadata": {
    "emailTo": "changeme@example.net",
    "subject": "An email from Dapr Postmark binding"
  },
  "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
}
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
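As a sketch, assuming a Dapr sidecar listening on its default HTTP port 3500 and the component above, the binding can be invoked through the Dapr bindings API (port and payload illustrative):

```bash
curl -X POST http://localhost:3500/v1.0/bindings/postmark \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "create",
        "metadata": {
          "emailTo": "changeme@example.net",
          "subject": "An email from Dapr Postmark binding"
        },
        "data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
      }'
```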
## Output Binding Supported Operations
- `create`
## Related links
- [Bindings building block]({{< ref bindings >}})
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})

View File

@ -1,11 +1,35 @@
---
type: docs
title: "How-To: Observe metrics with Grafana"
linkTitle: "Metrics dashboards with Grafana"
weight: 5000
description: "How to view Dapr metrics in a Grafana dashboard."
---
## Available dashboards
{{< tabs "System Service" "Sidecars" "Actors" >}}
{{% codetab %}}
The `grafana-system-services-dashboard.json` template shows the status of the Dapr system services: dapr-operator, dapr-sidecar-injector, dapr-sentry, and dapr-placement:
<img src="/images/grafana-system-service-dashboard.png" alt="Screenshot of the system service dashboard" width=1200>
{{% /codetab %}}
{{% codetab %}}
The `grafana-sidecar-dashboard.json` template shows Dapr sidecar status, including sidecar health/resources, throughput/latency of HTTP and gRPC, Actor, mTLS, etc.:
<img src="/images/grafana-sidecar-dashboard.png" alt="Screenshot of the sidecar dashboard" width=1200>
{{% /codetab %}}
{{% codetab %}}
The `grafana-actor-dashboard.json` template shows Dapr sidecar status, actor invocation throughput/latency, timer/reminder triggers, and turn-based concurrency:
<img src="/images/grafana-actor-dashboard.png" alt="Screenshot of the actor dashboard" width=1200>
{{% /codetab %}}
{{< /tabs >}}
## Pre-requisites

- [Setup Prometheus]({{<ref prometheus.md>}})
@ -14,40 +38,36 @@ description: "How to view Dapr metrics in a Grafana dashboard."
### Install Grafana

1. Add the Grafana Helm repo:
   ```bash
   helm repo add grafana https://grafana.github.io/helm-charts
   ```
1. Install the chart:
   ```bash
   helm install grafana grafana/grafana -n dapr-monitoring
   ```
   {{% alert title="Note" color="primary" %}}
   If you are a Minikube user or want to disable the persistent volume for development purposes, you can disable it by using the following command instead:

   ```bash
   helm install grafana grafana/grafana -n dapr-monitoring --set persistence.enabled=false
   ```
   {{% /alert %}}
1. Retrieve the admin password for Grafana login:
   ```bash
   kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
   ```
   You will get a password similar to `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%`. Remove the `%` character from the password to get `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1` as the admin password.
1. Validate that Grafana is running in your cluster:
   ```bash
   kubectl get pods -n dapr-monitoring
@ -66,31 +86,37 @@ description: "How to view Dapr metrics in a Grafana dashboard."
### Configure Prometheus as data source

First you need to connect Prometheus as a data source to Grafana.

1. Port-forward to svc/grafana:
   ```bash
   kubectl port-forward svc/grafana 8080:80 -n dapr-monitoring
   Forwarding from 127.0.0.1:8080 -> 3000
   Forwarding from [::1]:8080 -> 3000
   Handling connection for 8080
   Handling connection for 8080
   ```
1. Open a browser to `http://localhost:8080`

1. Login to Grafana
   - Username = `admin`
   - Password = the password retrieved above
1. Select `Configuration` and `Data Sources`

   <img src="/images/grafana-datasources.png" alt="Screenshot of the Grafana add Data Source menu" width=200>
1. Add Prometheus as a data source.

   <img src="/images/grafana-add-datasources.png" alt="Screenshot of the Prometheus add Data Source" width=600>
1. Get your Prometheus HTTP URL.

   The Prometheus HTTP URL follows the format `http://<prometheus service endpoint>.<namespace>`.

   Start by getting the Prometheus server endpoint by running the following command:
   ```bash
   kubectl get svc -n dapr-monitoring
@ -108,40 +134,43 @@ First you need to connect Prometheus as a data source to Grafana.
   ```
   In this guide, the server name is `dapr-prom-prometheus-server` and the namespace is `dapr-monitoring`, so the HTTP URL will be `http://dapr-prom-prometheus-server.dapr-monitoring`.
1. Fill in the following settings:

   - Name: `Dapr`
   - HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring`
   - Default: On

   <img src="/images/grafana-prometheus-dapr-server-url.png" alt="Screenshot of the Prometheus Data Source configuration" width=600>

1. Click the `Save & Test` button to verify that the connection succeeded.
## Import dashboards in Grafana

1. In the upper left corner of the Grafana home screen, click the "+" option, then "Import".
   You can now import [Grafana dashboard templates](https://github.com/dapr/dapr/tree/master/grafana) from [release assets](https://github.com/dapr/dapr/releases) for your Dapr version:

   <img src="/images/grafana-uploadjson.png" alt="Screenshot of the Grafana dashboard upload option" width=700>
1. Find the dashboard that you imported and enjoy!

   <img src="/images/system-service-dashboard.png" alt="Screenshot of Dapr service dashboard" width=900>
{{% alert title="Tip" color="primary" %}}
Hover your mouse over the `i` in the corner to see the description of each chart:

<img src="/images/grafana-tooltip.png" alt="Screenshot of the tooltip for graphs" width=700>
{{% /alert %}}
## References

* [Dapr Observability]({{<ref observability-concept.md >}})
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
* [Prometheus on Kubernetes](https://github.com/coreos/kube-prometheus)
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
* [Supported Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md)
## Example

<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=2577" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

View File

@ -28,30 +28,29 @@ docker run -d --name jaeger \
Next, create the following YAML file locally:
* **config.yaml**: Note that because we are using the Zipkin protocol to talk to Jaeger, we specify the `zipkin` section of the tracing configuration and set the `endpointAddress` to the address of the Jaeger instance.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://localhost:9412/api/v2/spans"
```
To launch the application referring to the new YAML file, you can use the `--config` option:

```bash
dapr run --app-id mynode --app-port 3000 node app.js --config config.yaml
```
### Viewing Traces
@ -92,26 +91,7 @@ kubectl apply -f jaeger-operator.yaml
kubectl wait deploy --selector app.kubernetes.io/name=jaeger --for=condition=available
```
Next, create the following YAML file locally:
* **tracing.yaml** * **tracing.yaml**
@ -124,13 +104,14 @@ metadata:
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans"
```
Finally, deploy the Dapr configuration file:
```bash
kubectl apply -f tracing.yaml
```
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:

View File

@ -10,25 +10,23 @@ description: "Set-up New Relic for Dapr observability"
- Perpetually [free New Relic account](https://newrelic.com/signup), 100 GB/month of free data ingest, 1 free full access user, unlimited free basic users
## Configure Dapr tracing

Dapr natively captures metrics and traces that can be sent directly to New Relic. The easiest way to export these is by configuring Dapr to send the traces to [New Relic's Trace API](https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/trace-api/report-zipkin-format-traces-trace-api#existing-zipkin) using the Zipkin trace format.
In order for the integration to send data to New Relic [Telemetry Data Platform](https://newrelic.com/platform/telemetry-data-platform), you need a [New Relic Insights Insert API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#insights-insert-key).
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "https://trace-api.newrelic.com/trace/v1?Api-Key=<NR-INSIGHTS-INSERT-API-KEY>&Data-Format=zipkin&Data-Format-Version=2"
```
### Viewing Traces
@ -114,4 +112,4 @@ All the data that is collected from Dapr, Kubernetes or any services that run on
* [New Relic Metric API](https://docs.newrelic.com/docs/telemetry-data-platform/get-data/apis/introduction-metric-api)
* [Types of New Relic API keys](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys)
* [New Relic OpenTelemetry User Experience](https://blog.newrelic.com/product-news/opentelemetry-user-experience/)
* [Alerts and Applied Intelligence](https://docs.newrelic.com/docs/alerts-applied-intelligence)

View File

@ -6,7 +6,7 @@ weight: 1000
description: "How to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector."
---

Dapr can integrate with [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) using the Zipkin API. This guide walks through an example to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector.
## Requirements
@ -29,15 +29,15 @@ export APP_INSIGHTS_KEY=<your-app-insight-key>
Next, install the OpenTelemetry Collector to your Kubernetes cluster to push events to your Application Insights instance:

1. Check out the file [open-telemetry-collector.yaml](/docs/open-telemetry-collector/open-telemetry-collector.yaml) and replace the `<INSTRUMENTATION-KEY>` placeholder with your `APP_INSIGHTS_KEY`.
2. Apply the configuration with `kubectl apply -f open-telemetry-collector.yaml`.
Next, set up a Dapr configuration file to turn on tracing and send the trace events to the OpenTelemetry Collector:

1. Create a collector-config.yaml file with this [content](/docs/open-telemetry-collector/collector-config.yaml)
2. Apply the configuration with `kubectl apply -f collector-config.yaml`.
### Deploy your app with tracing

View File

@ -9,28 +9,9 @@ type: docs
## Configure self-hosted mode

For self-hosted mode, on running `dapr init`:

1. The following YAML file is created by default in `$HOME/dapr/config.yaml` (on Linux/Mac) or `%USERPROFILE%\dapr\config.yaml` (on Windows) and it is referenced by default on `dapr run` calls unless otherwise overridden:
* config.yaml
@ -43,9 +24,11 @@ metadata:
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"
```
2. The [openzipkin/zipkin](https://hub.docker.com/r/openzipkin/zipkin/) docker container is launched on running `dapr init` or it can be launched with the following code.

Launch Zipkin using Docker:
@ -53,7 +36,7 @@ Launch Zipkin using Docker:
docker run -d -p 9411:9411 openzipkin/zipkin
```
3. The applications launched with `dapr run` will by default reference the config file in `$HOME/dapr/config.yaml` or `%USERPROFILE%\dapr\config.yaml` and can be overridden with the Dapr CLI using the `--config` param:
```bash
dapr run --app-id mynode --app-port 3000 node app.js
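# Or, to reference a specific configuration file explicitly (path illustrative),
# mirroring the --config usage shown above:
dapr run --app-id mynode --app-port 3000 node app.js --config $HOME/dapr/config.yaml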
@ -79,25 +62,7 @@ Create a Kubernetes service for the Zipkin pod:
kubectl expose deployment zipkin --type ClusterIP --port 9411
```
Next, create the following YAML file locally:
* tracing.yaml configuration * tracing.yaml configuration
@ -110,13 +75,14 @@ metadata:
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
Now, deploy the Dapr configuration file:
```bash
kubectl apply -f tracing.yaml
```
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:

View File

@ -6,8 +6,6 @@ weight: 3000
description: "Configure Dapr to send distributed tracing data"
---
It is recommended to run Dapr with tracing enabled for any production scenario.

Since Dapr uses Open Census, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises.
@ -17,22 +15,18 @@ The `tracing` section under the `Configuration` spec contains the following prop
```yml
tracing:
  samplingRate: "1"
  zipkin:
    endpointAddress: "https://..."
```
The following table lists the properties for tracing:
| Property | Type | Description |
|----------|------|-------------|
| `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. |
| `zipkin.endpointAddress` | string | Set the Zipkin server address. |
## Zipkin in stand-alone mode
@ -51,11 +45,9 @@ For Standalone mode, create a Dapr configuration file locally and reference it w
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"
```
2. Launch Zipkin using Docker:
@ -99,11 +91,9 @@ metadata:
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
Finally, deploy the Dapr configuration:

View File

@ -46,3 +46,4 @@ Following table lists the error codes returned by Dapr runtime:
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured.
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found.
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready.
| ERR_METADATA_GET | Error parsing the Metadata information.
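As an illustration of how these codes surface, the Dapr HTTP API returns the code in the JSON error body alongside a human-readable message (a sketch; the exact message text varies by endpoint):

```json
{
  "errorCode": "ERR_HEALTH_NOT_READY",
  "message": "dapr is not ready"
}
```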

View File

@ -3,7 +3,7 @@ type: docs
title: "Health API reference" title: "Health API reference"
linkTitle: "Health API" linkTitle: "Health API"
description: "Detailed documentation on the health API" description: "Detailed documentation on the health API"
weight: 900 weight: 700
--- ---
Dapr provides health checking probes that can be used as readiness or liveness probes for Dapr.

View File

@ -0,0 +1,181 @@
---
type: docs
title: "Metadata API reference"
linkTitle: "Metadata API"
description: "Detailed documentation on the Metadata API"
weight: 800
---
Dapr has a metadata API that returns information about the sidecar, allowing runtime discoverability. The metadata endpoint returns, among other things, a list of the loaded components and the activated actors (if present).

The Dapr metadata API also allows you to store additional information in the format of key-value pairs.

Note: The Dapr metadata endpoint is, for instance, used by the Dapr CLI when running dapr in standalone mode to store the PID of the process hosting the sidecar and the command used to run the application.
## Get the Dapr sidecar information
Gets the Dapr sidecar information provided by the Metadata Endpoint.
### HTTP Request
```http
GET http://localhost:<daprPort>/v1.0/metadata
```
### URL Parameters
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
### HTTP Response Codes
Code | Description
---- | -----------
200 | Metadata information returned
500 | Dapr could not return the metadata information
### HTTP Response Body
**Metadata API Response Object**
Name | Type | Description
---- | ---- | -----------
id | string | Application ID
actors | [Metadata API Response Registered Actor](#metadataapiresponseactor)[] | A JSON-encoded array of registered actors metadata.
extended.attributeName | string | List of custom attributes as key-value pairs, where key is the attribute name.
components | [Metadata API Response Component](#metadataapiresponsecomponent)[] | A JSON-encoded array of loaded components metadata.
<a id="metadataapiresponseactor"></a>**Metadata API Response Registered Actor**
Name | Type | Description
---- | ---- | -----------
type | string | The registered actor type.
count | integer | Number of actors running.
<a id="metadataapiresponsecomponent"></a>**Metadata API Response Component**
Name | Type | Description
---- | ---- | -----------
name | string | Name of the component.
type | string | Component type.
version | string | Component version.
### Examples
Note: This example is based on the Actor sample provided in the [Dapr SDK for Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor).
```shell
curl http://localhost:3500/v1.0/metadata
```
```json
{
  "id": "demo-actor",
  "actors": [
    {
      "type": "DemoActor",
      "count": 1
    }
  ],
  "extended": {
    "cliPID": "1031040",
    "appCommand": "uvicorn --port 3000 demo_actor_service:app"
  },
  "components": [
    {
      "name": "pubsub",
      "type": "pubsub.redis",
      "version": ""
    },
    {
      "name": "statestore",
      "type": "state.redis",
      "version": ""
    }
  ]
}
```
## Add a custom attribute to the Dapr sidecar information
Adds a custom attribute to the Dapr sidecar information stored by the Metadata Endpoint.
### HTTP Request
```http
PUT http://localhost:<daprPort>/v1.0/metadata/attributeName
```
### URL Parameters
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
attributeName | Custom attribute name. This is the key name in the key-value pair.
### HTTP Request Body
In the request, you need to pass the custom attribute value as raw data, with the following header:
```json
{
"Content-Type": "text/plain"
}
```
Within the body of the request, place the custom attribute value you want to store:
```
attributeValue
```
### HTTP Response Codes
Code | Description
---- | -----------
204 | Custom attribute added to the metadata information
### Examples
Note: This example is based on the Actor sample provided in the [Dapr SDK for Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor).
Add a custom attribute to the metadata endpoint:
```shell
curl -X PUT -H "Content-Type: text/plain" --data "myDemoAttributeValue" http://localhost:3500/v1.0/metadata/myDemoAttribute
```
Get the metadata information to confirm your custom attribute was added:
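Query the same endpoint as before:

```shell
curl http://localhost:3500/v1.0/metadata
```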
```json
{
  "id": "demo-actor",
  "actors": [
    {
      "type": "DemoActor",
      "count": 1
    }
  ],
  "extended": {
    "myDemoAttribute": "myDemoAttributeValue",
    "cliPID": "1031040",
    "appCommand": "uvicorn --port 3000 demo_actor_service:app"
  },
  "components": [
    {
      "name": "pubsub",
      "type": "pubsub.redis",
      "version": ""
    },
    {
      "name": "statestore",
      "type": "state.redis",
      "version": ""
    }
  ]
}
```

View File

@ -3,7 +3,7 @@ type: docs
title: "Secrets API reference" title: "Secrets API reference"
linkTitle: "Secrets API" linkTitle: "Secrets API"
description: "Detailed documentation on the secrets API" description: "Detailed documentation on the secrets API"
weight: 700 weight: 600
--- ---
## Get Secret ## Get Secret

View File

@ -26,5 +26,8 @@ dapr components -k
| Name | Environment Variable | Default | Description |
| --- | --- | --- | --- |
| `--kubernetes`, `-k` | | `false` | List all Dapr configurations in a Kubernetes cluster |
| `--name`, `-n` | | | The configuration name to be printed (optional) |
| `--output`, `-o` | | `list` | Output format (options: json or yaml or list) |
| `--help`, `-h` | | | Print this help message |

View File

@ -1,22 +0,0 @@
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  version: v1
  metadata:
  - name: enabled
    value: "true"
  - name: agentEndpoint
    value: "otel-collector.default.svc.cluster.local:55678"

View File

@ -0,0 +1,10 @@
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://otel-collector.default.svc.cluster.local:9411/api/v2/spans"

View File

@ -8,8 +8,8 @@ metadata:
data:
  otel-collector-config: |
    receivers:
      zipkin:
        endpoint: 0.0.0.0:9411
    processors:
      queued_retry:
      batch:
@ -20,8 +20,9 @@ data:
      zpages:
        endpoint: :55679
    exporters:
      logging:
        loglevel: debug
      azuremonitor:
        endpoint: "https://dc.services.visualstudio.com/v2/track"
        instrumentation_key: "<INSTRUMENTATION-KEY>"
        # maxbatchsize is the maximum number of items that can be
@ -34,8 +35,8 @@ data:
      extensions: [pprof, zpages, health_check]
      pipelines:
        traces:
          receivers: [zipkin]
          exporters: [azuremonitor,logging]
          processors: [batch, queued_retry]
---
apiVersion: v1
@ -47,10 +48,10 @@ metadata:
    component: otel-collector
spec:
  ports:
  - name: zipkin # Default endpoint for Zipkin receiver.
    port: 9411
    protocol: TCP
    targetPort: 9411
  selector:
    component: otel-collector
---
@ -86,7 +87,7 @@ spec:
            cpu: 200m
            memory: 400Mi
        ports:
        - containerPort: 9411 # Default endpoint for Zipkin receiver.
        volumeMounts:
        - name: otel-collector-config-vol
          mountPath: /conf

Binary file not shown (image added: 650 KiB).

Binary file not shown (image added: 581 KiB).

Binary file not shown (image added: 585 KiB).

Binary file not shown (image added: 213 KiB).