Merge branch 'master' into website

This commit is contained in:
Aaron Crawfis 2020-10-23 10:00:46 -07:00
commit e3dc72210f
8 changed files with 86 additions and 236 deletions

View File

@ -11,18 +11,53 @@ Observability is a term from control theory. Observability means you can answer
The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas:
## Distributed tracing
[Distributed tracing]({{<ref "tracing.md">}}) is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance, and it is particularly well suited to debugging and monitoring distributed software architectures, such as microservices.

You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, node, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.

Dapr uses the [W3C tracing context for distributed tracing]({{<ref w3c-tracing>}}).

It is generally recommended to run Dapr in production with tracing enabled.

## Open Telemetry
Dapr integrates with [OpenTelemetry](https://opentelemetry.io/) for tracing, metrics and logs. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.
#### Next steps
- [How-To: Set up Zipkin]({{< ref zipkin.md >}})
- [How-To: Set up Application Insights with Open Telemetry Collector]({{< ref open-telemetry-collector.md >}})
## Metrics
[Metrics]({{<ref "metrics.md">}}) are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps.
For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc.
Dapr system services metrics show sidecar injection failures and the health of the system services, including CPU usage, number of actor placements made, etc.
#### Next steps
- [How-To: Set up Prometheus and Grafana]({{< ref prometheus.md >}})
- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
## Logs
[Logs]({{<ref "logs.md">}}) are records of events that occur and can be used to determine failures or other status.
Log events contain warning, error, info, and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
#### Next steps
- [How-To: Set up Fluentd, Elasticsearch and Kibana in Kubernetes]({{< ref fluentd.md >}})
- [How-To: Set up Azure Monitor]({{< ref azure-monitor.md >}})
## Health
Dapr provides a way for a hosting platform to determine its [health]({{<ref "sidecar-health.md">}}) using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness, and action taken accordingly.
#### Next steps
- [Health API]({{< ref health_api.md >}})
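As an illustrative sketch of how a deployment script might probe this endpoint (assuming the default sidecar HTTP port `3500` and the `/v1.0/healthz` route, which answers HTTP 204 when the sidecar is healthy):

```python
import http.client

def sidecar_is_healthy(host="localhost", port=3500, timeout=1.0):
    """Probe the Dapr sidecar health endpoint; True when it answers HTTP 204."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/v1.0/healthz")
        healthy = conn.getresponse().status == 204
        conn.close()
        return healthy
    except OSError:
        # No sidecar listening (connection refused or timeout) counts as unhealthy.
        return False
```

Kubernetes liveness/readiness probes do the equivalent check against the sidecar container, so you rarely need to call this yourself.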

View File

@ -65,7 +65,6 @@ spec:
## References
- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})
- [How-To: Set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C distributed tracing]({{< ref w3c-tracing >}})

View File

@ -26,7 +26,7 @@ f := tracecontext.HTTPFormat{}
sc, ok := f.SpanContextFromRequest(req)
```
#### For gRPC calls
To retrieve the trace context header when the gRPC call returns, you can pass a response header reference as a gRPC call option, which receives the response headers:
```go
var responseHeader metadata.MD
@ -143,7 +143,7 @@ You can create a trace context using the recommended OpenCensus SDKs. OpenCensus
### Create trace context in Go
#### 1. Get the OpenCensus Go SDK
Prerequisites: The OpenCensus Go libraries require Go 1.8 or later. For installation details, go [here](https://pkg.go.dev/go.opencensus.io?tab=overview).
@ -289,12 +289,9 @@ You can now correlate the calls in your app and across services with Dapr using
## Related Links
- [Observability concepts]({{< ref observability-concept.md >}})
- [W3C Trace Context for distributed tracing]({{< ref w3c-tracing >}})
- [How-To: Set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
- [How-To: Set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C trace context specification](https://www.w3.org/TR/trace-context/)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability)

View File

@ -8,10 +8,10 @@ type: docs
---
## Introduction
Dapr uses the W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does most of the heavy lifting of generating and propagating the trace context information, which can be sent to many different diagnostics tools for visualization and querying. There are only a few cases where you, as a developer, need to either propagate a trace header or generate one.
## Background
Distributed tracing is a methodology implemented by tracing tools to follow, analyze and debug a transaction across multiple software components. Typically, a distributed trace traverses more than one service, which requires it to be uniquely identifiable. Trace context propagation passes along this unique identification.
In the past, trace context propagation has typically been implemented individually by each tracing vendor. In multi-vendor environments, this causes interoperability problems, such as:
@ -31,12 +31,12 @@ This transformation of modern applications called for a distributed tracing cont
A unified approach for propagating trace data improves visibility into the behavior of distributed applications, facilitating problem and performance analysis.
## Scenarios
There are two scenarios where you need to understand how tracing is used:
1. Dapr generates and propagates the trace context between services.
2. Dapr generates the trace context and you need to propagate the trace context to another service **or** you generate the trace context and Dapr propagates the trace context to a service.
### Dapr generates and propagates the trace context between services
In these scenarios Dapr does all the work for you. You do not need to create or propagate any trace headers; Dapr takes care of creating all trace headers and propagating them. Let's go through the scenarios with examples:
1. Single service invocation call (`service A -> service B` )
@ -49,15 +49,15 @@ In these scenarios Dapr does all work for you. You do not need to create and pro
3. Request is from an external endpoint (for example, from a gateway service to a Dapr enabled service A)
    Dapr generates the trace headers in service A and these trace headers are propagated from service A to further Dapr enabled services: `service A -> service B -> service C`. This is similar to case 2 above.
4. Pub/sub messages
Dapr generates the trace headers in the published message topic and these trace headers are propagated to any services listening on that topic.
### You need to propagate or generate trace context between services
In these scenarios Dapr does some of the work for you and you need to either create or propagate trace headers.
1. Multiple service calls to different services from single service
    When you are calling multiple services from a single service, for example from service A like this, you need to propagate the trace headers:
service A -> service B
@ -66,28 +66,28 @@ In these scenarios Dapr does some of the work for you and you need to either cre
[ .. some code logic ..]
service A -> service D
[ .. some code logic ..]
    In this case, when service A first calls service B, Dapr generates the trace headers in service A, and these trace headers are then propagated to service B. These trace headers are returned in the response from service B as part of the response headers. However, you need to propagate the returned trace context to the next services, service C and service D, as Dapr does not know you want to reuse the same header.
To understand how to extract the trace headers from a response and add the trace headers into a request, see the [how to use trace context]({{< ref w3c-tracing >}}) article.
2. You have chosen to generate your own trace context headers.
    This is much more unusual. There may be occasions where you specifically choose to add W3C trace headers into a service call, for example if you have an existing application that does not currently use Dapr. In this case Dapr still propagates the trace context headers for you. If you decide to generate trace headers yourself, there are three ways this can be done:
    1. You can use the industry standard OpenCensus/OpenTelemetry SDKs to generate trace headers and pass these trace headers to a Dapr enabled service. This is the recommended approach.
    2. You can use a vendor SDK that provides a way to generate W3C trace headers, such as the Dynatrace SDK, and pass these trace headers to a Dapr enabled service.
    3. You can handcraft a trace context following the [W3C trace context specification](https://www.w3.org/TR/trace-context/) and pass these trace headers to a Dapr enabled service.
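The first scenario above, carrying the returned trace context forward to the next call, amounts to copying two headers. As an illustrative sketch (plain dictionaries standing in for real request/response header objects):

```python
def propagate_trace_context(response_headers, request_headers):
    """Copy the W3C trace context headers returned by one service invocation
    onto the request for the next one, so services B, C and D join one trace."""
    for name in ("traceparent", "tracestate"):
        if name in response_headers:
            request_headers[name] = response_headers[name]
    return request_headers

# Headers returned from the call to service B...
from_b = {"traceparent": "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"}
# ...are carried over onto the request for service C.
to_c = propagate_trace_context(from_b, {})
```

In a real app you would read these headers from your HTTP client's response object and set them on the next outgoing request.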
## W3C trace headers
These are the specific trace context headers that are generated and propagated by Dapr for HTTP and gRPC.
### Trace context HTTP headers format
When propagating a trace context header from an HTTP response to an HTTP request, these are the headers that you need to copy.
#### Traceparent Header
The traceparent header represents the incoming request in a tracing system in a common format, understood by all vendors.
Here's an example of a traceparent header:
`traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01`
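The header value is four dash-separated fields: version, trace-id, parent-id (span-id), and trace-flags. A minimal sketch of splitting it apart:

```python
def parse_traceparent(header):
    """Split a W3C traceparent value into its four dash-separated fields."""
    version, trace_id, parent_id, trace_flags = header.split("-")
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "trace_flags": trace_flags}

fields = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
```

The trace-id is the shared identifier for the whole distributed trace, while the parent-id identifies the individual span that made the call.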
@ -105,8 +105,7 @@ The tracestate fields are detailed [here](https://www.w3.org/TR/trace-context/#t
In gRPC API calls, trace context is passed through the `grpc-trace-bin` header.
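For illustration only, here is a sketch of the OpenCensus binary trace context format that `grpc-trace-bin` is commonly assumed to carry (a version byte followed by tagged trace-id, span-id, and options fields); the exact bytes Dapr emits are not specified here, so treat this as an assumption:

```python
import base64

def encode_grpc_trace_bin(trace_id_hex, span_id_hex, sampled=True):
    """Sketch of the version-0 OpenCensus binary trace context layout."""
    buf = bytes([0])                                   # version = 0
    buf += bytes([0]) + bytes.fromhex(trace_id_hex)    # field 0: 16-byte trace-id
    buf += bytes([1]) + bytes.fromhex(span_id_hex)     # field 1: 8-byte span-id
    buf += bytes([2, 1 if sampled else 0])             # field 2: trace options
    # gRPC base64-encodes -bin metadata values for transport
    return base64.b64encode(buf)

blob = encode_grpc_trace_bin("0af7651916cd43dd8448eb211c80319c", "b7ad6b7169203331")
```

In practice a gRPC library or the OpenCensus/OpenTelemetry SDK handles this encoding for you; you should not need to build it by hand.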
## Related Links
- [How-To: Set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
- [How-To: Set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C trace context specification](https://www.w3.org/TR/trace-context/)
- [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability)

View File

@ -20,9 +20,18 @@ In self hosted mode, set the `--app-id` flag:
```bash
dapr run --app-id cart --app-port 5000 python app.py
```
If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection:
```bash
dapr run --app-id cart --app-port 5000 --app-ssl python app.py
```
{{% /codetab %}}
{{% codetab %}}
### Set up an ID using Kubernetes
In Kubernetes, set the `dapr.io/app-id` annotation on your pod:
```yaml
@ -48,6 +57,8 @@ spec:
dapr.io/app-port: "5000"
...
```
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref kubernetes-annotations.md >}}))*
{{% /codetab %}}
{{< /tabs >}}

View File

@ -62,12 +62,11 @@ spec:
# below is the subscription configuration.
- name: subscriptionType
value: <REPLACE-WITH-SUBSCRIPTION-TYPE> # Required. Allowed values: topic, queue.
- name: consumerID
value: <REPLACE-WITH-consumerID> # Optional. Any string is accepted.
# - name: durableSubscriptionName
# value: ""
# following subscription options - only one can be used
# - name: startAtSequence
# value: 1
# - name: startWithLastReceived

View File

@ -19,6 +19,7 @@ The following table shows all the supported pod Spec annotations supported by Da
| `dapr.io/enable-profiling` | Setting this parameter to `true` starts the Dapr profiling server on port `7777`. Default is `false`
| `dapr.io/app-protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http`
| `dapr.io/app-max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0`
| `dapr.io/app-ssl` | Tells Dapr to invoke the app over an insecure SSL connection. Applies to both HTTP and gRPC. Default is `false`.
| `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090`
| `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set

View File

@ -1,191 +0,0 @@
---
type: docs
title: "Set up Application Insights for distributed tracing"
linkTitle: "Local Forwarder"
weight: 1000
description: "Integrate with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the Local Forwarder"
---
Dapr integrates with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).
> Note: The Local Forwarder is still in preview, but is being deprecated. The Application Insights team recommends using the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) (which is in alpha state) going forward, so we are working on moving from the Local Forwarder to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
## How to configure distributed tracing with Application Insights
The following steps show you how to configure Dapr to send distributed tracing data to Application Insights.
### Setup Application Insights
1. First, you'll need an Azure account. Please see instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights instrumentation key from your Application Insights page
4. On the Application Insights side menu, go to `Configure -> API Access`
5. Click `Create API Key`
6. Select all checkboxes under `Choose what this API key will allow apps to do:`
- Read telemetry
- Write annotations
- Authenticate SDK control channel
7. Generate Key and get API key
### Setup the Local Forwarder
#### Self hosted environment
This is for running the local forwarder on your machine.
1. Run the local forwarder
```bash
docker run -e APPINSIGHTS_INSTRUMENTATIONKEY=<Your Instrumentation Key> -e APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY=<Your API Key> -d -p 55678:55678 daprio/dapr-localforwarder:latest
```
> Note: [dapr-localforwarder](https://github.com/dapr/ApplicationInsights-LocalForwarder) is the forked version of [ApplicationInsights Localforwarder](https://github.com/microsoft/ApplicationInsights-LocalForwarder/), that includes the minor changes for Dapr. We're working on migrating to [opentelemetry-sdk and opentelemetry collector](https://opentelemetry.io/).
2. Create the following YAML files. Copy the native.yaml component file and tracing.yaml configuration file to the *components/* sub-folder under the same folder where you run your application.
* native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "localhost:55678"
```
* tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
3. When running in the local self hosted mode, you need to launch Dapr with the `--config` parameter:
```bash
dapr run --app-id mynode --app-port 3000 --config ./components/tracing.yaml node app.js
```
#### Kubernetes environment
1. Download [dapr-localforwarder.yaml](./localforwarder/dapr-localforwarder.yaml)
2. Replace `<APPINSIGHT INSTRUMENTATIONKEY>` with your Instrumentation Key and `<APPINSIGHT API KEY>` with the generated key in the file
```yaml
- name: APPINSIGHTS_INSTRUMENTATIONKEY
value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
- name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
value: <APPINSIGHT API KEY> # Replace with your generated api key
```
3. Deploy dapr-localforwarder.yaml
```bash
kubectl apply -f ./dapr-localforwarder.yaml
```
4. Create the following YAML files
* native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "<Local forwarder address, e.g. dapr-localforwarder.default.svc.cluster.local:55678>"
```
* tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
5. Use kubectl to apply the above CRD files:
```bash
kubectl apply -f tracing.yaml
kubectl apply -f native.yaml
```
6. Deploy your app with tracing
When running in Kubernetes mode, apply the configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
...
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "calculator-front-end"
dapr.io/app-port: "8080"
dapr.io/config: "tracing"
```
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
> **NOTE**: You can register multiple exporters at the same time, and the tracing logs are forwarded to all registered exporters.
Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
![Application map](/images/azure-monitor.png)
> **NOTE**: Only operations going through Dapr API exposed by Dapr sidecar (e.g. service invocation or event publishing) will be displayed in Application Map topology. Direct service invocations (not going through the Dapr API) will not be shown.
## Tracing configuration
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
samplingRate: "1"
```
The following table lists the different properties.
Property | Type | Description
---- | ------- | -----------
samplingRate | string | Set sampling rate for tracing to be enabled or disabled.
`samplingRate` is used to enable or disable tracing. To disable tracing, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1, inclusive. The sampling rate determines whether a trace span should be sampled or not based on this value: `samplingRate: "1"` samples all traces. By default, the sampling rate is 1 in 10,000.
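As an illustrative sketch of how a rate-based sampler uses this value (not Dapr's exact implementation), each span is kept when a uniform random draw falls below the configured rate:

```python
def should_sample(sampling_rate, draw):
    """Rate-based sampling decision: a uniform draw in [0, 1) below the
    configured rate means the span is sampled (illustrative sketch)."""
    return draw < float(sampling_rate)

# samplingRate "1" samples everything; "0" disables sampling entirely.
assert should_sample("1", 0.9999)
assert not should_sample("0", 0.0)
```

With the 1-in-10,000 default (rate 0.0001), roughly one trace per ten thousand requests is kept.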
## References
* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md)