Merge branch 'master' into hugo-docs

This commit is contained in:
Aaron Crawfis 2020-10-21 13:43:57 -07:00
commit 190527b34c
16 changed files with 539 additions and 259 deletions


@ -19,3 +19,13 @@ assignees: ''
## Steps to Reproduce the Problem
<!-- How can a maintainer reproduce this issue (be detailed) -->
## Release Note
<!-- How should the fix for this issue be communicated in our release notes? It can be populated later. -->
<!-- Keep it as a single line. Examples: -->
<!-- RELEASE NOTE: **ADD** New feature in Dapr. -->
<!-- RELEASE NOTE: **FIX** Bug in runtime. -->
<!-- RELEASE NOTE: **UPDATE** Runtime dependency. -->
RELEASE NOTE:


@ -7,3 +7,13 @@ assignees: ''
---
## Describe the feature
## Release Note
<!-- How should this new feature be announced in our release notes? It can be populated later. -->
<!-- Keep it as a single line. Examples: -->
<!-- RELEASE NOTE: **ADD** New feature in Dapr. -->
<!-- RELEASE NOTE: **FIX** Bug in runtime. -->
<!-- RELEASE NOTE: **UPDATE** Runtime dependency. -->
RELEASE NOTE:

.github/workflows/stale-pr-monitor.yml

@ -0,0 +1,21 @@
# ------------------------------------------------------------
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# ------------------------------------------------------------
name: "Stale PR monitor"
on:
schedule:
- cron: "0 0 * * *"
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: 'Stale PR, paging all reviewers'
stale-pr-label: 'stale'
exempt-pr-labels: 'question,"help wanted",do-not-merge'
days-before-stale: 5


@ -16,9 +16,7 @@ Dapr can be used alongside any service mesh such as Istio and Linkerd. A service
That is where Dapr comes in. Dapr is a language agnostic programming model built on HTTP and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app's runtime. Both service meshes and Dapr run as sidecar services to your application, one providing network features and the other distributed application capabilities.
Watch this video on how Dapr and service meshes work together:
<iframe width="560" height="315" src="https://www.youtube.com/embed/xxU68ewRmz8?start=141" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Watch this [video](https://www.youtube.com/watch?v=xxU68ewRmz8&feature=youtu.be&t=140) on how Dapr and service meshes work together.
### Understanding how Dapr interoperates with the service mesh interface (SMI)
@ -26,21 +24,19 @@ SMI is an abstraction layer that provides a common API surface across different
### Differences between Dapr, Istio and Linkerd
Read [how does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer 7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on them.
Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer 7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on them.
Istio is not a programming model and does not focus on application level features such as state management, pub-sub, bindings etc. That is where Dapr comes in.
## Performance Benchmarks
The Dapr project is focused on performance because Dapr runs as a sidecar to your application. The performance benchmark data is planned to be published on a regular basis. You can also run the perf tests in your own environment to get performance numbers. This performance benchmark video discusses and demos the work that has been done so far.
<iframe width="560" height="315" src="https://www.youtube.com/embed/4kV3SHs1j2k?start=790" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
The Dapr project is focused on performance because Dapr runs as a sidecar to your application. This [performance benchmark video](https://youtu.be/4kV3SHs1j2k?t=783) discusses and demos the work that has been done so far. The performance benchmark data is planned to be published on a regular basis. You can also run the perf tests in your own environment to get performance numbers.
## Actors
### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors?
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments.
Dapr is also about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview]({{<ref "overview.md">}}).
Dapr is also about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview](https://github.com/dapr/docs/blob/master/overview/README.md).
### Differences between Dapr and an actor framework


@ -65,7 +65,7 @@ spec:
## References
- [How-To: Set up Application Insights for distributed tracing]({{< ref azure-monitor.md >}})
- [How-To: Setup Application Insights for distributed tracing with Local Forwarder]({{< ref local-forwarder.md >}})
- [How-To: Setup Application Insights for distributed tracing with OpenTelemetry Collector]({{< ref open-telemetry-collector.md >}})
- [How-To: Set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
- [W3C distributed tracing]({{< ref w3c-tracing >}})


@ -291,7 +291,8 @@ You can now correlate the calls in your app and across services with Dapr using
* [Observability concepts]({{< ref observability-concept.md >}})
* [W3C Trace Context for distributed tracing]({{< ref w3c-tracing >}})
* [How to set up Application Insights for distributed tracing]({{< ref azure-monitor.md >}})
* [How To set up Application Insights for distributed tracing with local forwarder]({{< ref local-forwarder.md >}})
* [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
* [How to set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
* [W3C trace context specification](https://www.w3.org/TR/trace-context/)
* [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/observability)


@ -105,7 +105,8 @@ The tracestate fields are detailed [here](https://www.w3.org/TR/trace-context/#t
In the gRPC API calls, trace context is passed through `grpc-trace-bin` header.
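As an illustration (hypothetical helper code, not part of Dapr), a `traceparent` header value in the W3C format `version-traceid-parentid-traceflags` can be decomposed like this:

```python
# Minimal sketch of parsing a W3C traceparent header value.
# Format: version (2 hex) - trace-id (32 hex) - parent-id (16 hex) - trace-flags (2 hex).
# Illustrative only; Dapr performs this handling inside the runtime.

def parse_traceparent(header: str) -> dict:
    version, trace_id, parent_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent header")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        # Least-significant bit of trace-flags is the "sampled" flag.
        "sampled": bool(int(flags, 16) & 0x01),
    }

# Example value from the W3C Trace Context specification.
ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```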
## Related Links
* [How To set up Application Insights for distributed tracing]({{< ref azure-monitor.md >}})
* [How To set up Application Insights for distributed tracing with local forwarder]({{< ref local-forwarder.md >}})
* [How To set up Application Insights for distributed tracing with OpenTelemetry]({{< ref open-telemetry-collector.md >}})
* [How To set up Zipkin for distributed tracing]({{< ref zipkin.md >}})
* [W3C trace context specification](https://www.w3.org/TR/trace-context/)
* [Observability sample](https://github.com/dapr/quickstarts/tree/master/observability)


@ -12,6 +12,7 @@ Every binding has its own unique set of properties. Click the name link to see t
| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [APNs]({{< ref apns.md >}}) | | ✅ | Experimental |
| [Cron (Scheduler)]({{< ref cron.md >}}) | ✅ | ✅ | Experimental |
| [HTTP]({{< ref http.md >}}) | | ✅ | Experimental |
| [InfluxDB]({{< ref influxdb.md >}}) | | ✅ | Experimental |


@ -0,0 +1,73 @@
---
type: docs
title: "Apple Push Notification Service binding spec"
linkTitle: "Apple Push Notification Service"
description: "Detailed documentation on the Apple Push Notification Service binding component"
---
## Configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: bindings.apns
metadata:
- name: development
value: <true | false>
- name: key-id
value: <APPLE_KEY_ID>
- name: team-id
value: <APPLE_TEAM_ID>
- name: private-key
secretKeyRef:
name: <SECRET>
key: <SECRET-KEY-NAME>
```
- `development` tells the binding which APNs service to use. Set to `true` to use the development service or `false` to use the production service. If not specified, the binding defaults to the production service.
- `key-id` is the identifier for the private key from the Apple Developer Portal.
- `team-id` is the identifier for the organization or author from the Apple Developer Portal.
- `private-key` is a PKCS #8-formatted private key. It is intended that the private key is stored in the secret store and not exposed directly in the configuration.
## Request Format
```json
{
"data": {
"aps": {
"alert": {
"title": "New Updates!",
"body": "There are new updates for your review"
}
}
},
"metadata": {
"device-token": "PUT-DEVICE-TOKEN-HERE",
"apns-push-type": "alert",
"apns-priority": "10",
"apns-topic": "com.example.helloworld"
},
"operation": "create"
}
```
The `data` object contains a complete push notification specification as described in the [Apple documentation](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/generating_a_remote_notification). The `data` object will be sent directly to the APNs service.
Besides the `device-token` value, the HTTP headers specified in the [Apple documentation](https://developer.apple.com/documentation/usernotifications/setting_up_a_remote_notification_server/sending_notification_requests_to_apns) can be sent as metadata fields and will be included in the HTTP request to the APNs service.
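As a sketch of assembling such a request programmatically (a hypothetical helper using only the Python standard library; the field names follow the request format shown above):

```python
import json

# Hypothetical helper that builds the APNs output binding request body:
# a "data" object with the APS payload, plus APNs HTTP headers passed
# as metadata fields, and the "create" operation.

def build_apns_request(device_token: str, title: str, body: str, topic: str) -> str:
    request = {
        "data": {
            "aps": {"alert": {"title": title, "body": body}}
        },
        "metadata": {
            "device-token": device_token,
            "apns-push-type": "alert",
            "apns-priority": "10",
            "apns-topic": topic,
        },
        "operation": "create",
    }
    return json.dumps(request)

payload = build_apns_request("example-token", "New Updates!",
                             "There are new updates for your review",
                             "com.example.helloworld")
```

The resulting JSON would then be POSTed to the binding through the Dapr HTTP API (by default `http://localhost:3500/v1.0/bindings/<NAME>`).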
## Response Format
```json
{
"messageID": "UNIQUE-ID-FOR-NOTIFICATION"
}
```
## Output Binding Supported Operations
* `create`


@ -1,245 +0,0 @@
---
type: docs
title: "How-To: Set up Application Insights for distributed tracing"
linkTitle: "Application Insights"
weight: 3000
description: "Enable Application Insights to visualize Dapr tracing and application map"
type: docs
---
Dapr integrates with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).
> Note: The Local Forwarder is still in preview but is being deprecated. The Application Insights team recommends using the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) (currently in alpha) going forward, so we're working on moving from the Local Forwarder to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
## How to configure distributed tracing with Application Insights
The following steps show you how to configure Dapr to send distributed tracing data to Application Insights.
### Setup Application Insights
1. First, you'll need an Azure account. Please see instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights Instrumentation Key from your Application Insights page
4. On the Application Insights side menu, go to `Configure -> API Access`
5. Click `Create API Key`
6. Select all checkboxes under `Choose what this API key will allow apps to do:`
- Read telemetry
- Write annotations
- Authenticate SDK control channel
7. Generate Key and get API key
### Setup the Local Forwarder
{{< tabs "Self-Hosted" Kubernetes>}}
{{% codetab %}}
1. Run the local forwarder
```bash
docker run -e APPINSIGHTS_INSTRUMENTATIONKEY=<Your Instrumentation Key> -e APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY=<Your API Key> -d -p 55678:55678 daprio/dapr-localforwarder:latest
```
{{% alert title="Note" color="primary" %}}
[dapr-localforwarder](https://github.com/dapr/ApplicationInsights-LocalForwarder) is a forked version of the [ApplicationInsights Local Forwarder](https://github.com/microsoft/ApplicationInsights-LocalForwarder/) that includes minor changes for Dapr. We're working on migrating to the [OpenTelemetry SDK and OpenTelemetry Collector](https://opentelemetry.io/).
{{% /alert %}}
2. Create the following YAML files. Copy the native.yaml component file and tracing.yaml configuration file to the *components/* sub-folder under the same folder where you run your application.
- native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "localhost:55678"
```
- tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
3. When running in local self-hosted mode, you need to launch Dapr with the `--config` parameter:
```bash
dapr run --app-id mynode --app-port 3000 --config ./components/tracing.yaml node app.js
```
{{% /codetab %}}
{{% codetab %}}
1. Create a file named `dapr-localforwarder.yaml` with the following contents:
```yaml
kind: Service
apiVersion: v1
metadata:
name: dapr-localforwarder
namespace: default
labels:
app: dapr-localforwarder
spec:
selector:
app: dapr-localforwarder
ports:
- protocol: TCP
port: 55678
targetPort: 55678
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dapr-localforwarder
namespace: default
labels:
app: dapr-localforwarder
spec:
replicas: 3 # Adjust replica # based on your telemetry volume
selector:
matchLabels:
app: dapr-localforwarder
template:
metadata:
labels:
app: dapr-localforwarder
spec:
containers:
- name: dapr-localforwarder
image: docker.io/daprio/dapr-localforwarder:latest
ports:
- containerPort: 55678
imagePullPolicy: Always
env:
- name: APPINSIGHTS_INSTRUMENTATIONKEY
value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
- name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
value: <APPINSIGHT API KEY> # Replace with your generated api key
```
2. Replace `<APPINSIGHT INSTRUMENTATIONKEY>` with your Instrumentation Key and `<APPINSIGHT API KEY>` with the generated key in the file
```yaml
- name: APPINSIGHTS_INSTRUMENTATIONKEY
value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
- name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
value: <APPINSIGHT API KEY> # Replace with your generated api key
```
3. Deploy dapr-localforwarder.yaml
```bash
kubectl apply -f ./dapr-localforwarder.yaml
```
4. Create the following YAML files
- native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "<Local forwarder address, e.g. dapr-localforwarder.default.svc.cluster.local:55678>"
```
- tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
5. Use kubectl to apply the above CRD files:
```bash
kubectl apply -f tracing.yaml
kubectl apply -f native.yaml
```
6. Deploy your app with tracing
When running in Kubernetes mode, apply the configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
...
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "calculator-front-end"
dapr.io/app-port: "8080"
dapr.io/config: "tracing"
```
{{% /codetab %}}
{{< /tabs >}}
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
{{% alert title="Note" color="primary" %}}
You can register multiple exporters at the same time, and the tracing logs are forwarded to all registered exporters.
{{% /alert %}}
Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
![Application map](/images/azure-monitor.png)
> **NOTE**: Only operations going through Dapr API exposed by Dapr sidecar (e.g. service invocation or event publishing) will be displayed in Application Map topology. Direct service invocations (not going through the Dapr API) will not be shown.
## Tracing configuration
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
samplingRate: "1"
```
The following table lists the different properties.
Property | Type | Description
---- | ------- | -----------
samplingRate | string | Set the sampling rate to enable or disable tracing.
`samplingRate` is used to enable or disable tracing. To disable sampling, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on this value; `samplingRate: "1"` always samples the traces. By default, the sampling rate is 1 in 10,000.
## References
* [How-To: Use W3C Trace Context for distributed tracing]({{< ref w3c-tracing-howto >}})


@ -0,0 +1,191 @@
---
type: docs
title: "Set up Application Insights for distributed tracing"
linkTitle: "Local Forwarder"
weight: 1000
description: "Integrate with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the Local Forwarder"
---
Dapr integrates with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).
> Note: The Local Forwarder is still in preview but is being deprecated. The Application Insights team recommends using the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) (currently in alpha) going forward, so we're working on moving from the Local Forwarder to the [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector).
## How to configure distributed tracing with Application Insights
The following steps show you how to configure Dapr to send distributed tracing data to Application Insights.
### Setup Application Insights
1. First, you'll need an Azure account. Please see instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights Instrumentation Key from your Application Insights page
4. On the Application Insights side menu, go to `Configure -> API Access`
5. Click `Create API Key`
6. Select all checkboxes under `Choose what this API key will allow apps to do:`
- Read telemetry
- Write annotations
- Authenticate SDK control channel
7. Generate Key and get API key
### Setup the Local Forwarder
#### Self hosted environment
This is for running the local forwarder on your machine.
1. Run the local forwarder
```bash
docker run -e APPINSIGHTS_INSTRUMENTATIONKEY=<Your Instrumentation Key> -e APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY=<Your API Key> -d -p 55678:55678 daprio/dapr-localforwarder:latest
```
> Note: [dapr-localforwarder](https://github.com/dapr/ApplicationInsights-LocalForwarder) is a forked version of the [ApplicationInsights Local Forwarder](https://github.com/microsoft/ApplicationInsights-LocalForwarder/) that includes minor changes for Dapr. We're working on migrating to the [OpenTelemetry SDK and OpenTelemetry Collector](https://opentelemetry.io/).
2. Create the following YAML files. Copy the native.yaml component file and tracing.yaml configuration file to the *components/* sub-folder under the same folder where you run your application.
* native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "localhost:55678"
```
* tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
3. When running in local self-hosted mode, you need to launch Dapr with the `--config` parameter:
```bash
dapr run --app-id mynode --app-port 3000 --config ./components/tracing.yaml node app.js
```
#### Kubernetes environment
1. Download [dapr-localforwarder.yaml](./localforwarder/dapr-localforwarder.yaml)
2. Replace `<APPINSIGHT INSTRUMENTATIONKEY>` with your Instrumentation Key and `<APPINSIGHT API KEY>` with the generated key in the file
```yaml
- name: APPINSIGHTS_INSTRUMENTATIONKEY
value: <APPINSIGHT INSTRUMENTATIONKEY> # Replace with your ikey
- name: APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY
value: <APPINSIGHT API KEY> # Replace with your generated api key
```
3. Deploy dapr-localforwarder.yaml
```bash
kubectl apply -f ./dapr-localforwarder.yaml
```
4. Create the following YAML files
* native.yaml component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "<Local forwarder address, e.g. dapr-localforwarder.default.svc.cluster.local:55678>"
```
* tracing.yaml configuration
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
tracing:
samplingRate: "1"
```
5. Use kubectl to apply the above CRD files:
```bash
kubectl apply -f tracing.yaml
kubectl apply -f native.yaml
```
6. Deploy your app with tracing
When running in Kubernetes mode, apply the configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
...
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "calculator-front-end"
dapr.io/app-port: "8080"
dapr.io/config: "tracing"
```
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
> **NOTE**: You can register multiple exporters at the same time, and the tracing logs are forwarded to all registered exporters.
Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
![Application map](/images/azure-monitor.png)
> **NOTE**: Only operations going through Dapr API exposed by Dapr sidecar (e.g. service invocation or event publishing) will be displayed in Application Map topology. Direct service invocations (not going through the Dapr API) will not be shown.
## Tracing configuration
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
samplingRate: "1"
```
The following table lists the different properties.
Property | Type | Description
---- | ------- | -----------
samplingRate | string | Set the sampling rate to enable or disable tracing.
`samplingRate` is used to enable or disable tracing. To disable sampling, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on this value; `samplingRate: "1"` always samples the traces. By default, the sampling rate is 1 in 10,000.
## References
* [How-To: Use W3C Trace Context for distributed tracing](../../howto/use-w3c-tracecontext/README.md)


@ -0,0 +1,91 @@
---
type: docs
title: "Using OpenTelemetry Collector to collect traces"
linkTitle: "OpenTelemetry"
weight: 1000
description: "How to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector."
---
Dapr can integrate with [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector) using the OpenCensus API. This guide walks through an example to use Dapr to push trace events to Azure Application Insights, through the OpenTelemetry Collector.
## Requirements
An installation of Dapr on Kubernetes.
## How to configure distributed tracing with Application Insights
### Setup Application Insights
1. First, you'll need an Azure account. See instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights Instrumentation Key from your Application Insights page.
### Run OpenTelemetry Collector to push to your Application Insights instance
First, save your Application Insights Instrumentation Key in an environment variable
```
export APP_INSIGHTS_KEY=<your-app-insight-key>
```
Next, install the OpenTelemetry Collector to your Kubernetes cluster to push events to your Application Insights instance
1. Check out the file [open-telemetry-collector.yaml](/docs/open-telemetry-collector/open-telemetry-collector.yaml) and replace the `<INSTRUMENTATION-KEY>` placeholder with your `APP_INSIGHTS_KEY`.
2. Apply the configuration with `kubectl apply -f open-telemetry-collector.yaml`.
Next, set up both a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.
1. Create a collector-component.yaml file with this [content](/docs/open-telemetry-collector/collector-component.yaml)
2. Apply the configuration with `kubectl apply -f collector-component.yaml`.
### Deploy your app with tracing
When running in Kubernetes mode, apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
metadata:
...
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "MyApp"
dapr.io/app-port: "8080"
dapr.io/config: "appconfig"
```
Some of the quickstarts, such as the [distributed calculator](https://github.com/dapr/quickstarts/tree/master/distributed-calculator), already configure these settings, so if you are using those no additional settings are needed.
That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.
> **NOTE**: You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.
Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:
![Application map](/images/open-telemetry-app-insights.png)
> **NOTE**: Only operations going through Dapr API exposed by Dapr sidecar (e.g. service invocation or event publishing) are displayed in Application Map topology.
## Tracing configuration
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
samplingRate: "1"
```
The following table lists the different properties.
| Property | Type | Description
|-------------- | ------ | -----------
| samplingRate | string | Set the sampling rate to enable or disable tracing.
`samplingRate` is used to enable or disable tracing. To disable sampling, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on this value; `samplingRate: "1"` always samples the traces. By default, the sampling rate is 1 in 10,000.
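The sampling behavior described above can be sketched as follows (an illustrative model only, not Dapr's actual implementation; Dapr makes this decision inside the runtime):

```python
import random

# Illustrative model of probability sampling driven by samplingRate:
# a span is sampled when a uniform random draw falls below the rate.

def should_sample(sampling_rate: str, draw: float) -> bool:
    rate = float(sampling_rate)
    if not 0.0 <= rate <= 1.0:
        raise ValueError("samplingRate must be between 0 and 1 inclusive")
    return draw < rate

# samplingRate "0" disables tracing; "1" samples every span.
always = all(should_sample("1", random.random()) for _ in range(100))
never = any(should_sample("0", random.random()) for _ in range(100))
```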


@ -101,7 +101,8 @@ path | route path from the subscription configuration
#### Expected HTTP Response
An HTTP 200 response with JSON encoded payload body with the processing status:
An HTTP 2xx response denotes successful processing of a message.
For richer response handling, a JSON-encoded payload body with the processing status can be sent:
```json
{
  "status": "<SUCCESS|RETRY|DROP>"
}
```
@ -114,8 +115,9 @@ Status | Description
SUCCESS | message is processed successfully
RETRY | message to be retried by Dapr
DROP | warning is logged and message is dropped
Others | error, message to be retried by Dapr
For empty payload responses with HTTP 2xx, Dapr assumes `SUCCESS`.
Dapr treats a JSON-encoded payload response without a `status` field, or an empty payload response with HTTP 2xx, as `SUCCESS`.
If the HTTP response code is not 2xx, Dapr's behavior depends on the HTTP status, as follows:
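The 2xx rules above can be sketched as a decision helper (hypothetical code mirroring the documented behavior, not Dapr's implementation):

```python
import json

# Hypothetical helper mirroring the documented 2xx handling: an empty
# payload or a JSON payload without a "status" field means SUCCESS;
# a recognized status is honored; any other value is an error and the
# message is retried.

def resolve_2xx_action(body: str) -> str:
    if not body:
        return "SUCCESS"
    status = json.loads(body).get("status")
    if status is None:
        return "SUCCESS"
    if status in ("SUCCESS", "RETRY", "DROP"):
        return status
    return "RETRY"  # unrecognized status values are treated as errors
```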


@ -0,0 +1,21 @@
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: appconfig
namespace: default
spec:
tracing:
samplingRate: "1"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: native
namespace: default
spec:
type: exporters.native
metadata:
- name: enabled
value: "true"
- name: agentEndpoint
value: "otel-collector.default.svc.cluster.local:55678"


@ -0,0 +1,107 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-collector-conf
labels:
app: opentelemetry
component: otel-collector-conf
data:
otel-collector-config: |
receivers:
opencensus:
endpoint: 0.0.0.0:55678
processors:
queued_retry:
batch:
extensions:
health_check:
pprof:
endpoint: :1888
zpages:
endpoint: :55679
exporters:
azuremonitor:
azuremonitor/2:
endpoint: "https://dc.services.visualstudio.com/v2/track"
instrumentation_key: "<INSTRUMENTATION-KEY>"
# maxbatchsize is the maximum number of items that can be
# queued before calling to the configured endpoint
maxbatchsize: 100
# maxbatchinterval is the maximum time to wait before calling
# the configured endpoint.
maxbatchinterval: 10s
service:
extensions: [pprof, zpages, health_check]
pipelines:
traces:
receivers: [opencensus]
exporters: [azuremonitor/2]
processors: [batch, queued_retry]
---
apiVersion: v1
kind: Service
metadata:
name: otel-collector
labels:
app: opencensus
component: otel-collector
spec:
ports:
- name: opencensus # Default endpoint for Opencensus receiver.
port: 55678
protocol: TCP
targetPort: 55678
selector:
component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: otel-collector
labels:
app: opentelemetry
component: otel-collector
spec:
replicas: 1 # scale out based on your usage
selector:
matchLabels:
app: opentelemetry
template:
metadata:
labels:
app: opentelemetry
component: otel-collector
spec:
containers:
- name: otel-collector
image: otel/opentelemetry-collector-contrib-dev:latest
command:
- "/otelcontribcol"
- "--config=/conf/otel-collector-config.yaml"
resources:
limits:
cpu: 1
memory: 2Gi
requests:
cpu: 200m
memory: 400Mi
ports:
- containerPort: 55678 # Default endpoint for Opencensus receiver.
volumeMounts:
- name: otel-collector-config-vol
mountPath: /conf
livenessProbe:
httpGet:
path: /
port: 13133
readinessProbe:
httpGet:
path: /
port: 13133
volumes:
- configMap:
name: otel-collector-conf
items:
- key: otel-collector-config
path: otel-collector-config.yaml
name: otel-collector-config-vol

Binary file not shown.
