mirror of https://github.com/dapr/docs.git
Merge branch 'use-cases' of github.com:arschles/docs into use-cases
commit 566a35b192
@@ -29,6 +29,8 @@ Dapr is currently under community development in preview phase and master branch

| Version | Repo |
|:-------:|:----:|
| v0.7.0 | [Docs](https://github.com/dapr/docs/tree/v0.7.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.7.0) - [CLI](https://github.com/dapr/cli/tree/release-0.7) |
| v0.6.0 | [Docs](https://github.com/dapr/docs/tree/v0.6.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.6.0) - [CLI](https://github.com/dapr/cli/tree/release-0.6) |
| v0.5.0 | [Docs](https://github.com/dapr/docs/tree/v0.5.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.5.0) - [CLI](https://github.com/dapr/cli/tree/release-0.5) |
| v0.4.0 | [Docs](https://github.com/dapr/docs/tree/v0.4.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.4.0) - [CLI](https://github.com/dapr/cli/tree/release-0.4) |
| v0.3.0 | [Docs](https://github.com/dapr/docs/tree/v0.3.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.3.0) - [CLI](https://github.com/dapr/cli/tree/release-0.3) |
@@ -9,11 +9,12 @@ First, check your Deployment or Pod YAML file, and check that you have the follo

Sample deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  namespace: default
  labels:
    app: node
spec:
```

@@ -36,7 +37,7 @@ spec:

```yaml
        ports:
        - containerPort: 3000
        imagePullPolicy: Always
```

If your pod spec template is annotated correctly and you still don't see the sidecar injected, make sure Dapr was deployed to the cluster before your deployment or pod were deployed.
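One quick way to verify this is to check that the Dapr control plane pods are already running. This is a sketch that assumes Dapr was installed into the default `dapr-system` namespace:

```bash
# all control plane pods should be in the Running state
kubectl get pods --namespace dapr-system
```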
@@ -51,7 +51,7 @@ dapr run node myapp.js

```
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="starting Dapr Runtime -- version 0.3.0-alpha -- commit b6f2810-dirty"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="log level set to: info"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="standalone mode configured"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="dapr id: Trackgreat-Lancer"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="app id: Trackgreat-Lancer"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="loaded component statestore (state.redis)"
== DAPR == time="2019-09-05T12:26:43-07:00" level=info msg="loaded component messagebus (pubsub.redis)"
== DAPR == 2019/09/05 12:26:43 redis: connecting to localhost:6379
```
@@ -123,7 +123,7 @@ kubectl logs addapp-74b57fb78c-67zm6 -c daprd

```
time="2019-09-04T02:52:27Z" level=info msg="starting Dapr Runtime -- version 0.3.0-alpha -- commit b6f2810-dirty"
time="2019-09-04T02:52:27Z" level=info msg="log level set to: info"
time="2019-09-04T02:52:27Z" level=info msg="kubernetes mode configured"
time="2019-09-04T02:52:27Z" level=info msg="dapr id: addapp"
time="2019-09-04T02:52:27Z" level=info msg="app id: addapp"
time="2019-09-04T02:52:27Z" level=info msg="application protocol: http. waiting on port 6000"
time="2019-09-04T02:52:27Z" level=info msg="application discovered on port 6000"
time="2019-09-04T02:52:27Z" level=info msg="actor runtime started. actor idle timeout: 1h0m0s. actor scan interval: 30s"
```
@@ -30,6 +30,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: zipkin
  namespace: default
spec:
  tracing:
    enabled: true
```
@@ -74,11 +75,12 @@ For Standalone mode, create a Dapr configuration file locally and reference it w

1. Create the following YAML file:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: zipkin
  namespace: default
spec:
  tracing:
    enabled: true
```
@@ -27,6 +27,7 @@ Every binding has its own unique set of properties. Click the name link to see t

| [RabbitMQ](../../reference/specs/bindings/rabbitmq.md) | ✅ | ✅ | Experimental |
| [Redis](../../reference/specs/bindings/redis.md) | | ✅ | Experimental |
| [Twilio](../../reference/specs/bindings/twilio.md) | | ✅ | Experimental |
| [SendGrid](../../reference/specs/bindings/sendgrid.md) | | ✅ | Experimental |

### Amazon Web Services (AWS)
@@ -1,8 +1,142 @@

# Configurations

Dapr configurations are settings that enable you to change the behavior of individual Dapr sidecars, or to apply global settings to the system services in the Dapr control plane.

A Dapr configuration is a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). A Dapr sidecar can apply a configuration by using a `dapr.io/config` annotation (in Kubernetes mode) or by passing the `--config` flag to `dapr run` (in self hosted mode).
An example of a per-sidecar setting is configuring trace settings. An example of a control plane setting is mutual TLS, which is a global setting on the Sentry system service.

A Dapr configuration configures:
- [Self hosted sidecar configuration](#self-hosted-sidecar-configuration)
- [Kubernetes sidecar configuration](#kubernetes-sidecar-configuration)
- [Sidecar configuration settings](#sidecar-configuration-settings)
- [Kubernetes control plane configuration](#kubernetes-control-plane-configuration)
- [Control plane configuration settings](#control-plane-configuration-settings)

## Self hosted sidecar configuration

In self hosted mode the Dapr configuration is a configuration file, for example `myappconfig.yaml`. By default the Dapr sidecar looks for a configuration file in the `components/` sub-folder under the folder where you run your application.

A Dapr sidecar can also apply a specific configuration by passing the `--config` flag with a file path to the `dapr run` CLI command.
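For example, launching an application with a specific configuration file might look like this (a sketch; the app ID, port and `app.js` are placeholders for your own application):

```bash
dapr run --app-id myapp --app-port 3000 --config ./myappconfig.yaml node app.js
```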
## Kubernetes sidecar configuration

In Kubernetes mode the Dapr configuration is a Configuration CRD that is applied to the cluster. For example:

```bash
kubectl apply -f myappconfig.yaml
```

You can use the Dapr CLI to list the Configuration CRDs:

```bash
dapr configurations -k
```

A Dapr sidecar can apply a specific configuration by using a `dapr.io/config` annotation. For example:

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/id: "nodeapp"
  dapr.io/port: "3000"
  dapr.io/config: "myappconfig"
```

Note: There are more [Kubernetes annotations](../../howto/configure-k8s/readme.md) available to configure the Dapr sidecar on activation by the Sidecar Injector system service.
## Sidecar configuration settings

The following configuration settings can be applied to Dapr sidecars:

* [Observability distributed tracing](../observability/traces.md)
* [Middleware pipelines](../middleware/README.md)

### Tracing configuration

The `tracing` section under the `Configuration` spec contains the following properties:

```yaml
tracing:
  enabled: true
  expandParams: true
  includeBody: true
```

The following table lists the different properties.

Property | Type | Description
---- | ------- | -----------
enabled | bool | Set tracing to be enabled or disabled
expandParams | bool | When true, expands parameters passed to HTTP endpoints
includeBody | bool | When true, includes the request body in the tracing event
### Middleware configuration

The `middleware` section under the `Configuration` spec contains the following properties:

```yaml
httpPipeline:
  handlers:
  - name: oauth2
    type: middleware.http.oauth2
  - name: uppercase
    type: middleware.http.uppercase
```

The following table lists the different properties.

Property | Type | Description
---- | ------- | -----------
name | string | Name of the middleware component
type | string | Type of the middleware component
Example sidecar configuration:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
  namespace: default
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2
```
## Kubernetes control plane configuration

There is a single configuration file called `default` installed with the control plane system services that applies global settings.
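To inspect this global configuration on a running cluster, you can query the Configuration CRD directly. This is a sketch that assumes the CRD is registered under the `dapr.io` group and that the configuration lives in the `default` namespace:

```bash
# dump the global "default" configuration as YAML
kubectl get configurations.dapr.io default --namespace default -o yaml
```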
## Control plane configuration settings

A Dapr control plane configuration can configure the following settings:

* [Mutual TLS](../../howto/configure-mtls/readme.md). Also see [security concepts](../security/readme.md)

Property | Type | Description
---- | ------- | -----------
enabled | bool | Set mTLS to be enabled or disabled
allowedClockSkew | string | The extra time to give for certificate expiry based on possible clock skew on a machine. Default is 15 minutes.
workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours.

Example control plane configuration:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: default
  namespace: default
spec:
  mtls:
    enabled: true
    allowedClockSkew: 15m
    workloadCertTTL: 24h
```
## References

* [Distributed tracing](../observability/traces.md)
* [Middleware pipelines](../middleware/README.md)
* [Security](../security/readme.md)
* [How-To: Configuring the Dapr sidecar on Kubernetes](../../howto/configure-k8s/readme.md)
@@ -1,34 +1,35 @@

# Middleware pipeline

Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. A request goes through all defined middleware components before it's routed to user code, and then goes through the defined middleware, in reverse order, before it's returned to the client, as shown in the following diagram.

![](../../images/middleware.png)

## Customize processing pipeline
When launched, a Dapr sidecar constructs a middleware processing pipeline. By default the pipeline consists of [tracing middleware](../observability/traces.md) and CORS middleware. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, security and others.

> **NOTE:** Dapr provides a **middleware.http.uppercase** pre-registered component that changes all text in a request body to uppercase. You can use it to test/verify if your custom pipeline is in place.

The following configuration example defines a custom pipeline that uses an [OAuth 2.0 middleware](../../howto/authorization-with-oauth/README.md) and an uppercase middleware component. In this case, all requests are authorized through the OAuth 2.0 protocol, and transformed to uppercase text, before they are forwarded to user code.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
    - name: oauth2
      type: middleware.http.oauth2
    - name: uppercase
      type: middleware.http.uppercase
```

> **NOTE:** In future versions, a middleware can be conditionally applied by matching selectors.
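As a quick way to see the pipeline in action, you can call any Dapr endpoint through the sidecar and observe the uppercase transformation. This is a sketch; it assumes an application with app ID `myapp` exposing an `echo` method that returns the request body it receives, and the default Dapr HTTP port 3500:

```bash
# with the uppercase middleware in place, the app receives "HELLO DAPR"
curl -X POST http://localhost:3500/v1.0/invoke/myapp/method/echo -d 'hello dapr'
```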
## Writing a custom middleware

Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns a **fasthttp.RequestHandler**:

```go
type Middleware interface {
	GetHandler(metadata Metadata) fasthttp.RequestHandler
}
```

@@ -50,4 +51,8 @@ func GetHandler(metadata Metadata) fasthttp.RequestHandler {

```go
}
```

## Submitting middleware components

Your middleware component can be contributed to the https://github.com/dapr/components-contrib repository, under the */middleware* folder. Then submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go to register your middleware using the **Register** method.
## References

* [How-To: Configure API authorization with OAuth](../../howto/authorization-with-oauth/readme.md)
@@ -5,10 +5,38 @@ Observability is a term from control theory. Observability means you can answer

The observability capabilities enable users to monitor the Dapr system services and their interaction with user applications, and to understand how these monitored services behave. The observability capabilities are divided into the following areas:

* **[Metrics](./metrics.md)**: are the series of measured values and counts that are collected and stored over time. Dapr metrics provide monitoring and understanding of the behavior of Dapr system services and user apps. For example, the service metrics between Dapr sidecars and user apps show call latency, traffic failures, error rates of requests, etc. Dapr system services metrics show sidecar injection failures and the health of the system services, including CPU usage, number of actor placements made, etc.
* **[Logs](./logs.md)**: are records of events that occur and that can be used to determine failures or other status. Log events contain warning, error, info and debug messages produced by Dapr system services. Each log event includes metadata such as message type, hostname, component name, App ID, IP address, etc.
* **[Distributed tracing](./traces.md)**: is used to profile and monitor Dapr system services and user apps. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Distributed tracing is particularly well-suited to debugging and monitoring distributed software architectures, such as microservices.

You can use distributed tracing to help debug and optimize application code. Distributed tracing contains trace spans between the Dapr runtime, Dapr system services, and user apps across process, nodes, network, and security boundaries. It provides a detailed understanding of service invocations (call flows) and service dependencies.

It is generally recommended to run Dapr in production with tracing enabled.

* **[Health](./health.md)**: Dapr provides a way for a hosting platform to determine its health using an HTTP endpoint. With this endpoint, the Dapr process, or sidecar, can be probed to determine its readiness and liveness and action taken accordingly.
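For example, a liveness check against a locally running sidecar might look like this (a sketch assuming the default Dapr HTTP port 3500; a healthy sidecar is expected to answer with a success status code):

```bash
curl -i http://localhost:3500/v1.0/healthz
```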
## OpenTelemetry

Dapr integrates with OpenTelemetry for metrics, logs and tracing. With OpenTelemetry, you can configure various exporters for tracing and metrics based on your environment, whether it is running in the cloud or on-premises.

## Monitoring tools

The observability tools listed below are ones that have been tested to work with Dapr.

### Metrics

* [How-To: Set up Prometheus and Grafana](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md)
* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md)

### Logs

* [How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md)
* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md)

### Distributed Tracing

* [How-To: Set up Zipkin](../../howto/diagnose-with-tracing/zipkin.md)
* [How-To: Set up Application Insights](../../howto/diagnose-with-tracing/azure-monitor.md)

## Implementation Status

The table below shows the current status of each of the observability capabilities for the Dapr runtime and system services. N/A means not applicable.
@@ -16,23 +44,4 @@ The table below shows the current status of each of the observabilty capabilites

|---------|---------|----------|----------|-----------|--------|
|Metrics | Yes | Yes | Yes | Yes | Yes |
|Tracing | Yes | N/A | N/A | *Planned* | N/A |
|Logs | Yes | Yes | Yes | Yes | Yes |
@@ -59,6 +59,7 @@ apiVersion: apps/v1

```yaml
kind: Deployment
metadata:
  name: pythonapp
  namespace: default
  labels:
    app: python
spec:
```
@@ -12,7 +12,7 @@ The default metrics port is `9090`. This can be overridden by passing the comman

Each Dapr system process emits Go runtime/process metrics by default and has its own metrics:

- [Dapr metric list](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md)
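As a sketch, overriding the metrics port when launching the runtime directly could look like the following. The `--metrics-port` flag name is an assumption based on the command-line override described above:

```bash
# serve the sidecar's metrics on a non-default port
daprd --metrics-port 9200
```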
## References
@@ -22,33 +22,37 @@ Dapr adds a HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts

## Correlation ID

Dapr uses the standard W3C Trace Context headers. For HTTP requests, Dapr uses the `traceparent` header. For gRPC requests, Dapr uses the `grpc-trace-bin` header. When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it passes the trace ID along the call chain.
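For illustration, this is what passing an explicit W3C trace context into a Dapr sidecar could look like (a sketch; the trace-id and parent-id values, the app ID and the method name are placeholders):

```bash
# traceparent format: version-traceid-parentid-flags
curl -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" \
  http://localhost:3500/v1.0/invoke/myapp/method/neworder
```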
## Configuration

Dapr uses [probabilistic sampling](https://opencensus.io/tracing/sampling/probabilistic/) as defined by OpenCensus. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).

To change the default tracing behavior, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
    samplingRate: "1"
```

Similarly, changing `samplingRate` to 0 will disable tracing altogether.

See the [References](#references) section for more details on how to configure tracing on local environment and Kubernetes environment.

Dapr supports pluggable exporters, defined by configuration files (in self hosted mode) or a Kubernetes custom resource object (in Kubernetes mode). For example, the following manifest defines a Zipkin exporter:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zipkin
  namespace: default
spec:
  type: exporters.zipkin
  metadata:
```
@@ -60,6 +64,5 @@ spec:

## References

* [How-To: Set up Application Insights for distributed tracing](../../howto/diagnose-with-tracing/azure-monitor.md)
* [How-To: Set up Zipkin for distributed tracing](../../howto/diagnose-with-tracing/zipkin.md)
@@ -38,6 +38,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
```

@@ -54,6 +55,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
```
@@ -81,11 +83,12 @@ kubectl create secret generic eventhubs-secret --from-literal=connectionString=*

Next, reference the secret in your binding:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs
  namespace: default
spec:
  type: bindings.azure.eventhubs
  metadata:
```
@@ -43,6 +43,12 @@ Install the latest darwin Dapr CLI to `/usr/local/bin`

```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```

Or install via [Homebrew](https://brew.sh):

```bash
brew install dapr/tap/dapr-cli
```

### From the Binary Releases

Each release of the Dapr CLI includes binaries for various operating systems and architectures. These binary versions can be manually downloaded and installed.
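Either way, you can verify the installation by printing the CLI version:

```bash
dapr --version
```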
@@ -199,6 +205,10 @@ dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s

```
dapr-sentry-9435776c7f-8f7yd 1/1 Running 0 40s
```

#### Sidecar annotations

To see all the supported annotations for the Dapr sidecar on Kubernetes, visit [this how-to guide](../howto/configure-k8s/README.md).

#### Uninstall Dapr on Kubernetes

Helm 3
@@ -1,43 +1,40 @@

# How Tos

Here you'll find a list of "How To" guides that walk you through accomplishing specific tasks.

## Contents
- [Service invocation](#service-invocation)
- [State management](#state-management)
- [Pub/Sub](#pubsub)
- [Bindings](#bindings-and-triggers)
- [Actors](#actors)
- [Observability](#observability)
- [Security](#security)
- [Middleware](#middleware)
- [Components](#components)
- [Hosting platforms](#hosting-platforms)
- [Developer tooling](#developer-tooling)
- [Infrastructure integration](#infrastructure-integration)
## Service invocation

* [Invoke other services in your cluster or environment](./invoke-and-discover-services)
* [Create a gRPC enabled app, and invoke Dapr over gRPC](./create-grpc-app)
## State Management

* [Setup a state store](./setup-state-store)
* [Create a service that performs stateful CRUD operations](./create-stateful-service)
* [Query the underlying state store](./query-state-store)
* [Create a stateful, replicated service with different consistency/concurrency levels](./stateful-replicated-service)
* [Control your app's throttling using rate limiting features](./control-concurrency)
* [Configuring Redis for state management](./configure-redis)

## Pub/Sub

* [Setup Dapr Pub/Sub](./setup-pub-sub-message-broker)
* [Use Pub/Sub to publish messages to a given topic](./publish-topic)
* [Use Pub/Sub to consume events from a topic](./consume-topic)
* [Use Pub/Sub across multiple namespaces](./pubsub-namespaces)
* [Configuring Redis for pub/sub](./configure-redis)
* [Limit the Pub/Sub topics used or scope them to one or more applications](./pubsub-scopes)
@@ -73,10 +70,20 @@ For Actors How Tos see the SDK documentation

* [Configure component secrets using Dapr secret stores](./setup-secret-store)
* [Using the Secrets API to get application secrets](./get-secrets)

## Middleware

* [Configure API authorization with OAuth](./authorization-with-oauth)

## Components

* [Limit components for one or more applications using scopes](./components-scopes)

## Hosting Platforms

### Kubernetes Configuration

* [Sidecar configuration on Kubernetes](./configure-k8s)
* [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda)

## Developer tooling

### Using Visual Studio Code
@@ -92,7 +99,3 @@ For Actors How Tos see the SDK documentation

### SDKs

* [Serialization in Dapr's SDKs](./serialize)

## Infrastructure integration

* [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda)
@@ -1,4 +1,4 @@

# Configure API authorization with OAuth

Dapr OAuth 2.0 [middleware](../../concepts/middleware/README.md) allows you to enable [OAuth](https://oauth.net/2/) authorization on Dapr endpoints for your web APIs, using the [Authorization Code Grant flow](https://tools.ietf.org/html/rfc6749#section-4.1). When the middleware is enabled, any method invocation through Dapr needs to be authorized before getting passed to the user code.
@@ -40,6 +40,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: oauth2
  namespace: default
spec:
  type: middleware.http.oauth2
  metadata:
```

@@ -68,6 +69,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: pipeline
  namespace: default
spec:
  httpPipeline:
    handlers:
```
@@ -29,6 +29,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: kafkaevent
  namespace: default
spec:
  type: bindings.kafka
  metadata:
```
@@ -0,0 +1,27 @@

# Configuring the Dapr sidecar on Kubernetes

On Kubernetes, Dapr uses a sidecar injector pod that automatically injects the Dapr sidecar container into a pod that has the correct annotations.
The sidecar injector is an implementation of a Kubernetes [Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).

The following table shows all the pod spec annotations supported by Dapr.

| Annotation | Description |
| ----------------------------------- | -------------- |
| `dapr.io/enabled` | Setting this parameter to `true` injects the Dapr sidecar into the pod |
| `dapr.io/port` | This parameter tells Dapr which port your application is listening on |
| `dapr.io/id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `dapr.io/config` | Tells Dapr which Configuration CRD to use |
| `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `dapr.io/profiling` | Setting this parameter to `true` starts the Dapr profiling server on port `7777`. Default is `false` |
| `dapr.io/protocol` | Tells Dapr which protocol your application is using. Valid options are `http` and `grpc`. Default is `http` |
| `dapr.io/max-concurrency` | Limit the concurrency of your application. A valid value is any number larger than `0` |
| `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `dapr.io/sidecar-cpu-limit` | Maximum amount of CPU that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set |
| `dapr.io/sidecar-memory-limit` | Maximum amount of memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set |
| `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set |
| `dapr.io/sidecar-memory-request` | Amount of memory that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set |
| `dapr.io/sidecar-readiness-probe-delay-seconds` | Number of seconds after the sidecar container has started before the readiness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` |
| `dapr.io/sidecar-readiness-probe-timeout-seconds` | Number of seconds after which the sidecar readiness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` |
| `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6` |
| `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3` |
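For instance, a minimal sketch of how a few of these annotations appear in a deployment's pod template (the app name and values are illustrative):

```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/id: "myapp"
  dapr.io/port: "3000"
  dapr.io/log-level: "debug"
```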
@@ -20,6 +20,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: default
  namespace: default
spec:
  mtls:
    enabled: "true"
```

@@ -169,6 +170,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: default
  namespace: default
spec:
  mtls:
    enabled: "true"
```

@@ -195,6 +197,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: default
  namespace: default
spec:
  mtls:
    enabled: "true"
```
@@ -77,6 +77,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
```

@@ -95,6 +96,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: messagebus
  namespace: default
spec:
  type: pubsub.redis
  metadata:
```
@@ -13,11 +13,12 @@ For this guide, we'll use Redis Streams, which is also installed by default on a

*Note: When running Dapr locally, a pub/sub component YAML will automatically be created if it doesn't exist in a directory called `components` in your current working directory.*

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
  namespace: default
spec:
  type: pubsub.redis
  metadata:
```
@@ -14,11 +14,12 @@ Using Dapr, there are no code changes needed to an app.

To set max-concurrency in Kubernetes, add the following annotation to your pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodesubscriber
  namespace: default
  labels:
    app: nodesubscriber
spec:
```

@@ -36,7 +37,7 @@ spec:

```yaml
        dapr.io/port: "3000"
        dapr.io/max-concurrency: "1"
        ...
```

### Setting max-concurrency using the Dapr CLI
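The CLI steps continue in the source document; as a sketch, the self hosted equivalent uses the `dapr run` flag of the same name (the app details are placeholders):

```bash
dapr run --app-id nodesubscriber --app-port 3000 --max-concurrency 1 node app.js
```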
@@ -16,11 +16,12 @@ To do that, the app simply needs to host a gRPC server and implement the [Dapr c

On Kubernetes, set the following annotations in your deployment YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
```

@@ -38,7 +39,7 @@ spec:

```yaml
        dapr.io/protocol: "grpc"
        dapr.io/port: "5005"
        ...
```

This tells Dapr to communicate with your app via gRPC over port `5005`.
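When running in self hosted mode, the equivalent is a sketch like the following (it assumes the CLI's `--protocol` flag and a placeholder app):

```bash
dapr run --app-id myapp --protocol grpc --app-port 5005 node app.js
```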
@@ -1,19 +1,23 @@

# Set up Application Insights for distributed tracing

Dapr integrates with Application Insights through OpenTelemetry's default exporter along with a dedicated agent known as the [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).

> Note: The Local Forwarder is still in preview, but is being deprecated. The Application Insights team recommends using the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector) (which is in alpha state) going forward, so we're working on moving from the Local Forwarder to the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector).

- [How to configure distributed tracing with Application Insights](#how-to-configure-distributed-tracing-with-application-insights)
- [Tracing configuration](#tracing-configuration)

## How to configure distributed tracing with Application Insights

The following steps show you how to configure Dapr to send distributed tracing data to Application Insights.

### Setup Application Insights

1. First, you'll need an Azure account. Please see instructions [here](https://azure.microsoft.com/free/) to apply for a **free** Azure account.
2. Follow instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/create-new-resource) to create a new Application Insights resource.
3. Get the Application Insights instrumentation key from your Application Insights page.
4. On the Application Insights side menu, go to `Configure -> API Access`.
5. Click `Create API Key`.
6. Select all checkboxes under `Choose what this API key will allow apps to do:`
   - Read telemetry
@@ -23,25 +27,27 @@ The following steps will show you how to configure Dapr to send distributed trac

### Setup the Local Forwarder

#### Self hosted environment

This is for running the Local Forwarder on your machine.

1. Run the Local Forwarder:

```bash
docker run -e APPINSIGHTS_INSTRUMENTATIONKEY=<Your Instrumentation Key> -e APPINSIGHTS_LIVEMETRICSSTREAMAUTHENTICATIONAPIKEY=<Your API Key> -d -p 55678:55678 daprio/dapr-localforwarder:latest
```

> Note: [dapr-localforwarder](https://github.com/dapr/ApplicationInsights-LocalForwarder) is a fork of the [ApplicationInsights Local Forwarder](https://github.com/microsoft/ApplicationInsights-LocalForwarder/) that includes minor changes for Dapr. We're working on migrating to the [opentelemetry-sdk and opentelemetry collector](https://opentelemetry.io/).

2. Create the following YAML files. Copy the native.yaml component file and tracing.yaml configuration file to the *components/* sub-folder under the same folder where you run your application.

* native.yaml component

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  metadata:
```
@@ -51,24 +57,23 @@ spec:

```yaml
    value: "localhost:55678"
```

* tracing.yaml configuration

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
    samplingRate: "1"
```

3. When running in self hosted mode, you need to launch Dapr with the `--config` parameter:

```bash
dapr run --app-id mynode --app-port 3000 --config ./components/tracing.yaml node app.js
```
#### Kubernetes environment

@@ -91,13 +96,14 @@ kubectl apply -f ./dapr-localforwarder.yaml

4. Create the following YAML files:

* native.yaml component

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  metadata:
```
@@ -107,18 +113,17 @@ spec:

```yaml
    value: "<Local forwarder address, e.g. dapr-localforwarder.default.svc.cluster.local:55678>"
```

* tracing.yaml configuration

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
    samplingRate: "1"
```

5. Use kubectl to apply the above CRD files:

@@ -129,7 +134,8 @@ kubectl apply -f native.yaml

6. Deploy your app with tracing

When running in Kubernetes mode, apply the configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:

```yaml
apiVersion: apps/v1
```
@@ -148,29 +154,29 @@ spec:

```yaml
        dapr.io/config: "tracing"
```

That's it! There's no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.

> **NOTE**: You can register multiple exporters at the same time, and the tracing logs are forwarded to all registered exporters.

Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your Application Insights resource. You can also use **Application Map** to examine the topology of your services, as shown below:

![Application map](../../images/azure-monitor.png)

## Tracing configuration

The `tracing` section under the `Configuration` spec contains the following properties:

```yaml
tracing:
  enabled: true
  expandParams: true
  includeBody: true
  samplingRate: "1"
```

The following table lists the different properties.

Property | Type | Description
---- | ------- | -----------
enabled | bool | Set tracing to be enabled or disabled
expandParams | bool | When true, expands parameters passed to HTTP endpoints
includeBody | bool | When true, includes the request body in the tracing event
samplingRate | string | Set the sampling rate to enable or disable tracing.

`samplingRate` is used to enable or disable tracing. To disable tracing, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on its value. `samplingRate: "1"` samples all traces. By default, the sampling rate is 1 in 10,000.
@@ -1,8 +0,0 @@

```dockerfile
FROM mcr.microsoft.com/dotnet/core/runtime:2.1
RUN mkdir /lf
WORKDIR /lf
RUN curl -LsO https://github.com/microsoft/ApplicationInsights-LocalForwarder/releases/download/v0.1-beta1/LF-ConsoleHost-linux-x64.tar.gz
RUN tar xzf LF-ConsoleHost-linux-x64.tar.gz
RUN rm -f LF-ConsoleHost-linux-x64.tar.gz
EXPOSE 55678
ENTRYPOINT ["/lf/Microsoft.LocalForwarder.ConsoleHost", "noninteractive"]
```
@@ -3,6 +3,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    enabled: true
```

@@ -13,6 +14,7 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: native
  namespace: default
spec:
  type: exporters.native
  metadata:
```
@@ -2,6 +2,7 @@ kind: Service

```yaml
apiVersion: v1
metadata:
  name: dapr-localforwarder
  namespace: default
  labels:
    app: dapr-localforwarder
spec:
```

@@ -17,6 +18,7 @@ apiVersion: apps/v1

```yaml
kind: Deployment
metadata:
  name: dapr-localforwarder
  namespace: default
  labels:
    app: dapr-localforwarder
spec:
```

@@ -31,7 +33,7 @@ spec:

```yaml
      containers:
      - name: dapr-localforwarder
        image: docker.io/daprio/dapr-localforwarder:latest
        ports:
        - containerPort: 55678
        imagePullPolicy: Always
```
@@ -1,26 +1,15 @@

# Set up Zipkin for distributed tracing

Dapr integrates seamlessly with OpenTelemetry for telemetry and tracing. It is recommended to run Dapr with tracing enabled for any production scenario. Since Dapr uses OpenTelemetry, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises.

- [Configure self hosted mode](#configure-self-hosted-mode)
- [Configure Kubernetes](#configure-kubernetes)
- [Tracing configuration](#tracing-configuration)

## Configure self hosted mode

For self hosted mode, create a Dapr configuration file locally and reference it with the Dapr CLI.

1. Create the following YAML files:

* zipkin.yaml
@@ -29,13 +18,14 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Component
metadata:
  name: zipkin
  namespace: default
spec:
  type: exporters.zipkin
  metadata:
  - name: enabled
    value: "true"
  - name: exporterAddress
    value: "http://localhost:9411/api/v2/spans"
```

* tracing.yaml
@@ -45,14 +35,79 @@ apiVersion: dapr.io/v1alpha1

```yaml
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    enabled: true
    expandParams: true
    includeBody: true
    samplingRate: "1"
```

2. Copy `zipkin.yaml` to a `/components` subfolder under the same folder where you run your application.

3. Launch Zipkin using Docker:

```bash
docker run -d -p 9411:9411 openzipkin/zipkin
```

4. Launch your application with the Dapr CLI using the `--config` param:

```bash
dapr run --app-id mynode --app-port 3000 --config ./tracing.yaml node app.js
```

### Viewing Traces

To view traces, in your browser go to http://localhost:9411 and you will see the Zipkin UI.
## Configure Kubernetes

The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view the traces.

### Setup

First, deploy Zipkin:

```bash
kubectl run zipkin --image openzipkin/zipkin --port 9411
```

Create a Kubernetes service for the Zipkin pod:

```bash
kubectl expose deploy zipkin --type ClusterIP --port 9411
```

Next, create the following YAML files locally:

* zipkin.yaml component

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: zipkin
  namespace: default
spec:
  type: exporters.zipkin
  metadata:
  - name: enabled
    value: "true"
  - name: exporterAddress
    value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```

* tracing.yaml configuration

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
```

Finally, deploy the Dapr component and configuration files:

```bash
kubectl apply -f tracing.yaml
```
@@ -76,75 +131,26 @@ To view traces, connect to the Zipkin Service and open the UI:

```bash
kubectl port-forward svc/zipkin 9411:9411
```

In your browser, go to `http://localhost:9411` and you will see the Zipkin UI.

![zipkin](../../images/zipkin_ui.png)
## Tracing configuration

The `tracing` section under the `Configuration` spec contains the following properties:

```yaml
tracing:
  enabled: true
  expandParams: true
  includeBody: true
  samplingRate: "1"
```

The following table lists the different properties.

Property | Type | Description
---- | ------- | -----------
enabled | bool | Set tracing to be enabled or disabled
expandParams | bool | When true, expands parameters passed to HTTP endpoints
includeBody | bool | When true, includes the request body in the tracing event
samplingRate | string | Set the sampling rate to enable or disable tracing.

`samplingRate` is used to enable or disable tracing. To disable tracing, set `samplingRate: "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive. The sampling rate determines whether a trace span is sampled based on its value. `samplingRate: "1"` samples all traces. By default, the sampling rate is 1 in 10,000.
@ -28,11 +28,12 @@ dapr run --app-id cart --app-port 5000 python app.py
|
|||
|
||||
In Kubernetes, set the `dapr.io/id` annotation on your pod:
|
||||
|
||||
<pre>
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: python-app
|
||||
namespace: default
|
||||
labels:
|
||||
app: python-app
|
||||
spec:
|
||||
|
@ -49,7 +50,7 @@ spec:
|
|||
dapr.io/id: "cart"
|
||||
dapr.io/port: "5000"
|
||||
...
|
||||
</pre>
|
||||
```
|
||||
|
||||
## Invoke a service in code
|
||||
|
||||
|
|
|
@ -13,11 +13,12 @@ For this guide, we'll use Redis Streams, which is also installed by default on a
|
|||
|
||||
*Note: When running Dapr locally, a pub/sub component YAML will automatically be created if it doesn't exist in a directory called `components` in your current working directory.*
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
metadata:
|
||||
|
|
|
@ -0,0 +1,106 @@
|
|||
# Using PubSub across multiple namespaces
|
||||
|
||||
In some scenarios, applications can be spread across namespaces and share a queue or topic via PubSub. In this case, the PubSub component must be provisioned in each namespace.
|
||||
|
||||
In this example, we will use the [PubSub sample](https://github.com/dapr/samples/tree/master/4.pub-sub). The Redis installation and the subscribers will be in `namespace-a`, while the publisher UI will be in `namespace-b`. This solution also works if Redis is installed in another namespace, or if a managed cloud service like Azure ServiceBus is used.
|
||||
|
||||
The table below shows which resources are deployed to which namespaces:
|
||||
| Resource | namespace-a | namespace-b |
|
||||
|-|-|-|
|
||||
| Redis master | X ||
|
||||
| Redis slave | X ||
|
||||
| Dapr's PubSub component | X | X |
|
||||
| Node subscriber | X ||
|
||||
| Python subscriber | X ||
|
||||
| React UI publisher | | X|
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* [Dapr installed](https://github.com/dapr/docs/blob/master/getting-started/environment-setup.md) in any namespace, since Dapr works at the cluster level.
|
||||
* Check out and `cd` into the directory for the [PubSub sample](https://github.com/dapr/samples/tree/master/4.pub-sub).
|
||||
|
||||
## Setup `namespace-a`
|
||||
|
||||
Create the namespace and switch kubectl to use it.
|
||||
```bash
|
||||
kubectl create namespace namespace-a
|
||||
kubectl config set-context --current --namespace=namespace-a
|
||||
```
|
||||
|
||||
Install Redis (master and slave) on `namespace-a`, following [these instructions](https://github.com/dapr/docs/blob/master/howto/setup-pub-sub-message-broker/setup-redis.md).
|
||||
|
||||
Now, configure `deploy/redis.yaml`, paying attention to the hostname containing `namespace-a`.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
metadata:
|
||||
- name: "redisHost"
|
||||
value: "redis-master.namespace-a.svc:6379"
|
||||
- name: "redisPassword"
|
||||
value: "YOUR_PASSWORD"
|
||||
```
|
||||
|
||||
Deploy resources to `namespace-a`:
|
||||
```bash
|
||||
kubectl apply -f deploy/redis.yaml
|
||||
kubectl apply -f deploy/node-subscriber.yaml
|
||||
kubectl apply -f deploy/python-subscriber.yaml
|
||||
```
|
||||
|
||||
## Setup `namespace-b`
|
||||
|
||||
Create the namespace and switch kubectl to use it.
|
||||
```bash
|
||||
kubectl create namespace namespace-b
|
||||
kubectl config set-context --current --namespace=namespace-b
|
||||
```
|
||||
|
||||
Deploy resources to `namespace-b`, including the Redis component:
|
||||
```bash
|
||||
kubectl apply -f deploy/redis.yaml
|
||||
kubectl apply -f deploy/react-form.yaml
|
||||
```
|
||||
|
||||
Now, find the IP address for `react-form`, open it in your browser, and publish messages to each topic (A, B, and C).
|
||||
```bash
|
||||
kubectl get service -A
|
||||
```
|
||||
|
||||
## Confirm subscribers received the messages
|
||||
|
||||
Switch back to `namespace-a`:
|
||||
```bash
|
||||
kubectl config set-context --current --namespace=namespace-a
|
||||
```
|
||||
|
||||
Find the pod names:
|
||||
```bash
|
||||
kubectl get pod # Copy the pod names and use them in the next commands.
|
||||
```
|
||||
|
||||
Display logs:
|
||||
```bash
|
||||
kubectl logs node-subscriber-XYZ node-subscriber
|
||||
kubectl logs python-subscriber-XYZ python-subscriber
|
||||
```
|
||||
|
||||
The messages published on the browser should show in the corresponding subscriber's logs. The Node.js subscriber receives messages of type "A" and "B", while the Python subscriber receives messages of type "A" and "C".
|
||||
|
||||
## Clean up
|
||||
|
||||
```bash
|
||||
kubectl delete -f deploy/redis.yaml --namespace namespace-a
|
||||
kubectl delete -f deploy/node-subscriber.yaml --namespace namespace-a
|
||||
kubectl delete -f deploy/python-subscriber.yaml --namespace namespace-a
|
||||
kubectl delete -f deploy/react-form.yaml --namespace namespace-b
|
||||
kubectl delete -f deploy/redis.yaml --namespace namespace-b
|
||||
kubectl config set-context --current --namespace=default
|
||||
kubectl delete namespace namespace-a
|
||||
kubectl delete namespace namespace-b
|
||||
```
|
|
@ -28,6 +28,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
metadata:
|
||||
|
@ -71,6 +72,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
metadata:
|
||||
|
@ -94,6 +96,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
metadata:
|
||||
|
|
|
@ -10,7 +10,7 @@ The easiest way to connect to your Cosmos DB instance is to use the Data Explore
|
|||
|
||||
> **NOTE:** The following samples use the Cosmos DB [SQL API](https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-getting-started). When you configure an Azure Cosmos DB for Dapr, you need to specify the exact database and collection to use. The following samples assume you've already connected to the right database and a collection named "states".
|
||||
|
||||
## 2. List keys by Dapr id
|
||||
## 2. List keys by App ID
|
||||
|
||||
To get all state keys associated with application "myapp", use the query:
|
||||
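A sketch of such a query against the SQL API, assuming the default `<App ID>||<state key>` key scheme and a collection named "states":

```sql
SELECT * FROM states WHERE STARTSWITH(states.id, 'myapp||')
```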
|
||||
|
|
|
@ -12,7 +12,7 @@ You can use the official [redis-cli](https://redis.io/topics/rediscli) or any ot
|
|||
docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container>
|
||||
```
|
||||
|
||||
## 2. List keys by Dapr id
|
||||
## 2. List keys by App ID
|
||||
|
||||
To get all state keys associated with application "myapp", use the command:
|
||||
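A sketch of such a command in `redis-cli`, assuming the default `<App ID>||<state key>` key scheme (note that `KEYS` scans the entire keyspace, so avoid it on large production instances):

```bash
KEYS "myapp*"
```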
|
||||
|
|
|
@ -8,7 +8,7 @@ The easiest way to connect to your SQL Server instance is to use the [Azure Data
|
|||
|
||||
> **NOTE:** The following samples use Azure SQL. When you configure an Azure SQL database for Dapr, you need to specify the exact table name to use. The following samples assume you've already connected to the right database with a table named "states".
|
||||
|
||||
## 2. List keys by Dapr id
|
||||
## 2. List keys by App ID
|
||||
|
||||
To get all state keys associated with application "myapp", use the query:
|
||||
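A sketch of such a query, assuming the default `<App ID>||<state key>` key scheme and that the table's key column is named `Key`:

```sql
SELECT * FROM states WHERE [Key] LIKE 'myapp||%'
```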
|
||||
|
|
|
@ -13,11 +13,12 @@ Create the following YAML file, named binding.yaml, and save this to the /compon
|
|||
|
||||
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: myEvent
|
||||
namespace: default
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
metadata:
|
||||
|
|
|
@ -27,6 +27,7 @@ kind: ClusterRoleBinding
|
|||
apiVersion: rbac.authorization.k8s.io/v1beta1
|
||||
metadata:
|
||||
name: fluentd
|
||||
namespace: default
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: fluentd
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Set up azure monitor to search logs and collect metrics for Dapr
|
||||
# Set up Azure Monitor to search logs and collect metrics
|
||||
|
||||
This document describes how to enable Dapr metrics and logs with Azure Monitor for Azure Kubernetes Service (AKS).
|
||||
|
||||
|
@ -74,6 +74,7 @@ apiVersion: apps/v1
|
|||
kind: Deployment
|
||||
metadata:
|
||||
name: pythonapp
|
||||
namespace: default
|
||||
labels:
|
||||
app: python
|
||||
spec:
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Set up Fleuntd, Elastic search, and Kibana in Kubernetes
|
||||
# Set up Fluentd, Elasticsearch and Kibana in Kubernetes
|
||||
|
||||
This document describes how to install Fluentd, Elasticsearch, and Kibana to search logs in Kubernetes.
|
||||
|
||||
|
@ -32,6 +32,14 @@ helm repo update
|
|||
|
||||
3. Install Elastic Search using Helm
|
||||
|
||||
By default, the chart creates 3 replicas, which must be on different nodes. If your cluster has fewer than 3 nodes, specify a lower number of replicas. For example, this sets the number to 1:
|
||||
|
||||
```bash
|
||||
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1
|
||||
```
|
||||
|
||||
Otherwise:
|
||||
|
||||
```bash
|
||||
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring
|
||||
```
|
||||
|
@ -62,14 +70,20 @@ kibana-kibana-95bc54b89-zqdrk 1/1 Running 0 4m21s
|
|||
|
||||
1. Install config map and Fluentd as a daemonset
|
||||
|
||||
> Note: If you are running Fluentd in your cluster, please enable the nested json parser to parse JSON formatted log from Dapr.
|
||||
Navigate to the following path if you're not already there (the one this document is in):
|
||||
|
||||
```
|
||||
docs/howto/setup-monitoring-tools
|
||||
```
|
||||
|
||||
> Note: If you already have Fluentd running in your cluster, enable the nested JSON parser so it can parse JSON-formatted logs from Dapr.
|
||||
|
||||
```bash
|
||||
kubectl apply -f ./fluentd-config-map.yaml
|
||||
kubectl apply -f ./fluentd-dapr-with-rbac.yaml
|
||||
```
|
||||
|
||||
2. Ensure that Fluentd is running as a daemonset
|
||||
2. Ensure that Fluentd is running as a daemonset; the number of instances should be the same as the number of cluster nodes. In the example below we only have 1 node.
|
||||
|
||||
```bash
|
||||
kubectl get pods -n kube-system -w
|
||||
|
@ -86,6 +100,8 @@ fluentd-sdrld 1/1 Running 0 14s
|
|||
1. Install Dapr with JSON-formatted logs enabled
|
||||
|
||||
```bash
|
||||
helm repo add dapr https://daprio.azurecr.io/helm/v1/repo
|
||||
helm repo update
|
||||
helm install dapr dapr/dapr --namespace dapr-system --set global.logAsJson=true
|
||||
```
|
||||
|
||||
|
@ -99,6 +115,7 @@ apiVersion: apps/v1
|
|||
kind: Deployment
|
||||
metadata:
|
||||
name: pythonapp
|
||||
namespace: default
|
||||
labels:
|
||||
app: python
|
||||
spec:
|
||||
|
|
|
@ -24,6 +24,8 @@ kubectl create namespace dapr-monitoring
|
|||
2. Install Prometheus
|
||||
|
||||
```bash
|
||||
helm repo add stable https://kubernetes-charts.storage.googleapis.com
|
||||
helm repo update
|
||||
helm install dapr-prom stable/prometheus -n dapr-monitoring
|
||||
```
|
||||
|
||||
|
@ -50,7 +52,7 @@ helm install grafana stable/grafana -n dapr-monitoring --set persistence.enabled
|
|||
> Note: remove the `%` character from the password that this command returns. In this example, the admin password is `cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1`.
|
||||
|
||||
```bash
|
||||
kubernetes get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
|
||||
kubectl get secret --namespace dapr-monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode
|
||||
cj3m0OfBNx8SLzUlTx91dEECgzRlYJb60D2evof1%
|
||||
```
|
||||
|
||||
|
@ -129,9 +131,9 @@ So you need to set up Prometheus data source with the below settings:
|
|||
|
||||
8. Import Dapr dashboards.
|
||||
|
||||
You can now import built-in [Grafana dashboard templates](../../reference/dashboard/README.md).
|
||||
In the upper left, click the "+" then "Import".
|
||||
|
||||
Refer [here](../../reference/dashboard/README.md) for details.
|
||||
You can now import the built-in [Grafana dashboard templates](../../reference/dashboard/README.md); see the link for the available templates.
|
||||
|
||||

|
||||
|
||||
|
|
|
@ -7,11 +7,12 @@ Pub/Sub message buses are extensible and can be found in the [components-contrib
|
|||
|
||||
A pub/sub in Dapr is described using a `Component` file:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.<NAME>
|
||||
metadata:
|
||||
|
@ -48,3 +49,4 @@ kubectl apply -f pubsub.yaml
|
|||
- [Setup RabbitMQ](./setup-rabbitmq.md)
|
||||
- [Setup GCP Pubsub](./setup-gcp.md)
|
||||
- [Setup Hazelcast Pubsub](./setup-hazelcast.md)
|
||||
- [Setup Azure Event Hubs](./setup-azure-eventhubs.md)
|
||||
|
|
|
@ -0,0 +1,46 @@
|
|||
# Setup Azure Event Hubs
|
||||
|
||||
Follow the instructions [here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.
|
||||
Since this implementation uses the Event Processor Host, you will also need an [Azure Storage Account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal).
|
||||
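A minimal sketch of creating such a storage account with the Azure CLI (the resource group and account names are illustrative):

```bash
# Create a resource group and a general-purpose storage account
az group create --name dapr-demo-rg --location westus
az storage account create --name daprdemostorage \
  --resource-group dapr-demo-rg --sku Standard_LRS
```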
|
||||
## Create a Dapr component
|
||||
|
||||
The next step is to create a Dapr component for Azure Event Hubs.
|
||||
|
||||
Create the following YAML file named `eventhubs.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.azure.eventhubs
|
||||
metadata:
|
||||
- name: connectionString
|
||||
value: <REPLACE-WITH-CONNECTION-STRING> # Required.
|
||||
- name: storageAccountName
|
||||
value: <REPLACE-WITH-STORAGE-ACCOUNT-NAME> # Required.
|
||||
- name: storageAccountKey
|
||||
value: <REPLACE-WITH-STORAGE-ACCOUNT-KEY> # Required.
|
||||
- name: storageContainerName
|
||||
value: <REPLACE-WITH-CONTAINER-NAME> # Required.
|
||||
```
|
||||
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
|
||||
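For example, a sketch of the same component pulling its values from the Kubernetes secret store instead; the secret name `eventhubs-secret` and its keys are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: pubsub.azure.eventhubs
  metadata:
  - name: connectionString
    secretKeyRef:
      name: eventhubs-secret   # Kubernetes secret holding the values
      key: connectionString
  - name: storageAccountName
    secretKeyRef:
      name: eventhubs-secret
      key: storageAccountName
  - name: storageAccountKey
    secretKeyRef:
      name: eventhubs-secret
      key: storageAccountKey
  - name: storageContainerName
    value: <REPLACE-WITH-CONTAINER-NAME>
```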
|
||||
## Apply the configuration
|
||||
|
||||
### In Kubernetes
|
||||
|
||||
To apply the Azure Event Hubs pub/sub to Kubernetes, use the `kubectl` CLI:
|
||||
|
||||
```bash
|
||||
kubectl apply -f eventhubs.yaml
|
||||
```
|
||||
|
||||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
|
||||
To use Azure Event Hubs, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of `eventhubs.yaml` above (don't change the filename); see the sketch below.
|
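For example, a sketch of doing that from the directory containing `eventhubs.yaml`:

```bash
# Overwrite the default pub/sub component's contents; keep the filename
cp eventhubs.yaml ./components/pubsub.yaml
```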
|
@ -8,11 +8,12 @@ The next step is to create a Dapr component for Azure Service Bus.
|
|||
|
||||
Create the following YAML file named `azuresb.yaml`:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.azure.servicebus
|
||||
metadata:
|
||||
|
@ -47,4 +48,4 @@ kubectl apply -f azuresb.yaml
|
|||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
|
||||
To use Azure Service Bus, replace the contents of `messagebus.yaml` file with the contents of `azuresb.yaml` above (Don't change the filename).
|
||||
To use Azure Service Bus, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of `azuresb.yaml` above (don't change the filename).
|
||||
|
|
|
@ -8,11 +8,12 @@ The next step is to create a Dapr component for Google Cloud Pub/Sub
|
|||
|
||||
Create the following YAML file named `messagebus.yaml`:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.gcp.pubsub
|
||||
metadata:
|
||||
|
@ -69,4 +70,4 @@ kubectl apply -f messagebus.yaml
|
|||
|
||||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory. To use Cloud Pubsub, replace the contents of `messagebus.yaml` file with the contents of yaml above.
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory. To use Cloud Pub/Sub, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of the yaml above.
|
||||
|
|
|
@ -20,11 +20,12 @@ The next step is to create a Dapr component for Hazelcast.
|
|||
|
||||
Create the following YAML file named `hazelcast.yaml`:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.hazelcast
|
||||
metadata:
|
||||
|
@ -48,4 +49,4 @@ kubectl apply -f hazelcast.yaml
|
|||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
|
||||
To use Hazelcast, replace the redis.yaml file with the hazelcast.yaml above.
|
||||
To use Hazelcast, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of `hazelcast.yaml` above.
|
||||
|
|
|
@ -1,3 +1,57 @@
|
|||
# Setup Kafka
|
||||
|
||||
Content for this file to be added
|
||||
## Locally
|
||||
|
||||
You can run Kafka locally using [this](https://github.com/wurstmeister/kafka-docker) Docker image.
|
||||
To run without Docker, see the getting started guide [here](https://kafka.apache.org/quickstart).
|
||||
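A minimal sketch of the Docker route, assuming the layout of that repository (it ships a single-broker compose file):

```bash
git clone https://github.com/wurstmeister/kafka-docker.git
cd kafka-docker
# Set KAFKA_ADVERTISED_HOST_NAME in docker-compose-single-broker.yml
# to your host's IP address before starting
docker-compose -f docker-compose-single-broker.yml up -d
```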
|
||||
## Kubernetes
|
||||
|
||||
To run Kafka on Kubernetes, you can use the [Helm Chart](https://github.com/helm/charts/tree/master/incubator/kafka#installing-the-chart).
|
||||
|
||||
## Create a Dapr component
|
||||
|
||||
The next step is to create a Dapr component for Kafka.
|
||||
|
||||
Create the following YAML file named `kafka.yaml`:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.kafka
|
||||
metadata:
|
||||
# Kafka broker connection setting
|
||||
- name: brokers
|
||||
# Comma separated list of kafka brokers
|
||||
value: "dapr-kafka.dapr-tests.svc.cluster.local:9092"
|
||||
# Enable auth. Default is "false"
|
||||
- name: authRequired
|
||||
value: "false"
|
||||
# Only available if authRequired is set to true
|
||||
- name: saslUsername
|
||||
value: <username>
|
||||
# Only available if authRequired is set to true
|
||||
- name: saslPassword
|
||||
value: <password>
|
||||
```
|
||||
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
|
||||
|
||||
## Apply the configuration
|
||||
|
||||
### In Kubernetes
|
||||
|
||||
To apply the Kafka component to Kubernetes, use `kubectl`:
|
||||
|
||||
```bash
|
||||
kubectl apply -f kafka.yaml
|
||||
```
|
||||
|
||||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
|
||||
To use Kafka, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of `kafka.yaml` above.
|
||||
|
|
|
@ -31,11 +31,12 @@ The next step is to create a Dapr component for NATS.
|
|||
|
||||
Create the following YAML file named `nats.yaml`:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.nats
|
||||
metadata:
|
||||
|
@ -58,4 +59,4 @@ kubectl apply -f nats.yaml
|
|||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
|
||||
To use NATS, replace the contents of `messagebus.yaml` file with the contents of `nats.yaml` above (Don't change the filename).
|
||||
To use NATS, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of `nats.yaml` above (don't change the filename).
|
||||
|
|
|
@ -33,11 +33,12 @@ The next step is to create a Dapr component for RabbitMQ.
|
|||
|
||||
Create the following YAML file named `rabbitmq.yaml`:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: pubsub.rabbitmq
|
||||
metadata:
|
||||
|
@ -72,4 +73,4 @@ kubectl apply -f rabbitmq.yaml
|
|||
### Running locally
|
||||
|
||||
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
|
||||
To use RabbitMQ, replace the contents of `messagebus.yaml` file with the contents of `rabbitmq.yaml` above (Don't change the filename).
|
||||
To use RabbitMQ, replace the contents of the `pubsub.yaml` file (or `messagebus.yaml` for Dapr < 0.6.0) with the contents of `rabbitmq.yaml` above (don't change the filename).
|
||||
|
|
|
@ -60,6 +60,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: messagebus
|
||||
namespace: default
|
||||
spec:
|
||||
type: pubsub.redis
|
||||
metadata:
|
||||
|
|
|
@ -13,6 +13,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: awssecretmanager
|
||||
namespace: default
|
||||
spec:
|
||||
type: secretstores.aws.secretmanager
|
||||
metadata:
|
||||
|
@ -44,6 +45,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
|
|
@ -47,7 +47,7 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre
|
|||
5. Assign the Managed Identity Operator role to the AKS Service Principal
|
||||
|
||||
```bash
|
||||
$aks = az aks show -g [your resource group] -n [your AKS name] | ConvertFrom-Json
|
||||
$aks = az aks show -g [your resource group] -n [your AKS name] -o json | ConvertFrom-Json
|
||||
|
||||
az role assignment create --role "Managed Identity Operator" --assignee $aks.servicePrincipalProfile.clientId --scope $identity.id
|
||||
```
|
||||
|
@ -69,6 +69,7 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre
|
|||
Save the following yaml as azure-identity-config.yaml:
|
||||
|
||||
```yaml
|
||||
apiVersion: "aadpodidentity.k8s.io/v1"
|
||||
kind: AzureIdentity
|
||||
metadata:
|
||||
name: [your managed identity name]
|
||||
|
@ -105,13 +106,14 @@ In Kubernetes mode, you store the certificate for the service principal into the
|
|||
kind: Component
|
||||
metadata:
|
||||
name: azurekeyvault
|
||||
namespace: default
|
||||
spec:
|
||||
type: secretstores.azure.keyvault
|
||||
metadata:
|
||||
- name: vaultName
|
||||
value: [your_keyvault_name]
|
||||
- name: spnClientId
|
||||
value: [your_managed_identity_client_id]
|
||||
- name: vaultName
|
||||
value: [your_keyvault_name]
|
||||
- name: spnClientId
|
||||
value: [your_managed_identity_client_id]
|
||||
```
|
||||
|
||||
2. Apply azurekeyvault.yaml component
|
||||
|
@ -137,6 +139,7 @@ In Kubernetes mode, you store the certificate for the service principal into the
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
@ -162,6 +165,7 @@ In Kubernetes mode, you store the certificate for the service principal into the
|
|||
apiVersion: v1
|
||||
metadata:
|
||||
name: nodeapp
|
||||
namespace: default
|
||||
labels:
|
||||
app: node
|
||||
spec:
|
||||
|
@ -178,6 +182,7 @@ In Kubernetes mode, you store the certificate for the service principal into the
|
|||
kind: Deployment
|
||||
metadata:
|
||||
name: nodeapp
|
||||
namespace: default
|
||||
labels:
|
||||
app: node
|
||||
spec:
|
||||
|
@ -220,7 +225,7 @@ In Kubernetes mode, you store the certificate for the service principal into the
|
|||
time="2020-02-05T09:15:03Z" level=info msg="starting Dapr Runtime -- version edge -- commit v0.3.0-rc.0-58-ge540a71-dirty"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="log level set to: info"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="kubernetes mode configured"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="dapr id: nodeapp"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="app id: nodeapp"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="mTLS enabled. creating sidecar authenticator"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="trust anchors extracted successfully"
|
||||
time="2020-02-05T09:15:03Z" level=info msg="authenticator created"
|
||||
|
|
|
@ -57,7 +57,7 @@ az ad sp create-for-rbac --name [your_service_principal_name] --create-cert --ce
|
|||
|
||||
**Save both the appId and tenant values from the output; they will be used in the next step**
|
||||
|
||||
3. Get the Object Id for [your_service_principal_name]
|
||||
4. Get the Object Id for [your_service_principal_name]
|
||||
|
||||
```bash
|
||||
az ad sp show --id [service_principal_app_id]
|
||||
|
@ -70,7 +70,7 @@ az ad sp show --id [service_principal_app_id]
|
|||
}
|
||||
```
|
||||
|
||||
4. Grant the service principal the GET permission to your Azure Key Vault
|
||||
5. Grant the service principal the GET permission to your Azure Key Vault
|
||||
|
||||
```bash
|
||||
az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get
|
||||
|
@ -78,27 +78,18 @@ az keyvault set-policy --name [your_keyvault] --object-id [your_service_principa
|
|||
|
||||
Now that your service principal has access to your key vault, you are ready to configure the secret store component to use secrets stored in your key vault to access other components securely.
|
||||
|
||||
5. Download PFX cert from your Azure Keyvault
|
||||
6. Download the certificate in PFX format from your Azure Key Vault either using the Azure portal or the Azure CLI:
|
||||
|
||||
- **Using Azure Portal**
|
||||
Go to your keyvault on Portal and download [certificate_name] pfx cert from certificate vault
|
||||
- **Using Azure CLI**
|
||||
For Linux/MacOS
|
||||
- **Using the Azure portal:**
|
||||
|
||||
Go to your key vault on the Azure portal and navigate to the *Certificates* tab under *Settings*. Find the certificate that was created during the service principal creation, named [certificate_name], and click on it.
|
||||
|
||||
Click *Download in PFX/PEM format* to download the certificate.
|
||||
|
||||
- **Using the Azure CLI:**
|
||||
|
||||
```bash
|
||||
# Download base64 encoded cert
|
||||
az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --file [certificate_name].txt
|
||||
|
||||
# Decode base64 encoded cert to pfx cert for linux/macos
|
||||
base64 --decode [certificate_name].txt > [certificate_name].pfx
|
||||
```
|
||||
|
||||
For Windows, on powershell
|
||||
|
||||
```powershell
|
||||
# Decode base64 encoded cert to pfx cert for linux/macos
|
||||
$EncodedText = Get-Content -Path [certificate_name].txt -Raw
|
||||
[System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($EncodedText)) | Set-Content -Path [certificate_name].pfx -Encoding Byte
|
||||
az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --encoding base64 --file [certificate_name].pfx
|
||||
```
|
||||
|
||||
## Use Azure Key Vault secret store in Standalone mode
|
||||
|
@ -124,6 +115,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: azurekeyvault
|
||||
namespace: default
|
||||
spec:
|
||||
type: secretstores.azure.keyvault
|
||||
metadata:
|
||||
|
@ -152,6 +144,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
@ -211,6 +204,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: azurekeyvault
|
||||
namespace: default
|
||||
spec:
|
||||
type: secretstores.azure.keyvault
|
||||
metadata:
|
||||
|
@ -250,6 +244,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
@ -281,7 +276,7 @@ $ kubectl logs nodeapp-f7b7576f4-4pjrj daprd
|
|||
time="2019-09-26T20:34:23Z" level=info msg="starting Dapr Runtime -- version 0.4.0-alpha.4 -- commit 876474b-dirty"
|
||||
time="2019-09-26T20:34:23Z" level=info msg="log level set to: info"
|
||||
time="2019-09-26T20:34:23Z" level=info msg="kubernetes mode configured"
|
||||
time="2019-09-26T20:34:23Z" level=info msg="dapr id: nodeapp"
|
||||
time="2019-09-26T20:34:23Z" level=info msg="app id: nodeapp"
|
||||
time="2019-09-26T20:34:24Z" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)"
|
||||
time="2019-09-26T20:34:25Z" level=info msg="loaded component statestore (state.redis)"
|
||||
...
|
||||
|
|
|
@ -13,6 +13,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: gcpsecretmanager
|
||||
namespace: default
|
||||
spec:
|
||||
type: secretstores.gcp.secretmanager
|
||||
metadata:
|
||||
|
@ -56,6 +57,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
|
|
@ -15,6 +15,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: vault
|
||||
namespace: default
|
||||
spec:
|
||||
type: secretstores.hashicorp.vault
|
||||
metadata:
|
||||
|
@ -53,6 +54,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
|
|
@ -7,11 +7,12 @@ State stores are extensible and can be found in the [components-contrib repo](ht
|
|||
|
||||
A state store in Dapr is described using a `Component` file:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.<DATABASE>
|
||||
metadata:
|
||||
|
|
|
@ -32,11 +32,12 @@ The next step is to create a Dapr component for Aerospike.
|
|||
|
||||
Create the following YAML file named `aerospike.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.Aerospike
|
||||
metadata:
|
||||
|
|
|
@ -19,11 +19,12 @@ The next step is to create a Dapr component for CosmosDB.
|
|||
|
||||
Create the following YAML file named `cosmos.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.azure.cosmosdb
|
||||
metadata:
|
||||
|
@ -41,11 +42,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
The following example uses the Kubernetes secret store to retrieve the secrets:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <store_name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.azure.cosmosdb
|
||||
metadata:
|
||||
|
|
|
@ -18,11 +18,12 @@ The next step is to create a Dapr component for Azure Table Storage.
|
|||
|
||||
Create the following YAML file named `azuretable.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.azure.tablestorage
|
||||
metadata:
|
||||
|
@ -38,11 +39,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
The following example uses the Kubernetes secret store to retrieve the secrets:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.azure.tablestorage
|
||||
metadata:
|
||||
|
|
|
@ -32,11 +32,12 @@ The next step is to create a Dapr component for Cassandra.
|
|||
|
||||
Create the following YAML file named `cassandra.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.cassandra
|
||||
metadata:
|
||||
|
@ -62,11 +63,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
The following example uses the Kubernetes secret store to retrieve the username and password:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.cassandra
|
||||
metadata:
|
||||
|
|
|
@ -21,11 +21,12 @@ The next step is to create a Dapr component for Cloudstate.
|
|||
|
||||
Create the following YAML file named `cloudstate.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: cloudstate
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.cloudstate
|
||||
metadata:
|
||||
|
@ -61,6 +62,7 @@ kind: Deployment
|
|||
metadata:
|
||||
annotations:
|
||||
name: test-dapr-app
|
||||
namespace: default
|
||||
labels:
|
||||
app: test-dapr-app
|
||||
spec:
|
||||
|
@ -125,6 +127,7 @@ apiVersion: rbac.authorization.k8s.io/v1
|
|||
kind: Role
|
||||
metadata:
|
||||
name: cloudstate-pod-reader
|
||||
namespace: default
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
|
@ -140,6 +143,7 @@ apiVersion: rbac.authorization.k8s.io/v1
|
|||
kind: RoleBinding
|
||||
metadata:
|
||||
name: cloudstate-read-pods-default
|
||||
namespace: default
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
|
|
|
@ -31,11 +31,12 @@ The next step is to create a Dapr component for Consul.
|
|||
|
||||
Create the following YAML file named `consul.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.consul
|
||||
metadata:
|
||||
|
@ -55,11 +56,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
The following example uses the Kubernetes secret store to retrieve the acl token:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.consul
|
||||
metadata:
|
||||
|
|
|
@ -26,11 +26,12 @@ The next step is to create a Dapr component for Couchbase.
|
|||
|
||||
Create the following YAML file named `couchbase.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.couchbase
|
||||
metadata:
|
||||
|
|
|
@ -32,11 +32,12 @@ The next step is to create a Dapr component for etcd.
|
|||
|
||||
Create the following YAML file named `etcd.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.etcd
|
||||
metadata:
|
||||
|
|
|
@ -16,11 +16,12 @@ The next step is to create a Dapr component for Firestore.
|
|||
|
||||
Create the following YAML file named `firestore.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.gcp.firestore
|
||||
metadata:
|
||||
|
|
|
@ -20,11 +20,12 @@ The next step is to create a Dapr component for Hazelcast.
|
|||
|
||||
Create the following YAML file named `hazelcast.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.hazelcast
|
||||
metadata:
|
||||
|
|
|
@ -31,11 +31,12 @@ The next step is to create a Dapr component for Memcached.
|
|||
|
||||
Create the following YAML file named `memcached.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.memcached
|
||||
metadata:
|
||||
|
|
|
@ -35,11 +35,12 @@ The next step is to create a Dapr component for MongoDB.
|
|||
|
||||
Create the following YAML file named `mongodb.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.mongodb
|
||||
metadata:
|
||||
|
@ -65,11 +66,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
The following example uses the Kubernetes secret store to retrieve the username and password:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.mongodb
|
||||
metadata:
|
||||
|
|
|
@ -67,6 +67,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
|
|
@ -23,7 +23,8 @@ Create the following YAML file named `sqlserver.yaml`:
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.sqlserver
|
||||
metadata:
|
||||
|
@ -41,7 +42,8 @@ The following example uses the Kubernetes secret store to retrieve the secrets:
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.sqlserver
|
||||
metadata:
|
||||
|
|
|
@ -32,11 +32,12 @@ The next step is to create a Dapr component for Zookeeper.
|
|||
|
||||
Create the following YAML file named `zookeeper.yaml`:
|
||||
|
||||
```
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.zookeeper
|
||||
metadata:
|
||||
|
|
|
@ -24,11 +24,12 @@ Create the following YAML file, named binding.yaml, and save this to the /compon
|
|||
|
||||
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: myEvent
|
||||
namespace: default
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
metadata:
|
||||
|
|
|
@ -35,7 +35,7 @@ Each of these building blocks is independent, meaning that you can use one, some
|
|||
| Building Block | Description |
|
||||
|----------------|-------------|
|
||||
| **[Service Invocation](../concepts/service-invocation)** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment.
|
||||
| **[State Management](../concepts/state)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, AWS DynamoDB or Redis among others.
|
||||
| **[State Management](../concepts/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, AWS DynamoDB or Redis among others.
|
||||
| **[Publish and Subscribe Messaging](../concepts/publish-subscribe-messaging)** | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee.
|
||||
| **[Resource Bindings](../concepts/bindings)** | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external resource such as databases, queues, file systems, etc.
|
||||
| **[Distributed Tracing](../concepts/observability/traces.md)** | Dapr supports distributed tracing to easily diagnose and observe inter-service calls in production using the W3C Trace Context standard.
|
||||
|
|
|
@ -740,12 +740,12 @@ In order to enable visibility into the state of an actor and allow for complex s
|
|||
|
||||
The state namespace created by Dapr for actors is composed of the following items:
|
||||
|
||||
* Dapr ID - Represents the unique ID given to the Dapr application.
|
||||
* App ID - Represents the unique ID given to the Dapr application.
|
||||
* Actor Type - Represents the type of the actor.
|
||||
* Actor ID - Represents the unique ID of the actor instance for an actor type.
|
||||
* Key - A key for the specific state value. An actor ID can hold multiple state keys.
|
||||
|
||||
The following example shows how to construct a key for the state of an actor instance under the `myapp` Dapr ID namespace:
|
||||
The following example shows how to construct a key for the state of an actor instance under the `myapp` App ID namespace:
|
||||
`myapp-cat-hobbit-food`
|
||||
|
||||
In the example above, we are getting the value for the state key `food`, for the actor ID `hobbit` with an actor type of `cat`, under the Dapr ID namespace of `myapp`.
|
||||
In the example above, we are getting the value for the state key `food`, for the actor ID `hobbit` with an actor type of `cat`, under the App ID namespace of `myapp`.
|
||||
|
|
|
@ -15,11 +15,12 @@ Examples for bindings include ```Kafka```, ```Rabbit MQ```, ```Azure Event Hubs`
|
|||
|
||||
A Dapr Binding yaml file has the following structure:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.<TYPE>
|
||||
metadata:
|
||||
|
@ -54,6 +55,7 @@ apiVersion: dapr.io/v1alpha1
|
|||
kind: Component
|
||||
metadata:
|
||||
name: kafkaevent
|
||||
namespace: default
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
metadata:
|
||||
|
@ -201,3 +203,11 @@ curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
|
|||
}
|
||||
}'
|
||||
```
|
||||
|
||||
### Common metadata values
|
||||
|
||||
There are common metadata properties which are supported across multiple binding components. The table below illustrates them:
|
||||
|
||||
|Property|Description|Binding definition|Available in|
|
||||
|-|-|-|-|
|
||||
|ttlInSeconds|Defines the time to live in seconds for the message|If set in the binding definition, all messages will have this default time to live. A TTL set on the message overrides any value set in the binding definition.|RabbitMQ, Azure Service Bus, Azure Storage Queue|
|
||||
|
|
|
@ -13,11 +13,12 @@
|
|||
|
||||
A Dapr State Store component yaml file has the following structure:
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: state.<TYPE>
|
||||
metadata:
|
||||
|
@ -40,13 +41,13 @@ Please refer https://github.com/dapr/dapr/blob/master/docs/decision_records/api/
|
|||
Dapr state stores are key/value stores. To ensure data compatibility, Dapr requires that these data stores follow a fixed key scheme. For general states, the key format is:
|
||||
|
||||
```
|
||||
<Dapr id>||<state key>
|
||||
<App ID>||<state key>
|
||||
```
|
||||
|
||||
For Actor states, the key format is:
|
||||
|
||||
```
|
||||
<Dapr id>||<Actor type>||<Actor id>||<state key>
|
||||
<App ID>||<Actor type>||<Actor id>||<state key>
|
||||
```
|
||||
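For example, under this scheme a general state key `planet` saved by the application `myapp` is stored as:

```
myapp||planet
```

while the `food` state of actor ID `hobbit` (actor type `cat`) owned by the same application is stored as:

```
myapp||cat||hobbit||food
```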
|
||||
## Save state
|
||||
|
@ -222,11 +223,12 @@ Actors don't support multiple state stores and require a transactional state sto
|
|||
To specify which state store is used for actors, set the value of the `actorStateStore` property to true in the metadata section of the state store component yaml file.
|
||||
Example: The following component yaml configures Redis as the state store for Actors.
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: statestore
|
||||
namespace: default
|
||||
spec:
|
||||
type: state.redis
|
||||
metadata:
|
||||
|
@ -245,8 +247,8 @@ spec:
|
|||
|
||||
A Dapr-compatible state store shall use the following key scheme:
|
||||
|
||||
* *\<Dapr id>||\<state key>* key format for general states
|
||||
* *\<Dapr id>||\<Actor type>||\<Actor id>||\<state key>* key format for Actor states.
|
||||
* *\<App ID>||\<state key>* key format for general states
|
||||
* *\<App ID>||\<Actor type>||\<Actor id>||\<state key>* key format for Actor states.
|
||||
|
||||
### Concurrency
|
||||
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# Azure Blob Storage Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.azure.blobstorage
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# Azure CosmosDB Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.azure.cosmosdb
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# AWS DynamoDB Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.aws.dynamodb
|
||||
metadata:
|
||||
|
|
|
@ -2,11 +2,12 @@
|
|||
|
||||
See [this](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send) for instructions on how to set up an Event Hub.
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.azure.eventhubs
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# GCP Storage Bucket Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.gcp.bucket
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# GCP Cloud Pub/Sub Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.gcp.pubsub
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# HTTP Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.http
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# Kafka Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
metadata:
|
||||
|
@ -32,4 +33,25 @@ spec:
|
|||
- `saslUsername` is the SASL username for authentication. Only used if `authRequired` is set to - `"true"`.
|
||||
- `saslPassword` is the SASL password for authentication. Only used if `authRequired` is set to - `"true"`.
|
||||
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
|
||||
## Specifying a partition key
|
||||
|
||||
When invoking the Kafka binding, it's possible to provide an optional partition key by using the `metadata` section in the request body.
|
||||
|
||||
The field name is `partitionKey`.
|
||||
|
||||
Example:
|
||||
|
||||
```shell
|
||||
curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"data": {
|
||||
"message": "Hi"
|
||||
},
|
||||
"metadata": {
|
||||
"partitionKey": "key1"
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
|
|
@ -2,11 +2,12 @@
|
|||
|
||||
See [this](https://aws.amazon.com/kinesis/data-streams/getting-started/) for instructions on how to set up an AWS Kinesis data streams
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.aws.kinesis
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# Kubernetes Events Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.kubernetes
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# MQTT Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.mqtt
|
||||
metadata:
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# RabbitMQ Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.rabbitmq
|
||||
metadata:
|
||||
|
@ -16,11 +17,37 @@ spec:
|
|||
value: true
|
||||
- name: deleteWhenUnused
|
||||
value: false
|
||||
- name: ttlInSeconds
|
||||
value: 60
|
||||
```
|
||||
|
||||
- `queueName` is the RabbitMQ queue name.
|
||||
- `host` is the RabbitMQ host address.
|
||||
- `durable` tells RabbitMQ to persist messages in storage.
|
||||
- `deleteWhenUnused` enables or disables auto-delete.
|
||||
- `ttlInSeconds` is an optional parameter to set the [default message time to live at RabbitMQ queue level](https://www.rabbitmq.com/ttl.html). If this parameter is omitted, messages won't expire, continuing to exist on the queue until processed.
|
||||
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
|
||||
## Specifying a time to live on message level
|
||||
|
||||
Time to live can be defined at the queue level (as illustrated above) or at the message level. The value defined at the message level overrides any value set at the queue level.
|
||||
|
||||
To set time to live at message level use the `metadata` section in the request body during the binding invocation.
|
||||
|
||||
The field name is `ttlInSeconds`.
|
||||
|
||||
Example:
|
||||
|
||||
```shell
|
||||
curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"data": {
|
||||
"message": "Hi"
|
||||
},
|
||||
"metadata": {
|
||||
"ttlInSeconds": "60"
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# Redis Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.redis
|
||||
metadata:
|
||||
|
@ -12,9 +13,12 @@ spec:
|
|||
value: <address>:6379
|
||||
- name: redisPassword
|
||||
value: **************
|
||||
- name: enableTLS
|
||||
value: <bool>
|
||||
```
|
||||
|
||||
- `redisHost` is the Redis host address.
|
||||
- `redisPassword` is the Redis password.
|
||||
- `enableTLS` - If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS.
|
||||
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
# AWS S3 Binding Spec
|
||||
|
||||
```yml
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <name>
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
spec:
|
||||
type: bindings.aws.s3
|
||||
metadata:
|
||||
|
|
|
@ -0,0 +1,42 @@
|
|||
# SendGrid Binding Spec
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: sendgrid
|
||||
namespace: default
|
||||
spec:
|
||||
type: bindings.twilio.sendgrid
|
||||
metadata:
|
||||
- name: emailFrom
|
||||
value: "testapp@dapr.io" # optional
|
||||
- name: emailTo
|
||||
value: "dave@dapr.io" # optional
|
||||
- name: subject
|
||||
value: "Hello!" # optional
|
||||
- name: apiKey
|
||||
value: "YOUR_API_KEY" # required, this is your SendGrid key
|
||||
```
|
||||
|
||||
- `emailFrom` If set, this specifies the 'from' email address of the email message. Optional field, see below.
|
||||
- `emailTo` If set, this specifies the 'to' email address of the email message. Optional field, see below.
|
||||
- `emailCc` If set, this specifies the 'cc' email address of the email message. Optional field, see below.
|
||||
- `emailBcc` If set, this specifies the 'bcc' email address of the email message. Optional field, see below.
|
||||
- `subject` If set, this specifies the subject of the email message. Optional field, see below.
|
||||
- `apiKey` is your SendGrid API key; this should be considered a secret value. Required.
|
||||
|
||||
You can also specify any of the optional metadata properties on the output binding request (e.g. `emailFrom`, `emailTo`, `subject`).
|
||||
|
||||
Example request payload:
|
||||
```json
|
||||
{
|
||||
"metadata": {
|
||||
"emailTo": "changeme@example.net",
|
||||
"subject": "An email from Dapr SendGrid binding"
|
||||
},
|
||||
"data": "<h1>Testing Dapr Bindings</h1>This is a test.<br>Bye!"
|
||||
}
|
||||
```
|
||||
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
|
@ -1,20 +1,47 @@
|
|||
# Azure Service Bus Queues Binding Spec

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.azure.servicebusqueues
  metadata:
  - name: connectionString
    value: "sb://************"
  - name: queueName
    value: queue1
  - name: ttlInSeconds
    value: 60
```

- `connectionString` is the Service Bus connection string.
- `queueName` is the Service Bus queue name.
- `ttlInSeconds` is an optional parameter to set the default message [time to live](https://docs.microsoft.com/azure/service-bus-messaging/message-expiration). If this parameter is omitted, messages will expire after 14 days.

> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)

## Specifying a time to live at message level

Time to live can be defined at the queue level (as illustrated above) or at the message level. A value set at the message level overrides any value set at the queue level.

To set the time to live at message level, use the `metadata` section in the request body when invoking the binding.

The field name is `ttlInSeconds`.

Example:

```shell
curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        }
      }'
```

@@ -4,7 +4,8 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.azure.signalr
  metadata:

@@ -1,10 +1,11 @@
# AWS SNS Binding Spec

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.aws.sns
  metadata:
@@ -1,10 +1,11 @@
# AWS SQS Binding Spec

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.aws.sqs
  metadata:
@@ -4,7 +4,8 @@
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.azure.storagequeues
  metadata:
@@ -14,10 +15,36 @@ spec:
value: "***********"
|
||||
- name: queue
|
||||
value: "myqueue"
|
||||
- name: ttlInSeconds
|
||||
value: "60"
|
||||
```
|
||||
|
||||
- `storageAccount` is the Azure Storage account name.
|
||||
- `storageAccessKey` is the Azure Storage access key.
|
||||
- `queue` is the name of the Azure Storage queue.
|
||||
- `ttlInSeconds` is an optional parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes.
|
||||
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
|
||||
|
||||
## Specifying a time to live at message level

Time to live can be defined at the queue level (as illustrated above) or at the message level. A value set at the message level overrides any value set at the queue level.

To set the time to live at message level, use the `metadata` section in the request body when invoking the binding.

The field name is `ttlInSeconds`.

Example:

```shell
curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
  -H "Content-Type: application/json" \
  -d '{
        "data": {
          "message": "Hi"
        },
        "metadata": {
          "ttlInSeconds": "60"
        }
      }'
```

@@ -1,10 +1,11 @@
# Twilio SMS Binding Spec

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.twilio.sms
  metadata:
@@ -0,0 +1,22 @@
# Twitter Binding Spec

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: bindings.twitter
  metadata:
  - name: consumerKey
    value: "****" # twitter api consumer key, required
  - name: consumerSecret
    value: "****" # twitter api consumer secret, required
  - name: accessToken
    value: "****" # twitter api access token, required
  - name: accessSecret
    value: "****" # twitter api access secret, required
  - name: query
    value: "dapr" # your search query, required
```
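
The Twitter binding acts as an input binding: Dapr watches for tweets matching `query` and delivers each match to the application endpoint named after the component. As a hypothetical sketch of that delivery (the app port `3000` and the trimmed payload are assumptions; the actual body is the full Twitter API tweet object):

```shell
# Hypothetical illustration of the POST Dapr makes to the app
# for each tweet matching the configured query (payload trimmed):
curl -X POST http://localhost:3000/<NAME> \
  -H "Content-Type: application/json" \
  -d '{
        "id_str": "1234567890",
        "text": "Trying out #dapr bindings"
      }'
```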