Merge branch 'v1.0' into observability-concept

Ori Zohar 2021-02-15 11:40:09 -08:00
commit 70b5566b7f
25 changed files with 279 additions and 176 deletions

.gitmodules vendored
View File

@ -4,4 +4,6 @@
[submodule "sdkdocs/python"]
path = sdkdocs/python
url = https://github.com/dapr/python-sdk.git
[submodule "sdkdocs/php"]
path = sdkdocs/php
url = https://github.com/dapr/php-sdk.git

View File

@ -43,6 +43,9 @@ id = "UA-149338238-3"
[[module.mounts]]
source = "../sdkdocs/python/daprdocs/content/en/python-sdk-contributing"
target = "content/contributing/"
[[module.mounts]]
source = "../sdkdocs/php/daprdocs/content/en/php-sdk-docs"
target = "content/developing-applications/sdks/php"
# Markdown Engine - Allow inline html
[markup]

View File

@ -1,14 +1,14 @@
---
type: docs
title: "Dapr actors overview"
title: "Actors overview"
linkTitle: "Overview"
weight: 10
description: Overview of Dapr support for actors
description: Overview of the actors building block
aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---
## Background
## Introduction
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes actors as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.
While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying runtime manages how, when and where each actor runs, and also routes messages between actors.

View File

@ -26,13 +26,7 @@ Actors can save state reliably using state management capability.
You can interact with Dapr through HTTP/gRPC endpoints for state management.
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface:
- Redis
- MongoDB
- PostgreSQL
- SQL Server
- Azure CosmosDB
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}).
## Actor timers and reminders

View File

@ -3,5 +3,5 @@ type: docs
title: "Bindings"
linkTitle: "Bindings"
weight: 40
description: Trigger code from and interface with a large array of external resources
description: Interface with or be triggered from external systems
---

View File

@ -3,7 +3,7 @@ type: docs
title: "Bindings overview"
linkTitle: "Overview"
weight: 100
description: Overview of the Dapr bindings building block
description: Overview of the bindings building block
---
## Introduction
@ -37,19 +37,18 @@ Read the [Create an event-driven app using input bindings]({{< ref howto-trigger
## Output bindings
Output bindings allow users to invoke external resources.
An optional payload and metadata can be sent with the invocation request.
Output bindings allow you to invoke external resources. An optional payload and metadata can be sent with the invocation request.
In order to invoke an output binding:
1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload
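For example, step 2 over HTTP could look like the following sketch, assuming an output binding component named `myevent` has already been defined (the binding name, payload and operation are illustrative):

```bash
# Invoke the output binding "myevent" through the local Dapr sidecar (default HTTP port 3500).
# "operation" selects which binding operation to perform; "create" is the common send/write operation.
# "metadata" is optional and binding-specific, so it is omitted here.
curl -X POST http://localhost:3500/v1.0/bindings/myevent \
  -H "Content-Type: application/json" \
  -d '{
        "data": { "orderId": "100" },
        "operation": "create"
      }'
```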
Read the [Send events to external systems using output bindings]({{< ref howto-bindings.md >}}) page to get started with output bindings.
## Related Topics
- [Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
- [Invoke different resources using output bindings]({{< ref howto-bindings.md >}})
Read the [Use output bindings to interface with external resources]({{< ref howto-bindings.md >}}) page to get started with output bindings.
## Next Steps
* Follow these guides on:
* [How-To: Trigger a service from different resources with input bindings]({{< ref howto-triggers.md >}})
* [How-To: Use output bindings to interface with external resources]({{< ref howto-bindings.md >}})
* Try out the [bindings quickstart](https://github.com/dapr/quickstarts/tree/master/bindings/README.md) which shows how to bind to a Kafka queue
* Read the [bindings API specification]({{< ref bindings_api.md >}})

View File

@ -10,7 +10,8 @@ Output bindings enable you to invoke external resources without taking dependenc
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).
Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
<iframe width="560" height="315" src="https://www.youtube.com/watch?v=ysklxm81MTs" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/ysklxm81MTs?start=1960" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## 1. Create a binding

View File

@ -14,7 +14,7 @@ Dapr uses the Zipkin protocol for distributed traces and metrics collection. Due
Dapr adds an HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts all Dapr and application traffic and automatically injects correlation IDs to trace distributed transactions. This design has several benefits:
* No need for code instrumentation. All traffic is automatically traced (with configurable tracing levels).
* No need for code instrumentation. All traffic is automatically traced with configurable tracing levels.
* Consistent tracing behavior across microservices. Tracing is configured and managed on the Dapr sidecar so that it remains consistent across services made by different teams and potentially written in different programming languages.
* Configurable and extensible. By leveraging the Zipkin API and the OpenTelemetry Collector, Dapr tracing can be configured to work with popular tracing backends, including custom backends a customer may have.
* You can define and enable multiple exporters at the same time.
@ -27,7 +27,7 @@ Read [W3C distributed tracing]({{< ref w3c-tracing >}}) for more background on W
## Configuration
Dapr uses [probabilistic sampling](https://opencensus.io/tracing/sampling/probabilistic/) as defined by OpenCensus. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The deafault sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
Dapr uses probabilistic sampling. The sample rate defines the probability a tracing span will be sampled and can have a value between 0 and 1 (inclusive). The default sample rate is 0.0001 (i.e. 1 in 10,000 spans is sampled).
To change the default tracing behavior, use a configuration file (in self hosted mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object changes the sample rate to 1 (i.e. every span is sampled), and sends traces using the Zipkin protocol to the Zipkin server at http://zipkin.default.svc.cluster.local
@ -44,7 +44,7 @@ spec:
endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
Changing `samplingRate` to 0 will disable tracing altogether.
Note: Changing `samplingRate` to 0 disables tracing altogether.
See the [References](#references) section for more details on how to configure tracing on local environment and Kubernetes environment.

View File

@ -3,7 +3,7 @@ type: docs
title: "Publish and subscribe overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of the Dapr Pub/Sub building block"
description: "Overview of the Pub/Sub building block"
---
## Introduction
@ -106,10 +106,11 @@ The publish/subscribe API is located in the [API reference]({{< ref pubsub_api.m
## Next steps
- Try the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
- Read the [guide on publishing and subscribing]({{< ref howto-publish-subscribe.md >}})
- Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
- Learn about [message time-to-live]({{< ref pubsub-message-ttl.md >}})
- Learn [how to configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
- List of [pub/sub components]({{< ref supported-pubsub >}})
- Read the [API reference]({{< ref pubsub_api.md >}})
* Follow these guides on:
* [How-To: Publish a message and subscribe to a topic]({{< ref howto-publish-subscribe.md >}})
* [How-To: Configure Pub/Sub components with multiple namespaces]({{< ref pubsub-namespaces.md >}})
* Try out the [Pub/Sub quickstart sample](https://github.com/dapr/quickstarts/tree/master/pub-sub)
* Learn about [topic scoping]({{< ref pubsub-scopes.md >}})
* Learn about [message time-to-live (TTL)]({{< ref pubsub-message-ttl.md >}})
* List of [pub/sub components]({{< ref supported-pubsub.md >}})
* Read the [pub/sub API reference]({{< ref pubsub_api.md >}})

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Secrets building block"
linkTitle: "Secrets"
title: "Secrets management"
linkTitle: "Secrets management"
weight: 70
description: Securely access secrets from your application
---

View File

@ -1,9 +1,9 @@
---
type: docs
title: "Secrets stores overview"
linkTitle: "Secrets stores overview"
title: "Secrets management overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of Dapr secrets management building block"
description: "Overview of secrets management building block"
---
It's common for applications to store sensitive information, such as connection strings, keys, and tokens used to authenticate with databases, services, and external systems, in a dedicated secret store.

View File

@ -3,21 +3,24 @@ type: docs
title: "How To: Use secret scoping"
linkTitle: "How To: Use secret scoping"
weight: 3000
description: "Use scoping to limit the secrets that can be read from secret stores"
description: "Use scoping to limit the secrets that can be read by your application from secret stores"
type: docs
---
Follow [these instructions]({{< ref setup-secret-store >}}) to configure secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
You can read [guidance on setting up secret store components]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, by default *any* secret defined within that store is accessible from the Dapr application.
To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration CRD with restrictive permissions.
To limit the secrets to which the Dapr application has access, you can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
Follow [these instructions]({{< ref configuration-concept.md >}}) to define a configuration CRD.
The secret scoping policy applies to any [secret store]({{< ref supported-secret-stores.md >}}), whether that is a local secret store, a Kubernetes secret store or a public cloud secret store. For details on how to set up a [secret store]({{< ref secret-stores-overview.md >}}) read [How To: Retrieve a secret]({{< ref howto-secrets.md >}}).
Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to use secret scoping with your application.
<iframe width="688" height="430" src="https://www.youtube.com/embed/j99RN_nxExA?start=2272" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Scenario 1 : Deny access to all secrets for a secret store
In Kubernetes cluster, the native Kubernetes secret store is added to Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
This example uses Kubernetes. The native Kubernetes secret store is added to your Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
Define the following `appconfig.yaml` and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
Define the following `appconfig.yaml` configuration and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
```yaml
apiVersion: dapr.io/v1alpha1
@ -37,11 +40,11 @@ For applications that need to be denied access to the Kubernetes secret store, f
dapr.io/config: appconfig
```
With this defined, the application no longer has access to Kubernetes secret store.
With this defined, the application no longer has access to any secrets in the Kubernetes secret store.
## Scenario 2 : Allow access to only certain secrets in a secret store
To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`:
This example uses a secret store named `vault`. For example, this could be a Hashicorp Vault secret store component that has been configured for your application. To allow a Dapr application access to only the secrets `secret1` and `secret2` in the `vault` secret store, define the following `appconfig.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
@ -56,9 +59,9 @@ spec:
allowedSecrets: ["secret1", "secret2"]
```
This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
This example defines the configuration for a secret store named `vault`. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Scenario 3: Deny access to certain senstive secrets in a secret store
## Scenario 3: Deny access to certain sensitive secrets in a secret store
Define the following `config.yaml`:
@ -75,11 +78,11 @@ spec:
deniedSecrets: ["secret1", "secret2"]
```
The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
This example uses a secret store named `vault`. The above configuration explicitly denies access to `secret1` and `secret2` in the `vault` secret store while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-concept.md >}}) to apply configuration to the sidecar.
## Permission priority
The `allowedSecrets` and `deniedSecrets` list values take priorty over the `defaultAccess`.
The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess` policy.
Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
---- | ------- | -----------| ----------| ------------
@ -90,6 +93,8 @@ Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission
5 - Default deny with denied list | deny | empty | ["s1"] | deny
6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed
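To check the effect of a scoping policy at runtime, you can read secrets directly through the Dapr secrets API. A minimal sketch, assuming the `vault` store and the secret names used in the scenarios above:

```bash
# Read a secret that the scoping policy allows (returns the secret value as JSON).
curl http://localhost:3500/v1.0/secrets/vault/secret1

# Reading a secret that the policy denies returns a permission error instead of the value.
curl http://localhost:3500/v1.0/secrets/vault/secret3
```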
## Related links
* List of [secret stores]({{< ref supported-secret-stores.md >}})
* Overview of [secret stores]({{< ref secret-stores-overview.md >}})
howto-secrets/

View File

@ -1,8 +1,8 @@
---
type: docs
title: "How-To: Invoke and discover services"
title: "How-To: Invoke services using HTTP"
linkTitle: "How-To: Invoke services"
description: "How-to guide on how to use Dapr service invocation in a distributed application"
description: "Call between services using service invocation"
weight: 2000
---

View File

@ -8,26 +8,26 @@ description: "Overview of the service invocation building block"
## Introduction
Using service invocation, your application can discover and reliably and securely communicate with other applications using the standard protocols of [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/).
Using service invocation, your application can reliably and securely communicate with other applications using the standard [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/) protocols.
In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions:
* How do I discover and invoke methods on different services?
* How do I call other services securely?
* How do I call other services securely with encryption and apply access control on the methods?
* How do I handle retries and transient errors?
* How do I use distributed tracing to see a call graph to diagnose issues in production?
* How do I use tracing to see a call graph with metrics to diagnose issues in production?
Dapr allows you to overcome these challenges by providing an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling and more.
Dapr addresses these challenges by providing a service invocation API that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling, encryption and more.
Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each applications to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
Dapr uses a sidecar architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.
### Invoke logic
### Service invocation
The diagram below is an overview of how Dapr's service invocation works.
<img src="/images/service-invocation-overview.png" width=800 alt="Diagram showing the steps of service invocation">
1. Service A makes an http/gRPC call targeting Service B. The call goes to the local Dapr sidecar.
1. Service A makes an HTTP or gRPC call targeting Service B. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) which is running on the given [hosting platform]({{< ref "hosting" >}}).
3. Dapr forwards the message to Service B's Dapr sidecar
@ -39,11 +39,7 @@ The diagram below is an overview of how Dapr's service invocation works.
7. Service A receives the response.
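In practice, step 1 is just an HTTP (or gRPC) call from Service A to its own Dapr sidecar. A minimal sketch, assuming Service B runs with the app ID `serviceb` and exposes a `neworder` method (both names are illustrative):

```bash
# Service A invokes the "neworder" method on Service B through the local Dapr sidecar
# (default HTTP port 3500); Dapr handles discovery and forwarding.
curl -X POST http://localhost:3500/v1.0/invoke/serviceb/method/neworder \
  -H "Content-Type: application/json" \
  -d '{ "orderId": "100" }'
```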
## Features
Service invocation provides several features to make it easy for you to call methods on remote applications.
### Service invocation API
The API for Pservice invocation can be found in the [spec repo]({{< ref service_invocation_api.md >}}).
Service invocation provides several features to make it easy for you to call methods between applications.
### Namespaces scoping
@ -59,17 +55,6 @@ This is especially useful in cross namespace calls in a Kubernetes cluster. Watc
<iframe width="560" height="315" src="https://www.youtube.com/embed/LYYV_jouEuA?start=497" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
### Retries
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
Errors that cause retries are:
* Network errors including endpoint unavailability and refused connections
* Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars
Per call retries are performed with a backoff interval of 1 second up to a threshold of 3 times.
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
### Service-to-service security
@ -77,28 +62,55 @@ All calls between Dapr applications can be made secure with mutual (mTLS) authen
For more information read the [service-to-service security]({{< ref "security-concept.md#sidecar-to-sidecar-communication" >}}) article.
<img src="/images/security-mTLS-sentry-selfhosted.png" width=800>
### Service access security
### Service access policies security
Applications can control which other applications are allowed to call them and what they are authorized to do via access policies. This enables you to restrict sensitive applications, for example applications holding personnel information, from being accessed by unauthorized applications, and combined with service-to-service secure communication, provides for soft multi-tenancy deployments.
For more information read the [access control allow lists for service invocation]({{< ref invoke-allowlist.md >}}) article.
### Observability
#### Example service invocation security
The diagram below is an example deployment on a Kubernetes cluster with a Daprized `Ingress` service that calls onto `Service A` using service invocation with mTLS encryption and applies an access control policy. `Service A` then calls onto `Service B`, also using service invocation and mTLS. The services run in different namespaces for added isolation.
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios.
<img src="/images/service-invocation-security.png" width=800>
For more information read the [observability]({{< ref observability-concept.md >}}) article.
### Retries
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
Errors that cause retries are:
* Network errors including endpoint unavailability and refused connections.
* Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars.
Per call retries are performed with a backoff interval of 1 second up to a threshold of 3 times.
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
### Pluggable service discovery
Dapr can run on any [hosting platform]({{< ref hosting >}}). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.
Dapr can run on any [hosting platform]({{< ref hosting >}}). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. For local and multiple physical machines this uses the mDNS protocol.
### Round robin load balancing with mDNS
Dapr provides round robin load balancing of service invocation requests with the mDNS protocol, for example with a single machine or with multiple, networked, physical machines.
The diagram below shows an example of how this works. If you have 1 instance of an application with app ID `FrontEnd` and 3 instances of an application with app ID `Cart`, and you call from the `FrontEnd` app to the `Cart` app, Dapr round robins between the 3 instances. These instances can be on the same machine or on different machines.
<img src="/images/service-invocation-mdns-round-robin.png" width=800 alt="Diagram showing the steps of service invocation">
Note: The app ID is unique per application, not per application instance. You can run any number of instances of the same application and all of those instances share the same app ID.
### Tracing and metrics with observability
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios. This gives you call graphs and metrics on the calls between your services. For more information read about [observability]({{< ref observability-concept.md >}}).
### Service invocation API
The API for service invocation can be found in the [service invocation API reference]({{< ref service_invocation_api.md >}}) which describes how to invoke a method on another service.
## Example
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A" and the Node.js app would be "Service B".
The diagram below shows sequence 1-7 again on a local machine showing the API call:
The diagram below shows sequence 1-7 again on a local machine showing the API calls:
<img src="/images/service-invocation-overview-example.png" width=800>
@ -112,9 +124,9 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
## Next steps
* Follow these guide on:
* [How-to: Get started with HTTP service invocation]({{< ref howto-invoke-discover-services.md >}})
* [How-to: Get started with Dapr and gRPC]({{< ref grpc >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs]({{< ref sdks >}})
* Follow these guides on:
* [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
* [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
* See the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers
* Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers

View File

@ -8,54 +8,39 @@ description: "Overview of the state management building block"
## Introduction
<img src="/images/state-management-overview.png" width=900>
Using state management, your application can store data as key/value pairs in the [supported state stores]({{< ref supported-state-stores.md >}}).
Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores]({{< ref supported-state-stores.md >}}), without adding or learning a third party SDK.
When using state management your application can leverage several features that would otherwise be complicated and error-prone to build yourself such as:
When using state management your application can leverage features that would otherwise be complicated and error-prone to build yourself such as:
- Distributed concurrency and data consistency
- Retry policies
- Bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations
Your application can use Dapr's state management API to save and read key/value pairs using a state store component, as shown in the diagram below. For example, by using HTTP POST you can save key/value pairs and by using HTTP GET you can read a key and have its value returned.
<img src="/images/state-management-overview.png" width=900>
## Features
### State management API
Developers can use the [state management API]({{< ref state_api.md >}}) to retrieve, save and delete state values by providing keys.
### Pluggable state stores
Dapr data stores are modeled as pluggable components, which can be swapped out without any changes to your service code. Check out the [full list of state stores]({{< ref supported-state-stores >}}) to see what Dapr supports.
Dapr data stores are modeled as components, which can be swapped out without any changes to your service code. See [supported state stores]({{< ref supported-state-stores >}}) to see the list.
### Configurable state store behavior
Dapr allows developers to attach additional metadata to a state operation request that describes how the request is expected to be handled.
For example, you can attach:
Dapr allows developers to attach additional metadata to a state operation request that describes how the request is expected to be handled. You can attach:
- Concurrency requirements
- Consistency requirements
- Retry policies
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern.
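For example, these options can be attached per item in a save request; a hedged sketch (key, value and ETag are illustrative):

```bash
# Save a value with explicit concurrency and consistency hints attached to the request.
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "planet",
          "value": "Tatooine",
          "etag": "1",
          "options": {
            "concurrency": "first-write",
            "consistency": "strong"
          }
        }
      ]'
```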
Not all stores are created equal. To ensure portability of your application you can query the capabilities of the store and make your code adaptive to different store capabilities.
The following table gives examples of capabilities of popular data store implementations.
| Store | Strong consistent write | Strong consistent read | ETag |
|-------------------|-------------------------|------------------------|------|
| Cosmos DB | Yes | Yes | Yes |
| PostgreSQL | Yes | Yes | Yes |
| Redis | Yes | Yes | Yes |
| Redis (clustered) | Yes | No | Yes |
| SQL Server | Yes | Yes | Yes |
[Not all stores are created equal]({{< ref supported-state-stores.md >}}). To ensure portability of your application you can query the capabilities of the store and make your code adaptive to different store capabilities.
### Concurrency
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. When the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an ETag property to the returned state. When the user code tries to update or delete a state, it's expected to attach the ETag either through the request body for updates or the `If-Match` header for deletes. The write operation can succeed only when the provided ETag matches with the ETag in the state store.
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#retry-policies) to compensate for such conflicts when using ETags.
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a retry policy to compensate for such conflicts when using ETags.
If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
@ -63,25 +48,24 @@ If your application omits ETags in writing requests, Dapr skips ETag checks whil
For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
{{% /alert %}}
Read the [API reference]({{< ref state_api.md >}}) to learn how to set concurrency options.
### Consistency
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
Visit the [API reference]({{< ref state_api.md >}}) to learn how to set consistency options.
### Retry policies
Dapr allows you to attach a retry policy to any write request. A policy is described by an **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
Visit the [API reference]({{< ref state_api.md >}}) to learn how to set retry policy options.
Read the [API reference]({{< ref state_api.md >}}) to learn how to set consistency options.
### Bulk operations
Dapr supports two types of bulk operations - **bulk** or **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. On the other hand, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
Visit the [API reference]({{< ref state_api.md >}}) to learn how use bulk and multi options.
Read the [API reference]({{< ref state_api.md >}}) to learn how to use bulk and multi options.
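A multi (transactional) operation is sent to the transaction endpoint rather than the regular state endpoint. A minimal sketch, assuming the `statestore` component supports transactions:

```bash
# Group an upsert and a delete into a single atomic transaction.
curl -X POST http://localhost:3500/v1.0/state/statestore/transaction \
  -H "Content-Type: application/json" \
  -d '{
        "operations": [
          { "operation": "upsert", "request": { "key": "key1", "value": "newValue" } },
          { "operation": "delete", "request": { "key": "key2" } }
        ]
      }'
```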
### Actor state
Transactional state stores can be used to store actor state. To specify which state store to use for actors, set the value of the `actorStateStore` property to `true` in the metadata section of the state store component. Actor state is stored with a specific scheme in transactional state stores, which allows for consistent querying. Read the [API reference]({{< ref state_api.md >}}) to learn more about state stores for actors and the [actors API reference]({{< ref actors_api.md >}}).
### Query state store directly
@ -108,11 +92,19 @@ SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||tem
```
{{% alert title="Note on direct queries" color="primary" %}}
Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the actor instances.
Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the Dapr state management or actors APIs.
{{% /alert %}}
## Next steps
### State management API
- Follow the [state store setup guides]({{< ref setup-state-store >}})
- Read the [state management API specification]({{< ref state_api.md >}})
- Read the [actors API specification]({{< ref actors_api.md >}})
The API for state management can be found in the [state management API reference]({{< ref state_api.md >}}) which describes how to retrieve, save and delete state values by providing keys.
## Next steps
* Follow these guides on:
* [How-To: Save and get state]({{< ref howto-get-save-state.md >}})
* [How-To: Build a stateful service]({{< ref howto-stateful-service.md >}})
* [How-To: Share state between applications]({{< ref howto-share-state.md >}})
* Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use state management or try the samples in the [Dapr SDKs]({{< ref sdks >}})
* List of [state store components]({{< ref supported-state-stores.md >}})
* Read the [state management API reference]({{< ref state_api.md >}})
* Read the [actors API reference]({{< ref actors_api.md >}})

View File

@ -9,32 +9,36 @@ no_list: true
### Generic
| Name | CRUD | Transactional | Status |
|----------------------------------------------------------------|------|---------------|--------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | Alpha |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | Alpha |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | Alpha |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | Alpha |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | Alpha |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | Alpha |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | Alpha |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | Alpha |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | Alpha |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | Alpha |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | Alpha |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | Alpha |
| Name | CRUD | Transactional <br/>(Supports Actors) | ETag | Status |
|----------------------------------------------------------------|------|---------------------|------|--------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | Alpha |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | Alpha |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | Alpha |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | Alpha |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | Alpha |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | Alpha |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | Alpha |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ❌ | Alpha |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | Alpha |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | Alpha |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | Alpha |
| RethinkDB | ✅ | ✅ | ✅ | Alpha |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | Alpha |
### Google Cloud Platform (GCP)
| Name | CRUD | Transactional | Status |
|-------------------------------------------------------|------|---------------|--------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | Alpha |
| Name | CRUD | Transactional <br/>(Supports Actors) | ETag | Status |
|-------------------------------------------------------|------|---------------------|------|--------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | Alpha |
### Microsoft Azure
| Name | CRUD | Transactional | Status |
|------------------------------------------------------------------|------|---------------|--------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | Alpha |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | Alpha |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ❌ | Alpha |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | Alpha |
| Name | CRUD | Transactional <br/>(Supports Actors) | ETag | Status |
|------------------------------------------------------------------|------|---------------------|------|--------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | Alpha |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | Alpha |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | Alpha |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | Alpha |
### Amazon Web Services (AWS)
| Name | CRUD | Transactional <br/>(Supports Actors) | ETag | Status |
|------------------------------------------------------------------|------|---------------------|------|--------|
| AWS DynamoDB | ✅ | ❌ | ❌ | Alpha |

View File

@ -13,6 +13,9 @@ Using Dapr, you can control how many requests and events will invoke your applic
*Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** middleware. However, there is an important difference between the two approaches. The rate limit middleware is time bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point of time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}).*
Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting.
<iframe width="764" height="430" src="https://www.youtube.com/embed/yRI5g6o_jp8?t=1710" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Setting app-max-concurrency
Without using Dapr, a developer would need to create some sort of a semaphore in the application and take care of acquiring and releasing it.
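As a sketch of the self hosted usage, the flag is passed to `dapr run` (the app ID, port and command are illustrative):

```bash
# Limit the application to a single concurrent request or event at any point in time.
# In Kubernetes, the equivalent is the dapr.io/app-max-concurrency annotation.
dapr run --app-id nodeapp --app-port 3000 --app-max-concurrency 1 node app.js
```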

View File

@ -10,6 +10,9 @@ Access control enables the configuration of policies that restrict what operatio
An access control policy is specified in configuration and is applied to the Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applications access to the called app.
Watch this [video](https://youtu.be/j99RN_nxExA?t=1108) on how to apply an access control list for service invocation.
<iframe width="688" height="430" src="https://www.youtube.com/embed/j99RN_nxExA?start=1108" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
## Concepts
**TrustDomain** - A "trust domain" is a logical group to manage trust relationships. Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert.

View File

@ -6,8 +6,10 @@ weight: 100
description: "Configure Dapr to send distributed tracing data"
---
It is recommended to run Dapr with tracing enabled for any production scenario.
Since Dapr uses Open Census, you can configure various exporters for tracing and telemetry data based on your environment, whether it is running in the cloud or on-premises.
It is recommended to run Dapr with tracing enabled for any production
scenario. You can configure Dapr to send tracing and telemetry data
to many backends based on your environment, whether it is running in
the cloud or on-premises.
## Tracing configuration
@ -29,13 +31,13 @@ The following table lists the properties for tracing:
| `zipkin.endpointAddress` | string | Set the Zipkin server address.
## Zipkin in stand-alone mode
## Zipkin in self hosted mode
The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container on your local machine and view them.
For Standalone mode, create a Dapr configuration file locally and reference it with the Dapr CLI.
For self hosted mode, create a Dapr configuration file locally and reference it with the Dapr CLI.
1. Create the following YAML file:
1. Create the following `config.yaml` YAML file:
```yaml
apiVersion: dapr.io/v1alpha1
@ -56,7 +58,7 @@ For Standalone mode, create a Dapr configuration file locally and reference it w
docker run -d -p 9411:9411 openzipkin/zipkin
```
3. Launch Dapr with the `--config` param:
3. Launch Dapr with the `--config` param set to the path where the `config.yaml` is saved:
```bash
dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js
@ -109,7 +111,7 @@ annotations:
dapr.io/config: "zipkin"
```
That's it! Your sidecar is now configured for use with Open Census and Zipkin.
That's it! Your sidecar is now configured for use with Zipkin.
### Viewing Tracing Data
@ -123,3 +125,5 @@ On your browser, go to ```http://localhost:9411``` and you should see the Zipkin
![zipkin](/images/zipkin_ui.png)
## References
- [Zipkin for distributed tracing](https://zipkin.io/)

View File

@ -6,9 +6,10 @@ weight: 3000
description: "Set up Jaeger for distributed tracing"
type: docs
---
Dapr currently supports two kind of tracing protocol: OpenCensus and
Zipkin. Since Jaeger is compatible with Zipkin, the Zipkin
protocol can be used to talk to Jaeger.
Dapr currently supports the Zipkin protocol. Since Jaeger is
compatible with Zipkin, the Zipkin protocol can be used to talk to
Jaeger.
## Configure self hosted mode
@ -54,8 +55,7 @@ dapr run --app-id mynode --app-port 3000 node app.js --config config.yaml
```
### Viewing Traces
To view traces, in your browser go to http://localhost:16686 and you
will see the Zipkin UI.
To view traces, in your browser go to http://localhost:16686 and you will see the Jaeger UI.
## Configure Kubernetes
The following steps show you how to configure Dapr to send distributed tracing data to Jaeger running as a container in your Kubernetes cluster, and how to view them.
@ -68,7 +68,7 @@ First create the following YAML file to install Jaeger
apiVersion: jaegertracing.io/v1
kind: "Jaeger"
metadata:
name: "jaeger"
name: jaeger
spec:
strategy: allInOne
ingress:
@ -121,7 +121,7 @@ annotations:
dapr.io/config: "tracing"
```
That's it! your sidecar is now configured for use with Jaeger.
That's it! Your Dapr sidecar is now configured for use with Jaeger.
### Viewing Tracing Data

View File

@ -36,7 +36,7 @@ Launch Zipkin using Docker:
docker run -d -p 9411:9411 openzipkin/zipkin
```
3. The applications launched with `dapr run` will by default reference the config file in `$HOME/.dapr/config.yaml` or `%USERPROFILE%\.dapr\config.yaml` and can be overridden with the Dapr CLI using the `--config` param:
3. The applications launched with `dapr run` by default reference the config file in `$HOME/.dapr/config.yaml` or `%USERPROFILE%\.dapr\config.yaml` and can be overridden with the Dapr CLI using the `--config` param:
```bash
dapr run --app-id mynode --app-port 3000 node app.js
@ -46,7 +46,7 @@ To view traces, in your browser go to http://localhost:9411 and you will see the
## Configure Kubernetes
The following steps shows you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, how to view them.
The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view them.
### Setup
@ -92,11 +92,11 @@ annotations:
dapr.io/config: "tracing"
```
That's it! your sidecar is now configured for use with Open Census and Zipkin.
That's it! Your sidecar is now configured to send traces to Zipkin.
### Viewing Tracing Data
To view traces, connect to the Zipkin Service and open the UI:
To view traces, connect to the Zipkin service and open the UI:
```bash
kubectl port-forward svc/zipkin 9411:9411
@ -107,4 +107,5 @@ In your browser, go to `http://localhost:9411` and you will see the Zipkin UI.
![zipkin](/images/zipkin_ui.png)
## References
- [Zipkin for distributed tracing](https://zipkin.io/)
- [W3C distributed tracing]({{< ref w3c-tracing >}})

View File

@ -103,7 +103,8 @@ curl -X POST http://localhost:3500/v1.0/state/starwars \
-d '[
{
"key": "weapon",
"value": "DeathStar"
"value": "DeathStar",
"etag": "1234"
},
{
"key": "planet",
@ -288,7 +289,7 @@ None.
### Example
```shell
curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "ETag: xxxxxxx"
curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "If-Match: xxxxxxx"
```
## State transactions
@ -377,6 +378,7 @@ curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
## Configuring state store for actors
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently MongoDB, Redis, PostgreSQL, SQL Server, and Azure CosmosDB implement the transactional state store interface.
To specify which state store to use for actors, set the value of the `actorStateStore` property to `true` in the metadata section of the state store component YAML file.
Example: The following component YAML configures Redis to be used as the state store for actors.
@ -434,9 +436,9 @@ When a strong consistency hint is attached, a state store should:
* For read requests, the state store should return the most up-to-date data consistently across replicas.
* For write/delete requests, the state store should synchronously replicate updated data to the configured quorum before completing the write request.
### Example
### Example - Complete options request
The following is a sample *set* request with a complete operation option definition:
The following is an example *set* request with a complete options definition:
```shell
curl -X POST http://localhost:3500/v1.0/state/starwars \
@ -454,6 +456,82 @@ curl -X POST http://localhost:3500/v1.0/state/starwars \
]'
```
### Example - Working with ETags
The following is an example which walks through the usage of an ETag when setting/deleting an object in a compatible statestore.
First, store an object in a state store (this sample uses a Redis state store defined with the component name 'statestore'):
```shell
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "1"
}
]'
```
Get the object to find the ETag that was set automatically by the statestore:
```shell
curl http://localhost:3500/v1.0/state/statestore/sampleData -v
* Connected to localhost (127.0.0.1) port 3500 (#0)
> GET /v1.0/state/statestore/sampleData HTTP/1.1
> Host: localhost:3500
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: fasthttp
< Date: Sun, 14 Feb 2021 04:51:50 GMT
< Content-Type: application/json
< Content-Length: 3
< Etag: 1
< Traceparent: 00-3452582897d134dc9793a244025256b1-b58d8d773e4d661d-01
<
* Connection #0 to host localhost left intact
"1"* Closing connection 0
```
The returned ETag here was 1. Sending a new request to update or delete the data with the wrong ETag will return an error (omitting the ETag will allow the request):
```shell
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "2",
"etag": "2"
}
]'
{"errorCode":"ERR_STATE_SAVE","message":"failed saving state in state store statestore: possible etag mismatch. error from state store: ERR Error running script (call to f_83e03ec05d6a3b6fb48483accf5e594597b6058f): @user_script:1: user_script:1: failed to set key nodeapp||sampleData"}
# Delete
curl -X DELETE -H 'If-Match: 5' http://localhost:3500/v1.0/state/statestore/sampleData
{"errorCode":"ERR_STATE_DELETE","message":"failed deleting state with key sampleData: possible etag mismatch. error from state store: ERR Error running script (call to f_9b5da7354cb61e2ca9faff50f6c43b81c73c0b94): @user_script:1: user_script:1: failed to delete node
app||sampleData"}
```
In order to update or delete the object, simply match the ETag in either the request body (update) or the `If-Match` header (delete). Note, when the state is updated, it receives a new ETag so further updates or deletes will need to use the new ETag.
```shell
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "2",
"etag": "1"
}
]'
# Delete
curl -X DELETE -H 'If-Match: 1' http://localhost:3500/v1.0/state/statestore/sampleData
```
## Next Steps
- [State management overview]({{< ref state-management-overview.md >}})
- [How-To: Save & get state]({{< ref howto-get-save-state.md >}})

Binary file not shown (new image, 29 KiB).

Binary file not shown (new image, 96 KiB).

sdkdocs/php Submodule

@ -0,0 +1 @@
Subproject commit d3525e99cc88789d4693ed5cb39f91f68ef2f335