Developing applications section [hugo-docs] (#847)

* Developing applications section
* Changing capital letters in getting started section
* Remove redundant getting started title

@@ -1,5 +1,6 @@
---
title: "Developing applications with Dapr"
shortTitle: "Developing"
linkTitle: "Developing applications"
description: "Tools, tips, and information on how to build your application with Dapr"
weight: 30
---
@@ -1,7 +1,7 @@
---
title: "Building Blocks"
linkTitle: "Building Blocks"
weight: 30
description: "HTTP or gRPC APIs that can be called from user code and uses one or more Dapr components."
title: "Building blocks"
linkTitle: "Building blocks"
weight: 10
description: "Dapr capabilities that solve common development challenges for distributed applications"
---
@@ -1,143 +1,6 @@
---
title: "Dapr Actors Runtime"
title: "Actors"
linkTitle: "Actors"
weight: 50
description: >
  Information on actors in Dapr
description: Use Dapr to implement the actor programming model in your application
---

# Dapr actors runtime

The Dapr actors runtime provides the following capabilities:

- [Method Invocation](#actor-method-invocation)
- [State Management](#actor-state-management)
- [Timers and Reminders](#actor-timers-and-reminders)

## Actor method invocation

You can invoke an actor method by calling its HTTP/gRPC endpoint:

```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
```

You can provide any data for the actor method in the request body, and the response body contains the data returned by the actor method call.

Refer to the [API spec](../../reference/api/actors_api.md#invoke-actor-method) for more details.
## Actor state management

Actors can save state reliably using the state management capability.

You can interact with Dapr through HTTP/gRPC endpoints for state management.

To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface:

- Redis
- MongoDB
- PostgreSQL
- SQL Server
- Azure CosmosDB
## Actor timers and reminders

Actors can schedule periodic work on themselves by registering either timers or reminders.

### Actor timers

You can register a callback on an actor to be executed based on a timer.

The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.

The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.

The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.

All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.

You can create a timer for an actor by making an HTTP/gRPC request to Dapr:

```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```

The timer `dueTime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.

The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m9s0ms",
  "period":"0h0m3s0ms"
}
```

The following request body configures a timer with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m0s0ms",
  "period":"0h0m3s0ms"
}
```

You can remove an actor timer by calling:

```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```

Refer to the [API spec](../../reference/api/actors_api.md#invoke-timer) for more details.
### Actor reminders

Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.

You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr:

```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

The reminder `dueTime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.

The following request body configures a reminder with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m9s0ms",
  "period":"0h0m3s0ms"
}
```

The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m0s0ms",
  "period":"0h0m3s0ms"
}
```

The following request body configures a reminder with a `dueTime` of 15 seconds and an empty `period`. This means it will first fire after 15 seconds, then never fire again.

```json
{
  "dueTime":"0h0m15s0ms",
  "period":""
}
```

#### Retrieve Actor Reminder

You can retrieve an actor reminder by calling:

```http
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

#### Remove the Actor Reminder

You can remove an actor reminder by calling:

```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

Refer to the [API spec](../../reference/api/actors_api.md#invoke-reminder) for more details.
@@ -0,0 +1,136 @@
---
title: "Dapr actors overview"
linkTitle: "Dapr actors"
weight: 200
description: Overview of Dapr support for actors
---

The Dapr actors runtime provides the following capabilities:

## Actor method invocation

You can invoke an actor method by calling its HTTP/gRPC endpoint:

```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
```

You can provide any data for the actor method in the request body, and the response body contains the data returned by the actor method call.

Refer to the [API spec](../../reference/api/actors_api.md#invoke-actor-method) for more details.
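As an illustration of the endpoint above, the following sketch builds the invocation URL and a JSON payload. The sidecar address, the `stormtrooper` actor type, its ID, and the `shoot` method are assumptions for the example; any HTTP client can then send the request.

```python
import json

DAPR_BASE = "http://localhost:3500/v1.0"  # assumed default sidecar HTTP port

def actor_method_url(actor_type: str, actor_id: str, method: str) -> str:
    """Build the actor method invocation endpoint described above."""
    return f"{DAPR_BASE}/actors/{actor_type}/{actor_id}/method/{method}"

# Hypothetical actor; any JSON-serializable body may be sent to the method.
url = actor_method_url("stormtrooper", "50", "shoot")
body = json.dumps({"target": "x-wing"})
# e.g. urllib.request.Request(url, data=body.encode(), method="PUT")
```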
## Actor state management

Actors can save state reliably using the state management capability.

You can interact with Dapr through HTTP/gRPC endpoints for state management.

To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface:

- Redis
- MongoDB
- PostgreSQL
- SQL Server
- Azure CosmosDB
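For illustration only, here is a sketch of what a multi-item transaction can look like as a list of upsert/delete operations that must be committed atomically. The exact request shape is defined by the actors API spec, and the `counter` and `location` keys are hypothetical.

```python
import json

def state_transaction(upserts: dict, deletes: list) -> str:
    """Build a list of state operations to be committed atomically."""
    ops = [{"operation": "upsert", "request": {"key": k, "value": v}}
           for k, v in upserts.items()]
    ops += [{"operation": "delete", "request": {"key": k}} for k in deletes]
    return json.dumps(ops)

# Hypothetical actor state change: set one key, remove another, atomically.
body = state_transaction({"counter": 7}, ["location"])
```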
## Actor timers and reminders

Actors can schedule periodic work on themselves by registering either timers or reminders.

### Actor timers

You can register a callback on an actor to be executed based on a timer.

The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.

The next period of the timer starts after the callback completes execution. This implies that the timer is stopped while the callback is executing and is started when the callback finishes.

The Dapr actors runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.

All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actors runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.

You can create a timer for an actor by making an HTTP/gRPC request to Dapr:

```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```

The timer `dueTime` and callback are specified in the request body. The due time represents when the timer will first fire after registration. The `period` represents how often the timer fires after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid.

The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m9s0ms",
  "period":"0h0m3s0ms"
}
```

The following request body configures a timer with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it fires immediately after registration, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m0s0ms",
  "period":"0h0m3s0ms"
}
```

You can remove an actor timer by calling:

```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```

Refer to the [API spec](../../reference/api/actors_api.md#invoke-timer) for more details.
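The `0h0m9s0ms`-style duration strings in these request bodies can be generated with a small helper. The helper names below are ours, not part of Dapr.

```python
def dapr_duration(hours=0, minutes=0, seconds=0, millis=0):
    """Format a duration in the 0h0m9s0ms style used by timer bodies."""
    return f"{hours}h{minutes}m{seconds}s{millis}ms"

def timer_body(due_seconds, period_seconds):
    # dueTime: first fire after registration; period: repeat interval.
    return {"dueTime": dapr_duration(seconds=due_seconds),
            "period": dapr_duration(seconds=period_seconds)}

# timer_body(9, 3) reproduces the first example body above.
```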
### Actor reminders

Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.

You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr:

```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

The reminder `dueTime` and callback can be specified in the request body. The due time represents when the reminder first fires after registration. The `period` represents how often the reminder will fire after that. A due time of 0 means to fire immediately. Negative due times and negative periods are invalid. To register a reminder that fires only once, set the period to an empty string.

The following request body configures a reminder with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m9s0ms",
  "period":"0h0m3s0ms"
}
```

The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 3 seconds. This means it will fire immediately after registration, then every 3 seconds after that.

```json
{
  "dueTime":"0h0m0s0ms",
  "period":"0h0m3s0ms"
}
```

The following request body configures a reminder with a `dueTime` of 15 seconds and an empty `period`. This means it will first fire after 15 seconds, then never fire again.

```json
{
  "dueTime":"0h0m15s0ms",
  "period":""
}
```

#### Retrieve actor reminder

You can retrieve an actor reminder by calling:

```http
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

#### Remove the actor reminder

You can remove an actor reminder by calling:

```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

Refer to the [API spec](../../reference/api/actors_api.md#invoke-reminder) for more details.
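A short sketch contrasting a one-shot reminder (empty `period`) with a repeating one, per the rules above; the helper name is ours, not part of Dapr.

```python
import json

def reminder_body(due: str, period: str = "") -> str:
    """Build a reminder registration body; an empty period means fire once."""
    return json.dumps({"dueTime": due, "period": period})

one_shot = reminder_body("0h0m15s0ms")                # fires once, after 15s
repeating = reminder_body("0h0m9s0ms", "0h0m3s0ms")   # after 9s, then every 3s
```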
@@ -1,13 +1,10 @@
---
title: "Introduction to Actors"
linkTitle: "Actor Background"
weight: 51
description: >
  Information on virtual actors in Dapr.
title: "Introduction to actors"
linkTitle: "Introduction to actors"
weight: 100
description: Learn more about the actor pattern
---

# Introduction to actors

The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes **actors** as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.

While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying **runtime** manages how, when and where each actor runs, and also routes messages between actors.

@@ -21,13 +18,6 @@ Dapr includes a runtime that specifically implements the [Virtual Actor pattern]
- [Dapr Actor Features](./actors_features.md)
- [Dapr Actor API Spec](../../reference/api/actors_api.md)

## Contents

- [Actors in Dapr](#actors-in-dapr)
- [Actor Lifetime](#actor-lifetime)
- [Distribution and Failover](#distribution-and-failover)
- [Actor Communication](#actor-communication)

### When to use actors

As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve.

@@ -42,7 +32,7 @@ The actor design pattern can be a good fit to a number of distributed systems pr

Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.

<img src="../../images/actor_game_example.png" width=400>
<img src="/images/actor_background_game_example.png" width=400>

## Actor lifetime

@@ -65,11 +55,11 @@ Actors are distributed across the instances of the actor service, and those inst
### Actor placement service

The Dapr actor runtime manages the distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create, and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment and can change dynamically as new instances of actor services are created and destroyed. This is shown in the diagram below.


<img src="/images/actors_background_placement_service_registration.png" width=600>

When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.


<img src="/images/actors_background_id_hashing_calling.png" width=600>

This simplifies some choices but also carries some considerations:

@@ -98,7 +88,8 @@ A single actor instance cannot process more than one request at a time. An actor

Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.


<img src="/images/actors_background_communication.png" width=600>

### Turn-based access

@@ -108,4 +99,5 @@ The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor

The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.


<img src="/images/actors_background_concurrency.png" width=600>
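As a rough illustration of turn-based access, a per-actor lock serializes method, timer, and reminder callbacks so only one turn runs at a time. This is our sketch of the idea, not Dapr's actual implementation.

```python
import threading
from collections import defaultdict

# One lock per actor ID: a method, timer, or reminder callback is a "turn"
# that runs only while holding the actor's lock.
_locks = defaultdict(threading.Lock)

def invoke_in_turn(actor_id: str, method, *args):
    """Run `method` as a single turn for the given actor."""
    with _locks[actor_id]:
        return method(*args)
```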
@@ -1,106 +1,6 @@
---
title: "Dapr Bindings"
title: "Bindings"
linkTitle: "Bindings"
weight: 40
description: >
  Information on the Dapr bindings building block
description: Use Dapr to invoke external systems and build event driven applications
---

# Bindings

Using bindings, you can trigger your app with events coming in from external systems, or invoke external systems. This building block provides several benefits for you and your code:

* Remove the complexities of connecting to, and polling from, messaging systems such as queues and message buses
* Focus on business logic and not the implementation details of how to interact with a system
* Keep your code free from SDKs or libraries
* Handle retries and failure recovery
* Switch between bindings at run time
* Build portable applications where environment-specific bindings are set up and no code changes are required

For a specific example, bindings would allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, or worrying about polling from Twilio (or using websockets, etc.).

Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings).

## Supported bindings and specs

Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.

### Generic

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Cron (Scheduler)](../../reference/specs/bindings/cron.md) | ✅ | ✅ | Experimental |
| [HTTP](../../reference/specs/bindings/http.md) | | ✅ | Experimental |
| [InfluxDB](../../reference/specs/bindings/influxdb.md) | | ✅ | Experimental |
| [Kafka](../../reference/specs/bindings/kafka.md) | ✅ | ✅ | Experimental |
| [Kubernetes Events](../../reference/specs/bindings/kubernetes.md) | ✅ | | Experimental |
| [MQTT](../../reference/specs/bindings/mqtt.md) | ✅ | ✅ | Experimental |
| [PostgreSQL](../../reference/specs/bindings/postgres.md) | | ✅ | Experimental |
| [RabbitMQ](../../reference/specs/bindings/rabbitmq.md) | ✅ | ✅ | Experimental |
| [Redis](../../reference/specs/bindings/redis.md) | | ✅ | Experimental |
| [Twilio](../../reference/specs/bindings/twilio.md) | | ✅ | Experimental |
| [Twitter](../../reference/specs/bindings/twitter.md) | ✅ | ✅ | Experimental |
| [SendGrid](../../reference/specs/bindings/sendgrid.md) | | ✅ | Experimental |

### Amazon Web Services (AWS)

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [AWS DynamoDB](../../reference/specs/bindings/dynamodb.md) | | ✅ | Experimental |
| [AWS S3](../../reference/specs/bindings/s3.md) | | ✅ | Experimental |
| [AWS SNS](../../reference/specs/bindings/sns.md) | | ✅ | Experimental |
| [AWS SQS](../../reference/specs/bindings/sqs.md) | ✅ | ✅ | Experimental |
| [AWS Kinesis](../../reference/specs/bindings/kinesis.md) | ✅ | ✅ | Experimental |

### Google Cloud Platform (GCP)

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [GCP Cloud Pub/Sub](../../reference/specs/bindings/gcppubsub.md) | ✅ | ✅ | Experimental |
| [GCP Storage Bucket](../../reference/specs/bindings/gcpbucket.md) | | ✅ | Experimental |

### Microsoft Azure

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Azure Blob Storage](../../reference/specs/bindings/blobstorage.md) | | ✅ | Experimental |
| [Azure EventHubs](../../reference/specs/bindings/eventhubs.md) | ✅ | ✅ | Experimental |
| [Azure CosmosDB](../../reference/specs/bindings/cosmosdb.md) | | ✅ | Experimental |
| [Azure Service Bus Queues](../../reference/specs/bindings/servicebusqueues.md) | ✅ | ✅ | Experimental |
| [Azure SignalR](../../reference/specs/bindings/signalr.md) | | ✅ | Experimental |
| [Azure Storage Queues](../../reference/specs/bindings/storagequeues.md) | ✅ | ✅ | Experimental |
| [Azure Event Grid](../../reference/specs/bindings/eventgrid.md) | ✅ | ✅ | Experimental |

## Input bindings

Input bindings are used to trigger your application when an event from an external resource has occurred.
An optional payload and metadata might be sent with the request.

In order to receive events from an input binding:

1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events

> On startup, Dapr sends an `OPTIONS` request to the application for all defined input bindings and expects a status code other than `404 (NOT FOUND)` if the application wants to subscribe to the binding.

Read the [Create an event-driven app using input bindings](../../howto/trigger-app-with-input-binding) section to get started with input bindings.

## Output bindings

Output bindings allow you to invoke external resources.
An optional payload and metadata can be sent with the invocation request.

In order to invoke an output binding:

1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload

Read the [Send events to external systems using Output Bindings](../../howto/send-events-with-output-bindings) section to get started with output bindings.

## Related Topics
* [Implementing a new binding](https://github.com/dapr/docs/tree/master/reference/specs/bindings)
* [Trigger a service from different resources with input bindings](../../howto/trigger-app-with-input-binding)
* [Invoke different resources using output bindings](../../howto/send-events-with-output-bindings)
@@ -0,0 +1,103 @@
---
title: "Bindings overview"
linkTitle: "Bindings overview"
weight: 100
description: Overview of the Dapr bindings building block
---

Using bindings, you can trigger your app with events coming in from external systems, or invoke external systems. This building block provides several benefits for you and your code:

* Remove the complexities of connecting to, and polling from, messaging systems such as queues and message buses
* Focus on business logic and not the implementation details of how to interact with a system
* Keep your code free from SDKs or libraries
* Handle retries and failure recovery
* Switch between bindings at run time
* Build portable applications where environment-specific bindings are set up and no code changes are required

For a specific example, bindings would allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, or worrying about polling from Twilio (or using websockets, etc.).

Bindings are developed independently of the Dapr runtime. You can view and contribute to the bindings [here](https://github.com/dapr/components-contrib/tree/master/bindings).

## Supported bindings and specs

Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.

### Generic

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Cron (Scheduler)](../../reference/specs/bindings/cron.md) | ✅ | ✅ | Experimental |
| [HTTP](../../reference/specs/bindings/http.md) | | ✅ | Experimental |
| [InfluxDB](../../reference/specs/bindings/influxdb.md) | | ✅ | Experimental |
| [Kafka](../../reference/specs/bindings/kafka.md) | ✅ | ✅ | Experimental |
| [Kubernetes Events](../../reference/specs/bindings/kubernetes.md) | ✅ | | Experimental |
| [MQTT](../../reference/specs/bindings/mqtt.md) | ✅ | ✅ | Experimental |
| [PostgreSQL](../../reference/specs/bindings/postgres.md) | | ✅ | Experimental |
| [RabbitMQ](../../reference/specs/bindings/rabbitmq.md) | ✅ | ✅ | Experimental |
| [Redis](../../reference/specs/bindings/redis.md) | | ✅ | Experimental |
| [Twilio](../../reference/specs/bindings/twilio.md) | | ✅ | Experimental |
| [Twitter](../../reference/specs/bindings/twitter.md) | ✅ | ✅ | Experimental |
| [SendGrid](../../reference/specs/bindings/sendgrid.md) | | ✅ | Experimental |

### Amazon Web Services (AWS)

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [AWS DynamoDB](../../reference/specs/bindings/dynamodb.md) | | ✅ | Experimental |
| [AWS S3](../../reference/specs/bindings/s3.md) | | ✅ | Experimental |
| [AWS SNS](../../reference/specs/bindings/sns.md) | | ✅ | Experimental |
| [AWS SQS](../../reference/specs/bindings/sqs.md) | ✅ | ✅ | Experimental |
| [AWS Kinesis](../../reference/specs/bindings/kinesis.md) | ✅ | ✅ | Experimental |

### Google Cloud Platform (GCP)

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [GCP Cloud Pub/Sub](../../reference/specs/bindings/gcppubsub.md) | ✅ | ✅ | Experimental |
| [GCP Storage Bucket](../../reference/specs/bindings/gcpbucket.md) | | ✅ | Experimental |

### Microsoft Azure

| Name | Input<br>Binding | Output<br>Binding | Status |
|------|:----------------:|:-----------------:|--------|
| [Azure Blob Storage](../../reference/specs/bindings/blobstorage.md) | | ✅ | Experimental |
| [Azure EventHubs](../../reference/specs/bindings/eventhubs.md) | ✅ | ✅ | Experimental |
| [Azure CosmosDB](../../reference/specs/bindings/cosmosdb.md) | | ✅ | Experimental |
| [Azure Service Bus Queues](../../reference/specs/bindings/servicebusqueues.md) | ✅ | ✅ | Experimental |
| [Azure SignalR](../../reference/specs/bindings/signalr.md) | | ✅ | Experimental |
| [Azure Storage Queues](../../reference/specs/bindings/storagequeues.md) | ✅ | ✅ | Experimental |
| [Azure Event Grid](../../reference/specs/bindings/eventgrid.md) | ✅ | ✅ | Experimental |

## Input bindings

Input bindings are used to trigger your application when an event from an external resource has occurred.
An optional payload and metadata might be sent with the request.

In order to receive events from an input binding:

1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Listen on an HTTP endpoint for the incoming event, or use the gRPC proto library to get incoming events

> On startup, Dapr sends an `OPTIONS` request to the application for all defined input bindings and expects a status code other than `404 (NOT FOUND)` if the application wants to subscribe to the binding.

Read the [Create an event-driven app using input bindings](../../howto/trigger-app-with-input-binding) section to get started with input bindings.
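The two steps above can be sketched as a minimal, framework-agnostic request handler. The `myevent` binding name and the handler shape are assumptions for the example; a real app would wire this into its HTTP server.

```python
import json

BINDING_NAME = "myevent"  # must match the binding component's metadata name

def handle(method: str, path: str, body: bytes = b""):
    """Return (status, payload) for requests Dapr sends to the app."""
    if path != f"/{BINDING_NAME}":
        return 404, None
    if method == "OPTIONS":
        # Any non-404 status tells Dapr the app subscribes to this binding.
        return 200, None
    if method == "POST":
        event = json.loads(body or b"{}")
        # ... business logic reacting to the incoming event goes here ...
        return 200, event
    return 405, None
```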
## Output bindings

Output bindings allow users to invoke external resources.
An optional payload and metadata can be sent with the invocation request.

In order to invoke an output binding:

1. Define the component YAML that describes the type of binding and its metadata (connection info, etc.)
2. Use the HTTP endpoint or gRPC method to invoke the binding with an optional payload

Read the [Send events to external systems using Output Bindings](../../howto/send-events-with-output-bindings) section to get started with output bindings.
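Step 2 amounts to one HTTP request to the sidecar. The sketch below only constructs the URL and JSON body that the bindings endpoint expects; the binding name `myevent` and port 3500 are assumptions.

```python
import json

DAPR_PORT = 3500  # default Dapr HTTP port; adjust for your sidecar

def binding_request(name: str, data, operation: str = "create"):
    """Build the URL and JSON body for invoking an output binding."""
    url = f"http://localhost:{DAPR_PORT}/v1.0/bindings/{name}"
    body = json.dumps({"data": data, "operation": operation})
    return url, body

url, body = binding_request("myevent", {"message": "Hi!"})
print(url)
# POST this body to the URL with any HTTP client while Dapr is running.
```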
## Related Topics

* [Implementing a new binding](https://github.com/dapr/docs/tree/master/reference/specs/bindings)
* [Trigger a service from different resources with input bindings](../../howto/trigger-app-with-input-binding)
* [Invoke different resources using output bindings](../../howto/send-events-with-output-bindings)
|
@ -1,10 +1,11 @@
---
title: "Interface with external resource using bindings"
title: "How-To: Use bindings to interface with external resources"
linkTitle: "How-To: Bindings"
weight: 2000
description: "Invoke external systems with Dapr output bindings"
weight: 300
---

Using bindings, its possible to invoke external resources without tying in to special SDK or libraries.
Using bindings, it is possible to invoke external resources without tying into special SDKs or libraries.
For a complete sample showing output bindings, visit this [link](https://github.com/dapr/quickstarts/tree/master/bindings).

Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&t=1960) on how to use bi-directional output bindings.
@ -14,7 +15,7 @@ Watch this [video](https://www.youtube.com/watch?v=ysklxm81MTs&feature=youtu.be&

An output binding represents a resource that Dapr uses to invoke and send messages to.

For the purpose of this guide, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).
For the purpose of this guide, you'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).

Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
@ -36,24 +37,24 @@ spec:
      value: topic1
```

Here, we create a new binding component with the name of `myevent`.
Here, create a new binding component with the name of `myevent`.

Inside the `metadata` section, we configure Kafka related properties such as the topic to publish the message to and the broker.
Inside the `metadata` section, configure Kafka related properties such as the topic to publish the message to and the broker.

## 2. Send an event

All that's left now is to invoke the bindings endpoint on a running Dapr instance.

We can do so using HTTP:
You can do so using HTTP:

```bash
curl -X POST http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
```

As seen above, we invoked the `/binding` endpoint with the name of the binding to invoke, in our case its `myevent`.
As seen above, you invoked the `/bindings` endpoint with the name of the binding to invoke, in this case `myevent`.
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.

You'll also notice that there's an `operation` field that tells the binding what you need it to do.
You can check [here](../../reference/specs/bindings) which operations are supported for every output binding.
@ -1,7 +1,8 @@
---
title: "Trigger your app with input bindings"
title: "How-To: Trigger your application with input bindings"
linkTitle: "How-To: Triggers"
weight: 1000
description: "Use Dapr input bindings to trigger event driven applications"
weight: 200
---

Using bindings, your code can be triggered with incoming events from different resources, which can be anything: a queue, messaging pipeline, cloud service, filesystem, etc.
@ -1,15 +1,10 @@
---
title: "Dapr W3C Traces"
title: "W3C trace context for distributed tracing"
linkTitle: "W3C Traces"
weight: 1
weight: 2000
description: Using W3C tracing standard with Dapr
---

# W3C trace context for distributed tracing

- [Background](#background)
- [Trace scenarios](#scenarios)
- [W3C trace headers](#w3c-trace-headers)

## Introduction
Dapr uses W3C trace context for distributed tracing for both service invocation and pub/sub messaging. Dapr does most of the heavy lifting of generating and propagating the trace context information, and this can be sent to many different diagnostics tools for visualization and querying. There are only a very few cases where you, as a developer, need to either propagate a trace header or generate one.
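For those rare cases, the W3C `traceparent` header is straightforward to handle by hand. The sketch below follows the version-00 header layout from the W3C Trace Context specification; the helper names are our own, not a Dapr API.

```python
import re
import secrets

# version 00: "00-<32 hex trace-id>-<16 hex parent-id>-<2 hex flags>"
TRACEPARENT = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def parse_traceparent(header: str):
    """Split a traceparent header into (trace_id, parent_id, flags)."""
    m = TRACEPARENT.match(header)
    if not m:
        raise ValueError("not a valid version-00 traceparent header")
    return m.groups()

def new_traceparent(sampled: bool = True) -> str:
    """Generate a fresh version-00 traceparent header."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes  -> 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{parent_id}-{flags}"

print(parse_traceparent(new_traceparent()))
```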
@ -1,26 +1,10 @@
---
title: "Observability in Dapr"
title: "Observability"
linkTitle: "Observability"
weight: 60
description: >
  How to monitor your application with Dapr Observability
description: Dapr capabilities for tracing, logs and metrics
---

## Monitoring tools

The observability tools listed below are ones that have been tested to work with Dapr.

### Metrics

* [How-To: Set up Prometheus and Grafana](../../howto/setup-monitoring-tools/setup-prometheus-grafana.md)
* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md)

### Logs

* [How-To: Set up Fluentd, Elastic search and Kibana in Kubernetes](../../howto/setup-monitoring-tools/setup-fluentd-es-kibana.md)
* [How-To: Set up Azure Monitor](../../howto/setup-monitoring-tools/setup-azure-monitor.md)

### Distributed Tracing

* [How-To: Set up Zipkin](../../howto/diagnose-with-tracing/zipkin.md)
* [How-To: Set up Application Insights](../../howto/diagnose-with-tracing/azure-monitor.md)

This section includes guides for developers in the context of observability.
For a general overview of the observability concept in Dapr, see the **Concepts** section.
For operations guidance on observability, see the **Operations** section.
@ -1,11 +1,10 @@
---
title: "Dapr Logs"
title: "Logs"
linkTitle: "Logs"
weight: 1
weight: 3000
description: "Understand Dapr logging"
---

# Logs

Dapr produces structured logs to stdout, either as plain text or JSON formatted. By default, all Dapr processes (runtime and system services) write to console out in plain text. To enable JSON formatted logs, you need to add the `--log-as-json` command flag when running Dapr processes.

If you want to use a search engine such as Elastic Search or Azure Monitor to search the logs, it is recommended to use JSON-formatted logs, which the log collector and search engine can parse using the built-in JSON parser.
@ -1,11 +1,10 @@
---
title: "Dapr Metrics"
title: "Metrics"
linkTitle: "Metrics"
weight: 1
weight: 4000
description: "Observing Dapr metrics"
---

# Metrics

Dapr exposes a [Prometheus](https://prometheus.io/) metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving and to set up alerts for specific conditions.

## Configuration
@ -1,11 +1,10 @@
---
title: "Dapr Health"
linkTitle: "Health"
weight: 1
title: "Sidecar health"
linkTitle: "Sidecar health"
weight: 5000
description: Dapr sidecar health checks.
---

# Health

Dapr provides a way to determine its health using an HTTP /healthz endpoint.
With this endpoint, the Dapr process, or sidecar, can be probed for its health and hence determine its readiness and liveness. See the [health API](../../reference/api/health_api.md).
@ -26,7 +25,7 @@ The kubelet uses readiness probes to know when a container is ready to start acc

When integrating with Kubernetes, the Dapr sidecar is injected with a Kubernetes probe configuration telling it to use the Dapr healthz endpoint. This is done by the `Sidecar Injector` system service. The integration with the kubelet is shown in the diagram below.


<img src="/images/security-mTLS-dapr-system-services.png" width=600>

### How to configure a liveness probe in Kubernetes
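As a sketch of what such a probe can look like, the fragment below targets the sidecar's `/v1.0/healthz` endpoint on the Dapr HTTP port. The timing values are illustrative assumptions, not recommended settings.

```yaml
livenessProbe:
  httpGet:
    path: /v1.0/healthz
    port: 3500        # Dapr sidecar HTTP port
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```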
@ -1,23 +1,13 @@
---
title: "Dapr Health"
linkTitle: "Traces"
weight: 1
title: "Distributed tracing"
linkTitle: "Distributed tracing"
weight: 1000
description: "Use Dapr tracing to get visibility for distributed application"
---

# Distributed Tracing

Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces and metrics collection. OpenTelemetry supports various backends including [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), [SignalFX](https://www.signalfx.com/), [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io) and others.



## Contents

- [Distributed Tracing](#distributed-tracing)
- [Contents](#contents)
- [Tracing design](#tracing-design)
- [W3C Correlation ID](#w3c-correlation-id)
- [Configuration](#configuration)
- [References](#references)

<img src="/images/tracing.png" width=600>

## Tracing design
@ -1,59 +1,6 @@
---
title: "Publish & Subscribe Messaging in Dapr"
title: "Pub/Sub"
linkTitle: "Pub/Sub"
weight: 30
description: "Use publish and subscribe messaging in a distributed application"
---

The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows your microservices to communicate with each other purely by sending messages. In this system, the **producer** of a message sends it to a **topic**, with no knowledge of what service will receive the message. A message can even be sent if there's no consumer for it.

Similarly, a **consumer** will receive messages from a topic without knowledge of what producer sent it. This pattern is especially useful when you need to decouple microservices from one another.

Dapr provides a publish/subscribe API that provides at-least-once guarantees and integrates with various message broker implementations. These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub).

## Publish/Subscribe API

The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub_api.md).

## Behavior and guarantees

Dapr guarantees At-Least-Once semantics for message delivery.
That means that when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.

The burden of dealing with concepts like consumer groups and multiple instances inside consumer groups is all catered for by Dapr.

### App ID

Dapr has the concept of an `id`. This is specified in Kubernetes using the `dapr.io/app-id` annotation and with the `app-id` flag using the Dapr CLI. Dapr requires an ID to be assigned to every application.

When multiple instances of the same application ID subscribe to a topic, Dapr will make sure to deliver the message to only one instance. If two different applications with different IDs subscribe to a topic, at least one instance in each application receives a copy of the same message.

## Cloud events

Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope.

The following fields from the Cloud Events spec are implemented with Dapr:

* `id`
* `source`
* `specversion`
* `type`
* `datacontenttype` (Optional)

> Starting with Dapr v0.9, Dapr no longer wraps published content into CloudEvent if the published payload itself is already in CloudEvent format.

The following example shows an XML content in CloudEvent v1.0 serialized as JSON:

```json
{
    "specversion" : "1.0",
    "type" : "xml.message",
    "source" : "https://example.com/message",
    "subject" : "Test XML Message",
    "id" : "id-1234-5678-9101",
    "time" : "2020-09-23T06:23:21Z",
    "datacontenttype" : "text/xml",
    "data" : "<note><to>User1</to><from>user2</from><message>hi</message></note>"
}
```
@ -1,11 +1,10 @@
---
title: "Publish message to a topic with Dapr"
title: "How-To: Publish message to a topic"
linkTitle: "How-To: Publish"
weight: 4000
weight: 2000
description: "Send messages to subscribers through topics"
---

# Use Pub/Sub to publish a message to a topic

Pub/Sub is a common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
Using Pub/Sub, you can enable scenarios where event consumers are decoupled from event producers.
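Publishing is one HTTP request to the sidecar. The sketch below only constructs that request; the pub/sub component name, topic, and payload are assumptions, and the exact route layout should be checked against the pub/sub API reference for your Dapr version.

```python
import json

DAPR_PORT = 3500  # default Dapr HTTP port

def publish_request(pubsub_name: str, topic: str, payload):
    """Build the URL and JSON body for publishing a message to a topic."""
    url = f"http://localhost:{DAPR_PORT}/v1.0/publish/{pubsub_name}/{topic}"
    return url, json.dumps(payload)

url, body = publish_request("pubsub", "deathStarStatus", {"status": "completed"})
print(url)
# POST this body to the URL with any HTTP client while Dapr is running.
```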
@ -1,7 +1,8 @@
---
title: "Use Pub/Sub to consume messages from topics"
linkTitle: "How-To: Publish"
title: "How-To: Subscribe to a topic"
linkTitle: "How-To: Subscribe"
weight: 3000
description: "Consume messages from topics"
---

Pub/Sub is a very common pattern in a distributed system with many services that want to utilize decoupled, asynchronous messaging.
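To consume messages, an application typically tells Dapr which topics it wants at startup by serving a subscription list. The sketch below shows the idea only: the topic and route names are assumptions, and the exact response shape of the subscribe endpoint should be checked against the pub/sub API reference for your Dapr version.

```python
import json

def subscriptions():
    """Subscription list an app could return from its GET /dapr/subscribe endpoint."""
    return [
        {"topic": "deathStarStatus", "route": "dsstatus"},
    ]

# Dapr reads this JSON list on startup, then delivers each topic's
# messages as POSTs to the paired route on the application.
print(json.dumps(subscriptions()))
```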
@ -1,7 +1,8 @@
---
title: "Use Dapr PubSub with Multiple Namespaces"
linkTitle: "Multiple Namespaces"
weight: 1000
title: "Pub/Sub and namespaces"
linkTitle: "Pub/Sub and namespaces"
weight: 4000
description: "Use Dapr Pub/Sub with multiple namespaces"
---

In some scenarios, applications can be spread across namespaces and share a queue or topic via PubSub. In this case, the PubSub component must be provisioned on each namespace.
@ -0,0 +1,60 @@
---
title: "Pub/Sub overview"
linkTitle: "Pub/Sub overview"
weight: 1000
description: "Overview of the Dapr Pub/Sub building block"
---

The [publish/subscribe pattern](https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern) allows your microservices to communicate with each other purely by sending messages. In this system, the **producer** of a message sends it to a **topic**, with no knowledge of what service will receive the message. A message can even be sent if there's no consumer for it.

Similarly, a **consumer** will receive messages from a topic without knowledge of what producer sent it. This pattern is especially useful when you need to decouple microservices from one another.

Dapr provides a publish/subscribe API that provides at-least-once guarantees and integrates with various message broker implementations. These implementations are pluggable, and developed outside of the Dapr runtime in [components-contrib](https://github.com/dapr/components-contrib/tree/master/pubsub).

## Publish/Subscribe API

The API for Publish/Subscribe can be found in the [spec repo](../../reference/api/pubsub_api.md).

## Behavior and guarantees

Dapr guarantees At-Least-Once semantics for message delivery.
That means that when an application publishes a message to a topic using the Publish/Subscribe API, it can assume the message is delivered at least once to any subscriber when the response status code from that endpoint is `200`, or returns no error if using the gRPC client.

The burden of dealing with concepts like consumer groups and multiple instances inside consumer groups is all catered for by Dapr.

### App ID

Dapr has the concept of an `id`. This is specified in Kubernetes using the `dapr.io/app-id` annotation and with the `app-id` flag using the Dapr CLI. Dapr requires an ID to be assigned to every application.

When multiple instances of the same application ID subscribe to a topic, Dapr will make sure to deliver the message to only one instance. If two different applications with different IDs subscribe to a topic, at least one instance in each application receives a copy of the same message.

## Cloud events

Dapr follows the [CloudEvents 1.0 Spec](https://github.com/cloudevents/spec/tree/v1.0) and wraps any payload sent to a topic inside a Cloud Events envelope.

The following fields from the Cloud Events spec are implemented with Dapr:

* `id`
* `source`
* `specversion`
* `type`
* `datacontenttype` (Optional)

> Starting with Dapr v0.9, Dapr no longer wraps published content into CloudEvent if the published payload itself is already in CloudEvent format.

The following example shows an XML content in CloudEvent v1.0 serialized as JSON:

```json
{
    "specversion" : "1.0",
    "type" : "xml.message",
    "source" : "https://example.com/message",
    "subject" : "Test XML Message",
    "id" : "id-1234-5678-9101",
    "time" : "2020-09-23T06:23:21Z",
    "datacontenttype" : "text/xml",
    "data" : "<note><to>User1</to><from>user2</from><message>hi</message></note>"
}
```
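An envelope with the fields listed above can be assembled in a few lines. This is a sketch; the `type` and `source` values and the helper itself are our own illustrations, not a Dapr API.

```python
import json
import uuid
from datetime import datetime, timezone

def cloudevent(data, datacontenttype="application/json",
               type_="com.example.event", source="https://example.com/message"):
    """Wrap a payload in a minimal CloudEvents 1.0 envelope."""
    return {
        "specversion": "1.0",
        "type": type_,
        "source": source,
        "id": str(uuid.uuid4()),                                   # unique per event
        "time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "datacontenttype": datacontenttype,
        "data": data,
    }

print(json.dumps(cloudevent({"message": "hi"}), indent=2))
```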
@ -1,7 +1,8 @@
---
title: "Limit Application Pub/Sub Topics with Scopes"
linkTitle: "State Management"
weight: 2000
title: "How To: Scope Pub/Sub topics"
linkTitle: "How To: Scope topics"
weight: 5000
description: "Use scopes to limit Pub/Sub topics to specific applications"
---

[Namespaces or component scopes](../components-scopes/README.md) can be used to limit component access to particular applications. These application scopes, added to a component, limit component use to the applications with the specified IDs.
@ -1,55 +1,6 @@
---
title: "Secrets in Dapr"
linkTitle: "Secrets"
title: "Secrets stores"
linkTitle: "Secrets stores"
weight: 70
description: "Retrieve secrets securely from managed secret stores"
---

# Dapr secrets management

Almost all non-trivial applications need to _securely_ store secret data like API keys, database passwords, and more. By nature, these secrets should not be checked into the version control system, but they also need to be accessible to code running in production. This is generally a hard problem, but it's critical to get it right. Otherwise, critical production systems can be compromised.

Dapr's solution to this problem is the secrets API and secrets stores.

Here's how it works:

- Dapr is set up to use a **secret store** - a place to securely store secret data
- Application code uses the standard Dapr secrets API to retrieve secrets.

Some examples for secret stores include `Kubernetes`, `Hashicorp Vault`, `Azure KeyVault`. See [secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores) for the list of supported stores.

See [Setup secret stores](https://github.com/dapr/docs/tree/master/howto/setup-secret-store) for a HowTo guide for setting up and using secret stores.

## Referencing secret stores in Dapr components

Instead of including credentials directly within a Dapr component file, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.

For more information read [Referencing Secret Stores in Components](./component-secrets.md)

## Using secrets in your application

Application code can call the secrets building block API to retrieve secrets from Dapr supported secret stores that can be used in your code.
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=1818) for an example of how the secrets API can be used in your application.

For example, the diagram below shows an application requesting the secret called "mysecret" from a secret store called "vault" from a configured cloud secret store.

<img src="../../images/secrets_cloud_stores.png" width=800>

Applications can use the secrets API to access secrets from a Kubernetes secret store. In the example below, the application retrieves the same secret "mysecret" from a Kubernetes secret store.

<img src="../../images/secrets_kubernetes_store.png" width=800>

In Azure, Dapr can be configured to use Managed Identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application.

<img src="../../images/secrets_azure_aks_keyvault.png" width=800>

Notice that in all of the examples above the application code did not have to change to get the same secret. Dapr did all the heavy lifting here via the secrets building block API and using the secret components.

See [Access Application Secrets using the Secrets API](https://github.com/dapr/docs/tree/master/howto/get-secrets) for a How To guide to use secrets in your application.

For detailed API information read the [Secrets API](https://github.com/dapr/docs/blob/master/reference/api/secrets_api.md).
@ -1,7 +1,8 @@
---
title: "Get Secrets with the Dapr Secrets API"
linkTitle: "How-To: Secrets"
weight: 1000
title: "How To: Retrieve a secret"
linkTitle: "How To: Retrieve a secret"
weight: 2000
description: "Use the secret store building block to securely retrieve a secret"
---

It's common for applications to store sensitive information, such as connection strings, keys, and tokens used to authenticate with databases, services, and external systems, as secrets in a dedicated secret store.
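Reading such a secret through Dapr is a single GET against the sidecar's secrets endpoint. The sketch below only builds the URL; the store name `vault` and secret name `mysecret` are assumptions.

```python
DAPR_PORT = 3500  # default Dapr HTTP port

def secret_url(store_name: str, secret_name: str) -> str:
    """Build the URL for reading a secret through the Dapr secrets API."""
    return f"http://localhost:{DAPR_PORT}/v1.0/secrets/{store_name}/{secret_name}"

# GET this URL with any HTTP client while Dapr is running; the
# response body is a JSON map of secret keys to values.
print(secret_url("vault", "mysecret"))
```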
@ -0,0 +1,54 @@
---
title: "Secrets stores overview"
linkTitle: "Secrets stores overview"
weight: 1000
description: "Overview of Dapr secrets management building block"
---

Almost all non-trivial applications need to _securely_ store secret data like API keys, database passwords, and more. By nature, these secrets should not be checked into the version control system, but they also need to be accessible to code running in production. This is generally a hard problem, but it's critical to get it right. Otherwise, critical production systems can be compromised.

Dapr's solution to this problem is the secrets API and secrets stores.

Here's how it works:

- Dapr is set up to use a **secret store** - a place to securely store secret data
- Application code uses the standard Dapr secrets API to retrieve secrets.

Some examples for secret stores include `Kubernetes`, `Hashicorp Vault`, `Azure KeyVault`. See [secret stores](https://github.com/dapr/components-contrib/tree/master/secretstores) for the list of supported stores.

See [Setup secret stores](https://github.com/dapr/docs/tree/master/howto/setup-secret-store) for a HowTo guide for setting up and using secret stores.

## Referencing secret stores in Dapr components

Instead of including credentials directly within a Dapr component file, you can place the credentials within a Dapr supported secret store and reference the secret within the Dapr component. This is the preferred approach and a recommended best practice, especially in production environments.

For more information read [Referencing Secret Stores in Components](./component-secrets.md)

## Using secrets in your application

Application code can call the secrets building block API to retrieve secrets from Dapr supported secret stores that can be used in your code.
Watch this [video](https://www.youtube.com/watch?v=OtbYCBt9C34&t=1818) for an example of how the secrets API can be used in your application.

For example, the diagram below shows an application requesting the secret called "mysecret" from a secret store called "vault" from a configured cloud secret store.

<img src="/images/secrets-overview-cloud-stores.png" width=600>

Applications can use the secrets API to access secrets from a Kubernetes secret store. In the example below, the application retrieves the same secret "mysecret" from a Kubernetes secret store.

<img src="/images/secrets-overview-kubernetes-store.png" width=600>

In Azure, Dapr can be configured to use Managed Identities to authenticate with Azure Key Vault in order to retrieve secrets. In the example below, an Azure Kubernetes Service (AKS) cluster is configured to use managed identities. Then Dapr uses [pod identities](https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-identities) to retrieve secrets from Azure Key Vault on behalf of the application.

<img src="/images/secrets-overview-azure-aks-keyvault.png" width=600>

Notice that in all of the examples above the application code did not have to change to get the same secret. Dapr did all the heavy lifting here via the secrets building block API and using the secret components.

See [Access Application Secrets using the Secrets API](https://github.com/dapr/docs/tree/master/howto/get-secrets) for a How To guide to use secrets in your application.

For detailed API information read the [Secrets API](https://github.com/dapr/docs/blob/master/reference/api/secrets_api.md).
@ -1,114 +1,6 @@
---
title: "Dapr Service Invocation"
linkTitle: "Service Invocation"
title: "Service invocation"
linkTitle: "Service invocation"
weight: 10
description: Invoke methods of other services reliably and securely
---

# Service Invocation

Using service invocation, your application can discover and reliably and securely communicate with other applications using the standard protocols of [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/).

- [Overview](#overview)
- [Features](#features)
- [Next steps](#next-steps)

## Overview
In many environments with multiple services that need to communicate with each other, developers often ask themselves the following questions:

* How do I discover and invoke methods on different services?
* How do I call other services securely?
* How do I handle retries and transient errors?
* How do I use distributed tracing to see a call graph to diagnose issues in production?

Dapr allows you to overcome these challenges by providing an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing, metrics, error handling and more.

Dapr uses a sidecar, decentralized architecture. To invoke an application using Dapr, you use the `invoke` API on any Dapr instance. The sidecar programming model encourages each application to talk to its own instance of Dapr. The Dapr instances discover and communicate with one another.

The diagram below is an overview of how Dapr's service invocation works.



1. Service A makes an http/gRPC call meant for Service B. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) installed for the given hosting platform.
3. Dapr forwards the message to Service B's Dapr sidecar
   * Note: All calls between Dapr sidecars go over gRPC for performance. Only calls between services and Dapr sidecars are either HTTP or gRPC
4. Service B's Dapr sidecar forwards the request to the specified endpoint (or method) on Service B. Service B then runs its business logic code.
5. Service B sends a response to Service A. The response goes to Service B's sidecar.
6. Dapr forwards the response to Service A's Dapr sidecar.
7. Service A receives the response.

### Example
As an example for the above call sequence, suppose you have the applications as described in the [hello world sample](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a python app invokes a node.js app.

In such a scenario, the python app would be "Service A" above, and the Node.js app would be "Service B".

The diagram below shows sequence 1-7 again on a local machine showing the API call:



1. Suppose the Node.js app has a Dapr app ID of `nodeapp`, as in the sample. The python app invokes the Node.js app's `neworder` method by posting `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar.
2. Dapr discovers the Node.js app's location using the multicast DNS component which runs on your local machine.
3. Dapr forwards the request to the Node.js app's sidecar.
4. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, which, as described in the sample, is to log the incoming message and then persist the order ID into Redis (not shown in the diagram above).

Steps 5-7 are the same as above.
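The call in step 1 boils down to URL construction. The sketch below shows it, including the namespace-qualified form used for cross-namespace calls; the app ID `nodeapp` and method `neworder` come from the sample, while the helper itself is our own illustration.

```python
DAPR_PORT = 3500  # HTTP port of the caller's Dapr sidecar

def invoke_url(app_id: str, method: str, namespace: str = None) -> str:
    """Build the service-invocation URL for a target app and method."""
    target = f"{app_id}.{namespace}" if namespace else app_id
    return f"http://localhost:{DAPR_PORT}/v1.0/invoke/{target}/method/{method}"

print(invoke_url("nodeapp", "neworder"))
# POST to this URL reaches nodeapp's neworder endpoint via the sidecars.
print(invoke_url("nodeapp", "neworder", namespace="production"))
```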
|
||||
|
||||
## Features
|
||||
Service invocation provides several features to make it easy for you to call methods on remote applications.
|
||||
|
||||
- [Namespaces scoping](#namespaces-scoping)
|
||||
- [Retries](#Retries)
|
||||
- [Service-to-service security](#service-to-service-security)
|
||||
- [Service access security](#service-access-security)
|
||||
- [Observability: Tracing, logging and metrics](#observability)
|
||||
- [Pluggable service discovery](#pluggable-service-discovery)
|
||||
|
||||
|
||||
### Namespaces scoping
|
||||
Service invocation supports calls across namespaces. On all supported hosting platforms, Dapr app IDs conform to a valid FQDN format that includes the target namespace.
|
||||
|
||||
For example, the following string contains the app ID `nodeapp` in addition to `production`, the namespace the app runs in.
|
||||
|
||||
```
|
||||
localhost:3500/v1.0/invoke/nodeapp.production/method/neworder
|
||||
```
|
||||
|
||||
This is especially useful in cross-namespace calls in a Kubernetes cluster. Watch this [video](https://youtu.be/LYYV_jouEuA?t=495) for a demo on how to use namespaces with service invocation.
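A namespace-qualified app ID can be composed before building the invocation path; the helpers below are illustrative and not part of any Dapr SDK.

```python
def namespaced_app_id(app_id: str, namespace: str) -> str:
    """Qualify a Dapr app ID with the namespace it runs in."""
    return f"{app_id}.{namespace}"

def invoke_path(app_id: str, method: str) -> str:
    """Build the service invocation path for an app ID (plain or namespaced)."""
    return f"/v1.0/invoke/{app_id}/method/{method}"

# Calling nodeapp in the production namespace:
print(invoke_path(namespaced_app_id("nodeapp", "production"), "neworder"))
# /v1.0/invoke/nodeapp.production/method/neworder
```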
|
||||
|
||||
### Retries
|
||||
Service invocation performs automatic retries with backoff time periods in the event of call failures and transient errors.
|
||||
Errors that cause retries are:
|
||||
|
||||
* Network errors including endpoint unavailability and refused connections
|
||||
* Authentication errors due to a renewing certificate on the calling/callee Dapr sidecars
|
||||
|
||||
Per-call retries are performed with a backoff interval of 1 second, up to a threshold of 3 attempts.
|
||||
Connection establishment via gRPC to the target sidecar has a timeout of 5 seconds.
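The sidecar performs these retries for you, so application code does not need its own loop. Purely as an illustration of the policy (1-second interval, threshold of 3 attempts), an equivalent retry loop looks like this:

```python
import time

RETRY_INTERVAL_S = 1  # backoff interval between attempts
RETRY_THRESHOLD = 3   # maximum number of retries

def call_with_retries(call, retries: int = RETRY_THRESHOLD,
                      interval: float = RETRY_INTERVAL_S):
    """Retry a callable on connection errors, mirroring the sidecar's policy."""
    for attempt in range(retries + 1):
        try:
            return call()
        except ConnectionError:
            if attempt == retries:
                raise  # threshold reached, surface the error
            time.sleep(interval)
```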
|
||||
|
||||
### Service-to-service security
|
||||
All calls between Dapr applications can be made secure with mutual (mTLS) authentication on hosted platforms, including automatic certificate rollover, via the Dapr Sentry service. The diagram below shows this for self-hosted applications.
|
||||
|
||||
For more information read the [service-to-service security](../security#mtls-self-hosted) article.
|
||||
|
||||

|
||||
|
||||
### Service access security
|
||||
Applications can control which other applications are allowed to call them, and what they are authorized to do, via access policies. This enables you to restrict sensitive applications (for example, those holding personnel information) from being accessed by unauthorized applications. Combined with service-to-service secure communication, this provides for soft multi-tenancy deployments.
|
||||
|
||||
For more information read the [access control allow lists for service invocation](../configuration#access-control-allow-lists-for-service-invocation) article.
|
||||
|
||||
### Observability
|
||||
By default, all calls between applications are traced and metrics are gathered to provide insights and diagnostics for applications, which is especially important in production scenarios.
|
||||
|
||||
For more information read the [observability](../concepts/observability) article.
|
||||
|
||||
### Pluggable service discovery
|
||||
Dapr can run on any [hosting platform](../concepts/hosting). For the supported hosting platforms this means they have a [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) developed for them that enables service discovery. For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.
|
||||
|
||||
## Next steps
|
||||
|
||||
* Follow these guides:
|
||||
* [How-to: Get started with HTTP service invocation](../../howto/invoke-and-discover-services)
|
||||
* [How-to: Get started with Dapr and gRPC](../../howto/create-grpc-app)
|
||||
* Try out the [hello world sample](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or visit the samples in each of the [Dapr SDKs](https://github.com/dapr/docs#further-documentation)
|
||||
* Read the [service invocation API specification](../../reference/api/service_invocation_api.md)
|
||||
|
|
|
@ -1,12 +1,10 @@
|
|||
---
|
||||
title: "Invoke & Discover Services"
|
||||
linkTitle: "How-To: Invoke & Discover Services"
|
||||
description: "This guide will walk you through configuring and invoking services using dapr"
|
||||
weight: 200
|
||||
title: "How-To: Invoke and discover services"
|
||||
linkTitle: "How-To: Invoke and discover services"
|
||||
description: "Use service invocation in a distributed application"
|
||||
weight: 2000
|
||||
---
|
||||
|
||||
# Invoke remote services
|
||||
|
||||
This article describes how to deploy services, each with a unique application ID, so that other services can discover and call endpoints on them using the service invocation API.
|
||||
|
||||
## 1. Choose an ID for your service
|
||||
|
|
|
@ -1,8 +1,8 @@
|
|||
---
|
||||
title: "Service invocation"
|
||||
linkTitle: "Overview"
|
||||
description: "An overview of the features and capabilities of the service invocation building block"
|
||||
weight: 100
|
||||
title: "Service invocation overview"
|
||||
linkTitle: "Service invocation overview"
|
||||
weight: 1000
|
||||
description: "Overview of the service invocation building block"
|
||||
---
|
||||
|
||||
Using service invocation, your application can discover other applications and communicate with them reliably and securely using the standard [gRPC](https://grpc.io) or [HTTP](https://www.w3.org/Protocols/) protocols.
|
||||
|
@ -21,7 +21,7 @@ Dapr uses a sidecar, decentralized architecture. To invoke an application using
|
|||
|
||||
The diagram below is an overview of how Dapr's service invocation works.
|
||||
|
||||

|
||||
<img src="/images/service-invocation-overview.png" width=800>
|
||||
|
||||
1. Service A makes an HTTP/gRPC call meant for Service B. The call goes to the local Dapr sidecar.
|
||||
2. Dapr discovers Service B's location using the [name resolution component](https://github.com/dapr/components-contrib/tree/master/nameresolution) installed for the given hosting platform.
|
||||
|
@ -39,7 +39,7 @@ In such a scenario, the python app would be "Service A" above, and the Node.js a
|
|||
|
||||
The diagram below shows sequence 1-7 again on a local machine showing the API call:
|
||||
|
||||

|
||||
<img src="/images/service-invocation-overview-example.png" width=800>
|
||||
|
||||
1. Suppose the Node.js app has a Dapr app ID of `nodeapp`, as in the sample. The python app invokes the Node.js app's `neworder` method by posting `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar.
|
||||
2. Dapr discovers the Node.js app's location using the multicast DNS component, which runs on your local machine.
|
||||
|
@ -85,7 +85,7 @@ All calls between Dapr applications can be made secure with mutual (mTLS) authen
|
|||
|
||||
For more information read the [service-to-service security](../security#mtls-self-hosted) article.
|
||||
|
||||

|
||||
<img src="/images/security-mTLS-sentry-selfhosted.png" width=800>
|
||||
|
||||
### Service access security
|
||||
Applications can control which other applications are allowed to call them, and what they are authorized to do, via access policies. This enables you to restrict sensitive applications (for example, those holding personnel information) from being accessed by unauthorized applications. Combined with service-to-service secure communication, this provides for soft multi-tenancy deployments.
|
||||
|
|
|
@ -1,129 +1,6 @@
|
|||
---
|
||||
title: "Dapr State Management"
|
||||
linkTitle: "State Management"
|
||||
title: "State management"
|
||||
linkTitle: "State management"
|
||||
weight: 20
|
||||
description: "Use state management to create a stateful service"
|
||||
---
|
||||
|
||||
# State management
|
||||
|
||||
Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores](https://github.com/dapr/docs/blob/master/howto/setup-state-store/supported-state-stores.md), without adding or learning a third party SDK.
|
||||
|
||||
- [Overview](#overview)
|
||||
- [Features](#features)
|
||||
- [Next Steps](#next-steps)
|
||||
|
||||
## Overview
|
||||
When using state management, your application can leverage several features that would otherwise be complicated and error-prone to build yourself, such as:
|
||||
|
||||
- Distributed concurrency and data consistency
|
||||
- Retry policies
|
||||
- Bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations
|
||||
|
||||
See below for a diagram of state management's high level architecture.
|
||||
|
||||

|
||||
|
||||
## Features
|
||||
|
||||
- [State Management API](#state-management-api)
|
||||
- [State Store Behaviors](#state-store-behaviors)
|
||||
- [Concurrency](#concurrency)
|
||||
- [Consistency](#consistency)
|
||||
- [Retry Policies](#retry-policies)
|
||||
- [Bulk Operations](#bulk-operations)
|
||||
- [Querying State Store Directly](#querying-state-store-directly)
|
||||
|
||||
## State management API
|
||||
|
||||
Developers can use the state management API to retrieve, save and delete state values by providing keys.
|
||||
|
||||
Dapr data stores are components. Dapr ships with [Redis](https://redis.io) out of the box for local development in self-hosted mode. Dapr allows you to plug in other data stores as components, such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/).
|
||||
|
||||
Visit [State API](../../reference/api/state_api.md) for more information.
|
||||
|
||||
> **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance. This allows multiple Dapr instances to share the same state store.
|
||||
|
||||
## State store behaviors
|
||||
|
||||
Dapr allows developers to attach additional metadata to a state operation request, describing how the request is expected to be handled. For example, you can attach concurrency requirements, consistency requirements, and a retry policy to any state operation request.
|
||||
|
||||
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfill the requests.
|
||||
|
||||
Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilities.
|
||||
|
||||
The following table summarizes the capabilities of existing data store implementations.
|
||||
|
||||
Store | Strong consistent write | Strong consistent read | ETag|
|
||||
----|----|----|----
|
||||
Cosmos DB | Yes | Yes | Yes
|
||||
PostgreSQL | Yes | Yes | Yes
|
||||
Redis | Yes | Yes | Yes
|
||||
Redis (clustered)| Yes | No | Yes
|
||||
SQL Server | Yes | Yes | Yes
|
||||
|
||||
## Concurrency
|
||||
|
||||
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. When user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches the ETag in the state store.
|
||||
|
||||
Dapr chooses OCC because in many applications data update conflicts are rare, since clients are naturally partitioned by business contexts to operate on different data. However, if your application uses ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#retry-policies) to compensate for such conflicts when using ETags.
|
||||
|
||||
If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
|
||||
|
||||
> **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
|
||||
|
||||
## Consistency
|
||||
|
||||
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
|
||||
|
||||
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
|
||||
|
||||
## Retry policies
|
||||
|
||||
Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
|
||||
|
||||
## Bulk operations
|
||||
|
||||
Dapr supports two types of bulk operations: **bulk** and **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits the requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. By contrast, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
|
||||
|
||||
## Querying state store directly
|
||||
|
||||
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the underlying state store. For example, to get all state keys associated with an application ID "myApp" in Redis, use:
|
||||
|
||||
```bash
|
||||
KEYS "myApp*"
|
||||
```
|
||||
|
||||
> **NOTE:** See [How to query Redis store](../../howto/query-state-store/query-redis-store.md) for details on how to query a Redis store.
|
||||
>
|
||||
|
||||
### Querying actor state
|
||||
|
||||
If the data store supports SQL queries, you can query an actor's state using SQL queries. For example use:
|
||||
|
||||
```sql
|
||||
SELECT * FROM StateTable WHERE Id='<app-id>||<actor-type>||<actor-id>||<key>'
|
||||
```
|
||||
|
||||
You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks. For example, to calculate the average temperature of all thermometer actors, use:
|
||||
|
||||
```sql
|
||||
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
|
||||
```
|
||||
|
||||
> **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data, which are acceptable for read-only queries across multiple actors; however, writes should be done via the actor instances.
|
||||
|
||||
## Next steps
|
||||
|
||||
* Follow these guides:
|
||||
* [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md)
|
||||
* [How-to: query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md)
|
||||
* [How-to: Set up PostgreSQL store](../../howto/setup-state-store/setup-postgresql.md)
|
||||
* [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md)
|
||||
* [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md)
|
||||
* [How-to: Set up SQL Server store](../../howto/setup-state-store/setup-sqlserver.md)
|
||||
* [How-to: Query SQL Server store](../../howto/query-state-store/query-sqlserver-store.md)
|
||||
* Read the [state management API specification](../../reference/api/state_api.md)
|
||||
* Read the [actors API specification](../../reference/api/actors_api.md)
|
||||
|
|
|
@ -1,11 +1,10 @@
|
|||
---
|
||||
title: "Save and Get State Using Dapr"
|
||||
linkTitle: "How-To: Save/Get State"
|
||||
title: "How-To: Save and get state"
|
||||
linkTitle: "How-To: Save and get state"
|
||||
weight: 200
|
||||
description: "Use key value pairs to persist a state"
|
||||
---
|
||||
|
||||
# Create a stateful service
|
||||
|
||||
State management is one of the most common needs of any application: new or legacy, monolith or microservice.
|
||||
Dealing with different database libraries, testing them, and handling retries and faults can be time consuming and hard.
|
||||
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
---
|
||||
title: "Create a stateful replicated service"
|
||||
linkTitle: "How-To: Stateful Service"
|
||||
title: "How-To: Build a stateful service"
|
||||
linkTitle: "How-To: Build a stateful service"
|
||||
weight: 300
|
||||
description: "Use state management with a scaled, replicated service"
|
||||
---
|
||||
|
||||
In this HowTo we'll show you how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models.
|
||||
In this article you'll learn how you can create a stateful service which can be horizontally scaled, using opt-in concurrency and consistency models.
|
||||
|
||||
This frees developers from difficult state coordination, conflict resolution and failure handling, and allows them instead to consume these capabilities as APIs from Dapr.
|
||||
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: "Query Backend State Store"
|
||||
linkTitle: "How-To: Query State Store"
|
||||
title: "Work with backend state stores"
|
||||
linkTitle: "Work with backend state stores"
|
||||
weight: 400
|
||||
description: "Guides for working with specific backend state stores"
|
||||
---
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
title: "Query CosmosDB Store"
|
||||
linkTitle: "Query CosmosDB"
|
||||
title: "Azure Cosmos DB"
|
||||
linkTitle: "Azure Cosmos DB"
|
||||
weight: 1000
|
||||
description: "Use Azure Cosmos DB as a backend state store"
|
||||
---
|
||||
|
||||
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can interact directly with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
title: "Query Redis Store"
|
||||
linkTitle: "Query Redis"
|
||||
title: "Redis"
|
||||
linkTitle: "Redis"
|
||||
weight: 2000
|
||||
description: "Use Redis as a backend state store"
|
||||
---
|
||||
|
||||
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can interact directly with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
title: "Query SQL Store"
|
||||
linkTitle: "Query SQL"
|
||||
title: "SQL Server"
|
||||
linkTitle: "SQL Server"
|
||||
weight: 3000
|
||||
description: "Use SQL Server as a backend state store"
|
||||
---
|
||||
|
||||
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state_api.md)). You can interact directly with the underlying store to manipulate the state data, such as querying states, creating aggregated views, and making backups.
|
||||
|
|
|
@ -0,0 +1,128 @@
|
|||
---
|
||||
title: "State management overview"
|
||||
linkTitle: "State management overview"
|
||||
weight: 100
|
||||
description: "Overview of the state management building block"
|
||||
---
|
||||
|
||||
Dapr offers key/value storage APIs for state management. If a microservice uses state management, it can use these APIs to leverage any of the [supported state stores](https://github.com/dapr/docs/blob/master/howto/setup-state-store/supported-state-stores.md), without adding or learning a third party SDK.
|
||||
|
||||
- [Overview](#overview)
|
||||
- [Features](#features)
|
||||
- [Next Steps](#next-steps)
|
||||
|
||||
## Overview
|
||||
When using state management, your application can leverage several features that would otherwise be complicated and error-prone to build yourself, such as:
|
||||
|
||||
- Distributed concurrency and data consistency
|
||||
- Retry policies
|
||||
- Bulk [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations
|
||||
|
||||
See below for a diagram of state management's high level architecture.
|
||||
|
||||

|
||||
|
||||
## Features
|
||||
|
||||
- [State Management API](#state-management-api)
|
||||
- [State Store Behaviors](#state-store-behaviors)
|
||||
- [Concurrency](#concurrency)
|
||||
- [Consistency](#consistency)
|
||||
- [Retry Policies](#retry-policies)
|
||||
- [Bulk Operations](#bulk-operations)
|
||||
- [Querying State Store Directly](#querying-state-store-directly)
|
||||
|
||||
## State management API
|
||||
|
||||
Developers can use the state management API to retrieve, save and delete state values by providing keys.
|
||||
|
||||
Dapr data stores are components. Dapr ships with [Redis](https://redis.io) out of the box for local development in self-hosted mode. Dapr allows you to plug in other data stores as components, such as [Azure CosmosDB](https://azure.microsoft.com/services/cosmos-db/), [SQL Server](https://azure.microsoft.com/services/sql-database/), [AWS DynamoDB](https://aws.amazon.com/DynamoDB), [GCP Cloud Spanner](https://cloud.google.com/spanner) and [Cassandra](http://cassandra.apache.org/).
|
||||
|
||||
Visit [State API](../../reference/api/state_api.md) for more information.
|
||||
|
||||
> **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance. This allows multiple Dapr instances to share the same state store.
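A sketch of this prefixing: the `||` separator shown here matches the actor key format that appears later in this article, but treat the exact scheme as an implementation detail of the runtime rather than a contract.

```python
def prefixed_key(app_id: str, key: str) -> str:
    """Compose the key a Dapr instance writes to the shared state store."""
    return f"{app_id}||{key}"

# Two Dapr apps sharing one store never collide on the same logical key:
print(prefixed_key("myApp", "order1"))     # myApp||order1
print(prefixed_key("otherApp", "order1"))  # otherApp||order1
```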
|
||||
|
||||
## State store behaviors
|
||||
|
||||
Dapr allows developers to attach additional metadata to a state operation request, describing how the request is expected to be handled. For example, you can attach concurrency requirements, consistency requirements, and a retry policy to any state operation request.
|
||||
|
||||
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfill the requests.
|
||||
|
||||
Not all stores are created equal. To ensure portability of your application, you can query the capabilities of the store and make your code adaptive to different store capabilities.
|
||||
|
||||
The following table summarizes the capabilities of existing data store implementations.
|
||||
|
||||
Store | Strong consistent write | Strong consistent read | ETag|
|
||||
----|----|----|----
|
||||
Cosmos DB | Yes | Yes | Yes
|
||||
PostgreSQL | Yes | Yes | Yes
|
||||
Redis | Yes | Yes | Yes
|
||||
Redis (clustered)| Yes | No | Yes
|
||||
SQL Server | Yes | Yes | Yes
|
||||
|
||||
## Concurrency
|
||||
|
||||
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. When user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches the ETag in the state store.
|
||||
|
||||
Dapr chooses OCC because in many applications data update conflicts are rare, since clients are naturally partitioned by business contexts to operate on different data. However, if your application uses ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [retry policy](#retry-policies) to compensate for such conflicts when using ETags.
|
||||
|
||||
If your application omits ETags in writing requests, Dapr skips ETag checks while handling the requests. This essentially enables the **last-write-wins** pattern, compared to the **first-write-wins** pattern with ETags.
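The two patterns can be sketched with a small in-memory store; the class and its methods are invented for illustration and are not a Dapr API.

```python
import uuid

class EtagStore:
    """In-memory key/value store with optimistic concurrency via ETags."""

    def __init__(self):
        self._data = {}  # key -> (value, etag)

    def get(self, key):
        return self._data.get(key, (None, None))

    def set(self, key, value, etag=None):
        _, current = self.get(key)
        if etag is not None and etag != current:
            # First-write-wins: reject an update carrying a stale ETag.
            raise RuntimeError("ETag mismatch")
        # etag=None means no check at all: last-write-wins.
        self._data[key] = (value, uuid.uuid4().hex)

store = EtagStore()
store.set("order", "pending")
_, etag = store.get("order")
store.set("order", "shipped", etag=etag)  # succeeds: ETag matches
store.set("order", "archived")            # succeeds: no ETag, last-write-wins
```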
|
||||
|
||||
> **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
|
||||
|
||||
## Consistency
|
||||
|
||||
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
|
||||
|
||||
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
|
||||
|
||||
## Retry policies
|
||||
|
||||
Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
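The two patterns produce the following wait intervals; the function name below is illustrative, not a Dapr API.

```python
def retry_intervals(retry_interval: int, retry_pattern: str,
                    retry_threshold: int) -> list:
    """Wait time (in seconds) before each retry attempt under a policy."""
    if retry_pattern == "linear":
        return [retry_interval] * retry_threshold
    if retry_pattern == "exponential":
        # The interval doubles after each attempt.
        return [retry_interval * 2 ** i for i in range(retry_threshold)]
    raise ValueError(f"unknown retryPattern: {retry_pattern}")

print(retry_intervals(1, "linear", 3))       # [1, 1, 1]
print(retry_intervals(1, "exponential", 3))  # [1, 2, 4]
```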
|
||||
|
||||
## Bulk operations
|
||||
|
||||
Dapr supports two types of bulk operations: **bulk** and **multi**. You can group several requests of the same type into a bulk (or a batch). Dapr submits the requests in the bulk as individual requests to the underlying data store. In other words, bulk operations are not transactional. By contrast, you can group requests of different types into a multi-operation, which is handled as an atomic transaction.
|
||||
|
||||
## Querying state store directly
|
||||
|
||||
Dapr saves and retrieves state values without any transformation. You can query and aggregate state directly from the underlying state store. For example, to get all state keys associated with an application ID "myApp" in Redis, use:
|
||||
|
||||
```bash
|
||||
KEYS "myApp*"
|
||||
```
|
||||
|
||||
> **NOTE:** See [How to query Redis store](../../howto/query-state-store/query-redis-store.md) for details on how to query a Redis store.
|
||||
>
|
||||
|
||||
### Querying actor state
|
||||
|
||||
If the data store supports SQL queries, you can query an actor's state using SQL queries. For example use:
|
||||
|
||||
```sql
|
||||
SELECT * FROM StateTable WHERE Id='<app-id>||<actor-type>||<actor-id>||<key>'
|
||||
```
|
||||
|
||||
You can also perform aggregate queries across actor instances, avoiding the common turn-based concurrency limitations of actor frameworks. For example, to calculate the average temperature of all thermometer actors, use:
|
||||
|
||||
```sql
|
||||
SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||temperature'
|
||||
```
|
||||
|
||||
> **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data, which are acceptable for read-only queries across multiple actors; however, writes should be done via the actor instances.
|
||||
|
||||
## Next steps
|
||||
|
||||
* Follow these guides:
|
||||
* [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md)
|
||||
* [How-to: query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md)
|
||||
* [How-to: Set up PostgreSQL store](../../howto/setup-state-store/setup-postgresql.md)
|
||||
* [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md)
|
||||
* [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md)
|
||||
* [How-to: Set up SQL Server store](../../howto/setup-state-store/setup-sqlserver.md)
|
||||
* [How-to: Query SQL Server store](../../howto/query-state-store/query-sqlserver-store.md)
|
||||
* Read the [state management API specification](../../reference/api/state_api.md)
|
||||
* Read the [actors API specification](../../reference/api/actors_api.md)
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: "Dapr IDE Integrations"
|
||||
linkTitle: "IDEs"
|
||||
weight: 200
|
||||
title: "IDE support"
|
||||
linkTitle: "IDE support"
|
||||
weight: 30
|
||||
description: "Support for common Integrated Development Environments (IDEs)"
|
||||
---
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
title: "Configuring IntelliJ Community Edition for debugging with Dapr"
|
||||
title: "IntelliJ"
|
||||
linkTitle: "IntelliJ"
|
||||
weight: 1000
|
||||
description: "Configuring IntelliJ community edition for debugging with Dapr"
|
||||
---
|
||||
|
||||
When developing Dapr applications, you typically use the Dapr CLI to start your 'Daprized' service similar to this:
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
title: "Application development and debugging with Visual Studio Code"
|
||||
linkTitle: "VSCode Debugging"
|
||||
title: "VS Code"
|
||||
linkTitle: "VS Code"
|
||||
weight: 2000
|
||||
description: "Application development and debugging with Visual Studio Code"
|
||||
---
|
||||
|
||||
## Visual Studio Code Dapr extension
|
||||
|
|
|
@ -1,7 +1,8 @@
|
|||
---
|
||||
title: "Application development and debugging with Visual Studio Code rmeote containers"
|
||||
linkTitle: "VSCode Remote Containers"
|
||||
title: "VS Code remote containers"
|
||||
linkTitle: "VS Code remote containers"
|
||||
weight: 3000
|
||||
description: "Application development and debugging with Visual Studio Code remote containers"
|
||||
---
|
||||
|
||||
## Using remote containers for your application development
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: "SDKs & Integrations"
|
||||
title: "Integrations"
|
||||
linkTitle: "Integrations"
|
||||
weight: 50
|
||||
description: "Information on ways you can build and develop with Dapr using your favorite languages and frameworks."
|
||||
description: "Dapr integrations with other technologies"
|
||||
---
|
|
@ -0,0 +1,6 @@
|
|||
---
|
||||
title: "Middleware"
|
||||
linkTitle: "Middleware"
|
||||
weight: 50
|
||||
description: "Customize Dapr processing pipelines by adding middleware components"
|
||||
---
|
|
@ -1,5 +1,21 @@
|
|||
---
|
||||
title: "Dapr Developer SDKs"
|
||||
title: "SDKs"
|
||||
linkTitle: "SDKs"
|
||||
weight: 100
|
||||
---
|
||||
weight: 20
|
||||
description: "Use your favorite languages with Dapr"
|
||||
---
|
||||
|
||||
### .NET
|
||||
See the [.NET SDK repository](https://github.com/dapr/dotnet-sdk)
|
||||
|
||||
### Java
|
||||
See the [Java SDK repository](https://github.com/dapr/java-sdk)
|
||||
|
||||
### Go
|
||||
See the [Go SDK repository](https://github.com/dapr/go-sdk)
|
||||
|
||||
### Python
|
||||
See the [Python SDK repository](https://github.com/dapr/python-sdk)
|
||||
|
||||
### JavaScript
|
||||
See the [JavaScript SDK repository](https://github.com/dapr/js-sdk)
|
|
@ -1,15 +1,13 @@
|
|||
---
|
||||
title: "Getting Started with Dapr"
|
||||
linkTitle: "Getting Started"
|
||||
title: "Getting started with Dapr"
|
||||
linkTitle: "Getting started"
|
||||
weight: 20
|
||||
description: "Get up and running with Dapr to start Daprizing your apps"
|
||||
---
|
||||
|
||||
# Getting Started
|
||||
|
||||
Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, while embracing the diversity of languages and developer frameworks.
|
||||
|
||||
## Core Concepts
|
||||
## Core concepts
|
||||
|
||||
* **Building blocks** are a collection of components that implement distributed system capabilities, such as pub/sub, state management, resource bindings, and distributed tracing.
|
||||
|
||||
|
|