Fixes as per Microsoft Doc Authoring Pack ext.

This commit is contained in:
lolorol 2020-02-10 19:24:45 +08:00
parent 2beb3888b0
commit 35a7a4996c
77 changed files with 604 additions and 453 deletions

FAQ.md
View File

@ -5,38 +5,45 @@
- **[Developer language SDKs and frameworks](#developer-language-sdks-and-frameworks)**
## Networking and service meshes
### How does Dapr work with service meshes?
### Understanding how Dapr works with service meshes
Dapr is a distributed application runtime. Unlike a service mesh, which is focused on networking concerns, Dapr is focused on providing building blocks that make it easier for developers to build microservices. Dapr is developer-centric, whereas service meshes are infrastructure-centric.
Dapr can be used alongside any service mesh such as Istio and Linkerd. A service mesh is a dedicated network infrastructure layer designed to connect services to one another and provide insightful telemetry. A service mesh doesn't introduce new functionality to an application.
That is where Dapr comes in. Dapr is a language-agnostic programming model built on HTTP and gRPC that provides distributed system building blocks via open APIs for asynchronous pub-sub, stateful services, service discovery and invocation, actors and distributed tracing. Dapr introduces new functionality to an app's runtime. Both service meshes and Dapr run as sidecar services to your application, one providing network features and the other distributed application capabilities.
### Understanding how Dapr interoperates with the service mesh interface (SMI)
### How does Dapr interoperate with the service mesh interface (SMI)?
SMI is an abstraction layer that provides a common API surface across different service mesh technologies. Dapr can leverage any service mesh technology including SMI.
### What's the difference between Dapr and Istio?
### Differences between Dapr and Istio
Read [How does Dapr work with service meshes?](https://github.com/dapr/dapr/wiki/FAQ#how-does-dapr-work-with-service-meshes) Istio is an open source service mesh implementation that focuses on Layer 7 routing, traffic flow management and mTLS authentication between services. Istio uses a sidecar to intercept traffic going into and out of a container and enforces a set of network policies on them.
Istio is not a programming model and does not focus on application-level features such as state management, pub-sub, bindings, etc. That is where Dapr comes in.
## Actors
### How does Dapr relate to Orleans and Service Fabric Reliable Actors?
### Relationship between Dapr, Orleans and Service Fabric Reliable Actors
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and garbage collected after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments.
Dapr is also about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See the [Dapr overview](https://github.com/dapr/docs/blob/master/overview.md).
### How is Dapr different from an actor framework?
### Differences between Dapr and an actor framework
Virtual actor capabilities are one of the building blocks that Dapr provides in its runtime. Because Dapr is programming-language agnostic with an HTTP/gRPC API, its actors can be called from any language. This allows actors written in one language to invoke actors written in a different language.
Creating a new actor follows a local call like http://localhost:3500/v1.0/actors/<actorType>/<actorId>/meth...
for example, http://localhost:3500/v1.0/actors/myactor/50/method/getData to call the getData method on myactor with id=50.
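As a minimal sketch of such a call (assuming a sidecar listening on the default port 3500 and a running app that registers the `myactor` type):

```bash
# Hypothetical invocation of the getData method on actor type myactor, id 50
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/method/getData
```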
The Dapr runtime SDKs have language-specific actor frameworks. The .NET SDK, for example, has C# actors. All the SDKs have an actor framework that fits naturally with the language.
## Developer language SDKs and frameworks
### Does Dapr have any SDKs if I want to work with a particular programming language or framework?
To make using Dapr more natural for different languages, it includes language-specific SDKs for Go, Java, JavaScript, .NET and Python. These SDKs expose the functionality of the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the HTTP/gRPC API directly. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.

View File

@ -1,3 +1,3 @@
# documentation
Content for this file to be added

View File

@ -10,14 +10,16 @@ The levels outlined below are the same for both system components and the Dapr s
1. error
2. warning
3. info
3. debug
4. debug
error produces the minimum amount of output, while debug produces the maximum amount. The default level is info, which provides a balanced amount of information for operating Dapr in normal conditions.
To set the output level, you can use the `--log-level` command-line option. For example:
`./daprd --log-level error` <br>
`./placement --log-level debug`
```bash
./daprd --log-level error
./placement --log-level debug
```
This will start the Dapr runtime binary with a log level of `error` and the Dapr Actor Placement Service with a log level of `debug`.
@ -25,18 +27,22 @@ This will start the Dapr runtime binary with a log level of `error` and the Dapr
As outlined above, every Dapr binary takes a `--log-level` argument. For example, to launch the placement service with a log level of warning:
`./placement --log-level warning`
```bash
./placement --log-level warning
```
To set the log level when running your app with the Dapr CLI, pass the `log-level` param:
`dapr run --log-level warning node myapp.js`
```bash
dapr run --log-level warning node myapp.js
```
## Viewing Logs in Standalone Mode
When running Dapr with the Dapr CLI, both your app's log output and the runtime's output will be redirected to the same session, for easy debugging.
For example, this is the output when running Dapr:
```
```bash
dapr run node myapp.js
Starting Dapr with id Trackgreat-Lancer on port 56730
✅ You're up and running! Both Dapr and your app logs will appear here.
@ -67,7 +73,7 @@ This section shows you how to configure the log levels for Dapr system pods and
You can set the log level individually for every sidecar by providing the following annotation in your pod spec template:
```
```yml
annotations:
dapr.io/log-level: "debug"
```
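As a sketch, the same annotation can be applied to an existing deployment with `kubectl patch` (the deployment name `myapp` is an assumption):

```bash
# Patch the pod template annotations of a hypothetical deployment named myapp
kubectl patch deployment myapp -p \
  '{"spec":{"template":{"metadata":{"annotations":{"dapr.io/log-level":"debug"}}}}}'
```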
@ -78,16 +84,21 @@ When deploying Dapr to your cluster using Helm 3.x, you can individually set the
#### Setting the Operator log level
`helm install dapr dapr/dapr --namespace dapr-system --set dapr_operator.logLevel=error`
```bash
helm install dapr dapr/dapr --namespace dapr-system --set dapr_operator.logLevel=error
```
#### Setting the Placement Service log level
`helm install dapr dapr/dapr --namespace dapr-system --set dapr_placement.logLevel=error`
```bash
helm install dapr dapr/dapr --namespace dapr-system --set dapr_placement.logLevel=error
```
#### Setting the Sidecar Injector log level
`helm install dapr dapr/dapr --namespace dapr-system --set dapr_sidecar_injector.logLevel=error`
```bash
helm install dapr dapr/dapr --namespace dapr-system --set dapr_sidecar_injector.logLevel=error
```
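Since these are independent Helm values, they can also be combined into a single install, as in this sketch:

```bash
# Set all three system service log levels in one Helm install
helm install dapr dapr/dapr --namespace dapr-system \
  --set dapr_operator.logLevel=error \
  --set dapr_placement.logLevel=error \
  --set dapr_sidecar_injector.logLevel=error
```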
## Viewing Logs on Kubernetes
@ -99,15 +110,16 @@ This section will guide you on how to view logs for Dapr system components as we
When deployed in Kubernetes, the Dapr sidecar injector will inject a Dapr container named `daprd` into your annotated pod.
In order to view logs for the sidecar, simply find the pod in question by running `kubectl get pods`:
```
```bash
NAME READY STATUS RESTARTS AGE
addapp-74b57fb78c-67zm6 2/2 Running 0 40h
```
Next, get the logs for the Dapr sidecar container:
`kubectl logs addapp-74b57fb78c-67zm6 -c daprd`
```
```bash
kubectl logs addapp-74b57fb78c-67zm6 -c daprd
time="2019-09-04T02:52:27Z" level=info msg="starting Dapr Runtime -- version 0.3.0-alpha -- commit b6f2810-dirty"
time="2019-09-04T02:52:27Z" level=info msg="log level set to: info"
time="2019-09-04T02:52:27Z" level=info msg="kubernetes mode configured"
@ -132,7 +144,7 @@ Dapr runs the following system pods:
#### Viewing Operator Logs
```
```bash
kubectl logs -l app=dapr-operator -n dapr-system
time="2019-09-05T19:03:43Z" level=info msg="log level set to: info"
time="2019-09-05T19:03:43Z" level=info msg="starting Dapr Operator -- version 0.3.0-alpha -- commit b6f2810-dirty"
@ -143,7 +155,7 @@ time="2019-09-05T19:03:43Z" level=info msg="Dapr Operator is started"
#### Viewing Sidecar Injector Logs
```
```bash
kubectl logs -l app=dapr-sidecar-injector -n dapr-system
time="2019-09-03T21:01:12Z" level=info msg="log level set to: info"
time="2019-09-03T21:01:12Z" level=info msg="starting Dapr Sidecar Injector -- version 0.3.0-alpha -- commit b6f2810-dirty"
@ -154,16 +166,16 @@ time="2019-09-03T21:01:12Z" level=info msg="Sidecar injector is listening on :40
#### Viewing Placement Service Logs
```
```bash
kubectl logs -l app=dapr-placement -n dapr-system
time="2019-09-03T21:01:12Z" level=info msg="log level set to: info"
time="2019-09-03T21:01:12Z" level=info msg="starting Dapr Placement Service -- version 0.3.0-alpha -- commit b6f2810-dirty"
time="2019-09-03T21:01:12Z" level=info msg="placement Service started on port 50005"
time="2019-09-04T00:21:57Z" level=info msg="host added: 10.244.1.89"
```
*Note: If Dapr is installed to a different namespace than dapr-system, simply replace the namespace with the desired one in the command above.*
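For example, a sketch with a hypothetical namespace named `my-dapr`:

```bash
# Same query, pointed at a non-default namespace (namespace name is hypothetical)
kubectl logs -l app=dapr-placement -n my-dapr
```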
### Non Kubernetes Environments
The examples above are specific to Kubernetes, but the principle is the same for any kind of container-based environment: simply grab the container ID of the Dapr sidecar and/or system component (if applicable) and view its logs.
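As a minimal sketch for plain Docker (the filter value is an assumption; adjust it to however the sidecar container is named in your setup):

```bash
# Find the sidecar's container ID, then stream its logs
docker ps --filter "name=daprd"
docker logs -f <container-id>
```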

View File

@ -25,7 +25,9 @@ annotations:
To enable profiling in Standalone mode, pass the `enable-profiling` and the `profile-port` flags to the Dapr CLI:
Note that `profile-port` is not required; if it is omitted, Dapr will pick an available port.
`dapr run --enable-profiling true --profile-port 7777 python myapp.py`
```bash
dapr run --enable-profiling true --profile-port 7777 python myapp.py
```
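Once the app is running, you can sanity-check that the profiling server is up; a sketch, assuming the port 7777 chosen above:

```bash
# The pprof index page lists the available profiles
curl http://localhost:7777/debug/pprof/
```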
## Debug a profiling session
@ -35,7 +37,7 @@ After profiling is enabled, we can start a profiling session to investigate what
First, find the pod containing the Dapr runtime. If you don't already know the pod name, type `kubectl get pods`:
```
```bash
NAME READY STATUS RESTARTS AGE
divideapp-6dddf7dc74-6sq4l 2/2 Running 0 2d23h
```
@ -47,7 +49,7 @@ In this case, we want to start a session with the Dapr runtime inside of pod `di
We can do so by connecting to the pod via port forwarding:
```
```bash
kubectl port-forward divideapp-6dddf7dc74-6sq4 7777:7777
Forwarding from 127.0.0.1:7777 -> 7777
Forwarding from [::1]:7777 -> 7777
@ -57,27 +59,34 @@ Handling connection for 7777
Now that the connection has been established, we can use `pprof` to profile the Dapr runtime.
The following example will create a `cpu.pprof` file containing samples from a profile session that lasts 120 seconds:
`curl "http://localhost:7777/debug/pprof/profile?seconds=120" > cpu.pprof`
```bash
curl "http://localhost:7777/debug/pprof/profile?seconds=120" > cpu.pprof
```
Analyze the file with pprof:
```
```bash
pprof cpu.pprof
```
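For a quick non-interactive summary, the `-top` flag can be used instead of an interactive session (a sketch):

```bash
# Print the functions that consumed the most CPU during the session
go tool pprof -top cpu.pprof
```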
You can also save the results in a visualized way inside a PDF:
`go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/profile?seconds=120 > profile.pdf`
```bash
go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/profile?seconds=120 > profile.pdf
```
For memory related issues, you can profile the heap:
`go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/heap > heap.pdf`
```bash
go tool pprof --pdf your-binary-file http://localhost:7777/debug/pprof/heap > heap.pdf
```
![heap](../../images/heap.png)
Profiling allocated objects:
```
```bash
go tool pprof http://localhost:7777/debug/pprof/heap
> exit
@ -86,16 +95,17 @@ Saved profile in /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.i
To analyze, grab the file path above (it's a dynamic file path, so take care not to paste this one), and execute:
`go tool pprof -alloc_objects --pdf /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz > alloc-objects.pdf`
```bash
go tool pprof -alloc_objects --pdf /Users/myusername/pprof/pprof.daprd.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz > alloc-objects.pdf
```
![alloc](../../images/alloc.png)
### Standalone
For Standalone mode, locate the Dapr instance that you want to profile:
```
```bash
dapr list
APP ID DAPR PORT APP PORT COMMAND AGE CREATED PID
node-subscriber 3500 3000 node app.js 12s 2019-09-09 15:11.24 896

View File

@ -9,24 +9,23 @@ Since Dapr uses Open Census, you can configure various exporters for tracing and
The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view the traces.
### Setup
First, deploy Zipkin:
```
```bash
kubectl run zipkin --image openzipkin/zipkin --port 9411
```
Create a Kubernetes Service for the Zipkin pod:
```
```bash
kubectl expose deploy zipkin --type ClusterIP --port 9411
```
Next, create the following YAML file locally:
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
@ -42,13 +41,13 @@ spec:
Finally, deploy the Dapr configuration:
```
```bash
kubectl apply -f config.yaml
```
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:
```
```yml
annotations:
dapr.io/config: "zipkin"
```
@ -59,7 +58,7 @@ That's it! Your sidecar is now configured for use with Open Census and Zipkin.
To view traces, connect to the Zipkin service and open the UI:
```
```bash
kubectl port-forward svc/zipkin 9411:9411
```
@ -68,14 +67,14 @@ On your browser, go to ```http://localhost:9411``` and you should see the Zipkin
![zipkin](../../images/zipkin_ui.png)
## Distributed Tracing with Zipkin - Standalone Mode
The following steps show you how to configure Dapr to send distributed tracing data to Zipkin running as a container on your local machine, and how to view the traces.
For Standalone mode, create a Dapr configuration file locally and reference it with the Dapr CLI.
1. Create the following YAML file:
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
@ -91,13 +90,13 @@ spec:
2. Launch Zipkin using Docker:
```
```bash
docker run -d -p 9411:9411 openzipkin/zipkin
```
3. Launch Dapr with the `--config` param:
```
```bash
dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js
```
@ -105,7 +104,7 @@ dapr run --app-id mynode --app-port 3000 --config ./config.yaml node app.js
The `tracing` section under the `Configuration` spec contains the following properties:
```
```yml
tracing:
enabled: true
exporterType: zipkin

View File

@ -3,12 +3,13 @@
This directory contains Dapr concepts. The goal of these documents is to provide an understanding of the key concepts used in the Dapr documentation and the [Dapr spec](../reference/api/README.md).
## Building blocks
A [building block](./architecture/building_blocks.md) is an HTTP or gRPC API that can be called from user code and uses one or more Dapr components. Dapr consists of a set of building blocks, with extensibility to add new building blocks.
The diagram below shows how building blocks expose a public API that is called from your code, using components to implement the building block's capability.
![Dapr Building Blocks and Components](../images/concepts-building-blocks.png)
The following are the building blocks provided by Dapr:
* [**Resource Bindings**](./bindings/README.md)
@ -44,28 +45,34 @@ Dapr uses a modular design where functionality is delivered as a component. Each
A building block can use any combination of components. For example, the [actors](./actor/actor_overview.md) building block and the state management building block both use state components. As another example, the pub/sub building block uses [pub/sub](./publish-subscribe-messaging/README.md) components.
You can get a list of the components available in the current hosting environment using the `dapr components` CLI command.
The following are the component types provided by Dapr:
* Bindings
* Tracing exporters
* Middleware
* Pub/sub
* Secret store
* Service discovery
* State
## Configuration
Dapr [Configuration](./configuration/README.md) defines a policy that affects how any Dapr sidecar instance behaves, such as using [distributed tracing](distributed-tracing/README.md) or a [custom pipeline](middleware/middleware.md). Configuration can be applied to Dapr sidecar instances dynamically.
You can get a list of the configurations available in the current hosting environment using the `dapr configuration` CLI command.
## Middleware
Dapr allows custom [**middleware**](./middleware/middleware.md) to be plugged into the request processing pipeline. Middleware are components. Middleware can perform additional actions on a request, such as authentication, encryption and message transformation before the request is routed to the user code, or before the request is returned to the client.
## Secrets
In Dapr, a [**Secret**](./components/secrets.md) is any piece of private information that you want to protect from unwanted access. Dapr offers a simple secret API and integrates with secret stores such as Azure Key Vault and Kubernetes secret stores to store the secrets. Secret stores, used to store secrets, are Dapr components.
## Hosting environments
Dapr can run on multiple hosting platforms. The supported hosting platforms are:
* [**Self hosted**](../overview.md#running-dapr-on-a-local-developer-machine-in-standalone-mode). Dapr runs on a single machine either as a process or in a container. Used for local development or for running on a single machine.
* [**Kubernetes**](../overview.md#running-dapr-in-kubernetes-mode). Dapr runs on any Kubernetes cluster either from a cloud provider or on-premises.

View File

@ -2,9 +2,9 @@
Dapr runtime provides an actor implementation which is based on the virtual actor pattern. The Dapr actors API provides a single-threaded programming model leveraging the scalability and reliability guarantees provided by the underlying platform on which Dapr is running.
## What are Actors?
## Understanding actors
An actor is an isolated, independent unit of compute and state with single-threaded execution.
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) is a computational model for concurrent or distributed systems in which a large number of these actors can execute simultaneously and independently of each other. Actors can communicate with each other and they can create more actors.
@ -35,6 +35,7 @@ This virtual actor lifetime abstraction carries some caveats as a result of the
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. Using the actor ID again later causes a new actor object to be constructed. An actor's state outlives the object's lifetime, as state is stored in the configured state provider for the Dapr runtime.
## Distribution and failover
To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
@ -58,7 +59,7 @@ Note: The Dapr actor Placement service is only used for actor placement and ther
You can invoke an actor method by calling the HTTP/gRPC endpoint:
```
```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
```
@ -74,13 +75,14 @@ A single actor instance cannot process more than one request at a time. An actor
Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime will automatically time out on actor calls and throw an exception to the caller to interrupt possible deadlock situations.
![](../../images/actors_communication.png)
!["Actor concurrency"](../../images/actors_communication.png)
### Turn-based access
A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr Actors runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
The Dapr actors runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
![](../../images/actors_concurrency.png)
![""](../../images/actors_concurrency.png)

View File

@ -1,27 +1,29 @@
# Dapr Actors Runtime
Dapr Actors runtime provides the following capabilities:
## Actor State Management
Actors can save state reliably using the state management capability.
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The following state stores implement this interface:
- Redis
- MongoDB
You can interact with Dapr through HTTP/gRPC endpoints for state management.
### Save the Actor State
You can save the actor state for a given key, for an actor of type actorType with ID actorId, by calling:
```
```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key>
```
The value of the key is passed as the request body:
```
```json
{
  "key": "value"
}
@ -29,7 +31,7 @@ Value of the key is passed as request body.
If you want to save multiple items in a single transaction, you can call
```
```http
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state
```
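A sketch of such a transactional call with `curl`; the request body schema shown here (a list of `upsert`/`delete` operations) is an assumption and may differ across Dapr versions:

```bash
# Hypothetical multi-item transaction for actor type myactor, id 50
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/state \
  -H "Content-Type: application/json" \
  -d '[{"operation":"upsert","request":{"key":"key1","value":"value1"}},
       {"operation":"delete","request":{"key":"key2"}}]'
```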
@ -37,7 +39,7 @@ POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state
Once you have saved the actor state, you can retrieve the saved state by calling
```
```http
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key>
```
@ -45,13 +47,14 @@ GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key>
You can remove state permanently from the saved Actor state by calling
```
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/state/<key>
```
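Putting the single-key calls together, a sketch of a save/read/delete round trip (the key name `score` is hypothetical):

```bash
# Save, read back, then delete one key on actor myactor, id 50
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/state/score -d '{"key": "value"}'
curl http://localhost:3500/v1.0/actors/myactor/50/state/score
curl -X DELETE http://localhost:3500/v1.0/actors/myactor/50/state/score
```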
Refer to the [Dapr spec](../../reference/api/actors.md) for more details.
## Actor Timers and Reminders
Actors can schedule periodic work on themselves by registering either timers or reminders.
### Actor timers
@ -68,7 +71,7 @@ All timers are stopped when the actor is deactivated as part of garbage collecti
You can create a timer for an actor by making an HTTP/gRPC request to Dapr:
```
```http
POST,PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
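A sketch of registering a timer with `curl`; the exact body field names are assumptions:

```bash
# Hypothetical timer registration on actor myactor, id 50
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/timers/myTimer \
  -H "Content-Type: application/json" \
  -d '{"dueTime": "0h0m5s0ms", "period": "0h0m10s0ms", "callback": "myMethod"}'
```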
@ -76,7 +79,7 @@ You can provide the timer due time and callback in the request body.
You can remove the actor timer by calling
```
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
@ -84,11 +87,11 @@ Refer [dapr spec](../../reference/api/actors.md) for more details.
### Actor reminders
Reminders are a mechanism to trigger persistent callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr Actors runtime persists information about the actor's reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr:
```
```http
POST,PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
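Similarly, a sketch of registering a reminder (field names are assumptions):

```bash
# Hypothetical reminder registration on actor myactor, id 50
curl -X POST http://localhost:3500/v1.0/actors/myactor/50/reminders/myReminder \
  -H "Content-Type: application/json" \
  -d '{"dueTime": "0h0m5s0ms", "period": "0h1m0s0ms"}'
```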
@ -98,7 +101,7 @@ You can provide the reminder due time and period in the request body.
You can retrieve the actor reminder by calling
```
```http
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
@ -106,16 +109,8 @@ GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
You can remove the actor reminder by calling
```
```http
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Refer to the [Dapr spec](../../reference/api/actors.md) for more details.

View File

@ -1,3 +1,3 @@
# documentation
Content for this file to be added

View File

@ -1,15 +1,15 @@
# Building blocks
Dapr consists of a set of building blocks that can be called from any programming language through Dapr HTTP or gRPC APIs. These building blocks address common challenges in building resilient microservice applications, and they capture and share best practices and patterns that empower distributed application developers.
![Dapr building blocks](../../images/overview.png)
## Anatomy of a building block
Both Dapr spec and Dapr runtime are designed to be extensible
to include new building blocks. A building block comprises the following artifacts:
* Dapr spec API definition. A newly proposed building block shall have its API design incorporated into the Dapr spec.
* Components. A building block may reuse existing [Dapr components](../components), or introduce new components.
* Test suites. A new building block implementation should come with associated unit tests and end-to-end scenario tests.
* Documents and samples.

View File

@ -1,6 +1,6 @@
# Azure Blob Storage Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# Azure CosmosDB Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# AWS DynamoDB Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# Azure EventHubs Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# GCP Storage Bucket Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# GCP Cloud Pub/Sub Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# HTTP Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# Kafka Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# Kubernetes Events Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# MQTT Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# RabbitMQ Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# Redis Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# AWS S3 Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# Azure Service Bus Queues Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:

View File

@ -1,6 +1,6 @@
# AWS SNS Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -21,4 +21,4 @@ spec:
`region` is the AWS region.
`accessKey` is the AWS access key.
`secretKey` is the AWS secret key.
`topicArn` is the SNS topic ARN.

View File

@ -1,6 +1,6 @@
# AWS SQS Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -21,4 +21,4 @@ spec:
`region` is the AWS region.
`accessKey` is the AWS access key.
`secretKey` is the AWS secret key.
`queueName` is the SQS queue name.

View File

@ -1,6 +1,6 @@
# Twilio SMS Binding Spec
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -20,5 +20,5 @@ spec:
`toNumber` is the target number to send the SMS to.
`fromNumber` is the sender phone number.
`accountSid` is the twilio account SID.
`authToken` is the twilio auth token.
`accountSid` is the Twilio account SID.
`authToken` is the Twilio auth token.

View File

@ -13,26 +13,37 @@ Dapr can use any Redis instance - containerized, running on your local dev machi
We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster: `helm install redis stable/redis`.
1. Install Redis into your cluster:
```bash
helm install redis stable/redis
```
> Note that you need Redis version 5 or greater, which is what Dapr's pub/sub functionality requires. If you intend to use Redis only as a state store (and not for pub/sub), a lower version can also be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master:6379
```
4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your redis password in a text file called `password.txt`. Copy the password and delete the two files.
Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisPassword
value: lhDOkwTlp0
```
- **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the outputted password.
Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisPassword
value: lhDOkwTlp0
```
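To double-check the host and password before wiring them into Dapr, a sketch using a throwaway Redis client pod (the password is the example value above):

```bash
# Run redis-cli inside the cluster and ping the master
kubectl run redis-client --rm -it --image redis -- \
  redis-cli -h redis-master -a lhDOkwTlp0 ping
```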
### Creating an Azure Managed Redis Cache
@ -47,8 +58,6 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku
> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro), introduced in Redis 5.0, which isn't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence.
### Other ways to Create a Redis Database
- [AWS Redis](https://aws.amazon.com/redis/)
@ -98,7 +107,7 @@ spec:
### Kubernetes
```
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml

View File

@ -1,14 +1,16 @@
# Secrets
Components can reference secrets for the `spec.metadata` section.<br>
In order to reference a secret, you need to set the `auth.secretStore` field to specify the name of the secret store that holds the secrets.<br><br>
Components can reference secrets in the `spec.metadata` section.
In order to reference a secret, you need to set the `auth.secretStore` field to specify the name of the secret store that holds the secrets.
When running in Kubernetes, if the `auth.secretStore` is empty, the Kubernetes secret store is assumed.
## Examples
Using plain text:
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -24,7 +26,7 @@ spec:
Using a Kubernetes secret:
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -50,13 +52,13 @@ The following example shows you how to create a Kubernetes secret to hold the co
First, create the Kubernetes secret:
```
```bash
kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
```
Next, reference the secret in your binding:
```
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -72,7 +74,7 @@ spec:
Finally, apply the component to the Kubernetes cluster:
```
```bash
kubectl apply -f ./eventhubs.yaml
```
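Optionally, verify that the component object was created; a sketch using the Dapr component CRD:

```bash
# List Dapr component resources in the current namespace
kubectl get components.dapr.io
```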

View File

@ -6,4 +6,3 @@ A Dapr configuration configures:
* [distributed tracing](../distributed-tracing/README.md)
* [custom pipeline](../middleware/middleware.md)

View File

@ -1,10 +1,10 @@
# Distributed Tracing
Dapr uses OpenTelemetry (previously known as OpenCensus) for distributed traces and metrics collection. OpenTelemetry supports various backends including [Azure Monitor](https://azure.microsoft.com/en-us/services/monitor/), [Datadog](https://www.datadoghq.com), [Instana](https://www.instana.com), [Jaeger](https://www.jaegertracing.io/), [SignalFX](https://www.signalfx.com/), [Stackdriver](https://cloud.google.com/stackdriver), [Zipkin](https://zipkin.io) and others.
![Tracing](../../images/tracing.png)
# Tracing Design
## Tracing Design
Dapr adds an HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts all Dapr and application traffic and automatically injects correlation IDs to trace distributed transactions. This design has several benefits:
@ -13,11 +13,11 @@ Dapr adds a HTTP/gRPC middleware to the Dapr sidecar. The middleware intercepts
* Configurable and extensible. By leveraging OpenTelemetry, Dapr tracing can be configured to work with popular tracing backends, including custom backends a customer may have.
* OpenTelemetry exporters are defined as first-class Dapr components. You can define and enable multiple exporters at the same time.
# Correlation ID
## Correlation ID
For HTTP requests, Dapr injects an **X-Correlation-ID** header into requests. For gRPC calls, Dapr inserts an **X-Correlation-ID** as a field of a **header** metadata. When a request arrives without a correlation ID, Dapr creates a new one. Otherwise, it passes the correlation ID along the call chain.
# Configuration
## Configuration
Dapr tracing is configured by a configuration file (in local mode) or a Kubernetes configuration object (in Kubernetes mode). For example, the following configuration object enables distributed tracing:
@ -51,6 +51,8 @@ spec:
value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
# References
## References
* [How-To: Set up Distributed Tracing with Azure Monitor](../../howto/diagnose-with-tracing/azure-monitor.md)
* [How-To: Set Up Distributed Tracing with Zipkin](../../howto/diagnose-with-tracing/zipkin.md)

View File

@ -1,11 +1,12 @@
# Middleware
Dapr allows custom processing pipelines to be defined by chaining a series of custom middleware. A request goes through all defined middleware before it's routed to user code, and it goes through the defined middleware (in reverse order) before it's returned to the client, as shown in the following diagram.
![Middleware](../../images/middleware.png)
## Customize processing pipeline
When launched, a Dapr sidecar contructs a processing pipeline. The pipeline consists of a [tracing middleware](../distributed-tracing/README.md) (when tracing is enabled) and a CORS middleware by default. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), are added to the pipeline in the order as they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, direct messaging, bindings and others.
When launched, a Dapr sidecar constructs a processing pipeline. The pipeline consists of a [tracing middleware](../distributed-tracing/README.md) (when tracing is enabled) and a CORS middleware by default. Additional middleware, configured by a Dapr [configuration](../configuration/README.md), are added to the pipeline in the order as they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, direct messaging, bindings and others.
> **NOTE:** Dapr provides a **middleware.http.uppercase** middleware that doesn't need any configurations. The middleware changes all texts in a request body to uppercase. You can use it to test/verify if your custom pipeline is in place.
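As a sketch, enabling that test middleware means declaring it in a configuration's HTTP pipeline; the exact schema here is an assumption modeled on the configuration objects used elsewhere in these docs:

```bash
# Hypothetical configuration enabling the uppercase test middleware
cat <<EOF | kubectl apply -f -
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
spec:
  httpPipeline:
    handlers:
    - name: uppercase
      type: middleware.http.uppercase
EOF
```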
@ -31,22 +32,22 @@ Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement it's HTTP
```go
type Middleware interface {
  GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inboud logic, outbound logic, or both:
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func GetHandler(metadata Metadata) fasthttp.RequestHandler {
  return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
    return func(ctx *fasthttp.RequestCtx) {
      // inbound logic
      h(ctx) // call the downstream handler
      // outbound logic
    }
  }
}
```
Your code should be contributed to the https://github.com/dapr/components-contrib repository, under the */middleware* folder. Then, you'll need to submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/loader.go to register your middleware using the **RegisterMiddleware** method.

View File

@ -1,8 +1,8 @@
# Security
## Dapr-to-app communication
Dapr sidecar runs close to the application through **localhost**. Dapr assumes it runs in the same security domain as the application, so there is no authentication, authorization or encryption between a Dapr sidecar and the application.
## Dapr-to-Dapr communication
@ -10,15 +10,15 @@ Dapr is designed for inter-component communications within an application. Dapr
However, in a multi-tenant environment, a secured communication channel among Dapr sidecars becomes necessary. Supporting TLS and other authentication, authorization, and encryption methods is on the Dapr roadmap.
An alternative is to use service mesh technologies such as [Istio]( https://istio.io/) to provide secured communications among your application components. Dapr works well with popular service meshes.
By default, Dapr supports Cross-Origin Resource Sharing (CORS) from all origins. You can configure Dapr runtime to allow only specific origins.
## Network security
You can adopt common network security technologies such as network security groups (NSGs), demilitarized zones (DMZs) and firewalls to provide layers of protection over your networked resources.
For example, unless configured to talk to an external binding target, Dapr sidecars don't open connections to the Internet. And most binding implementations use outbound connections only. You can design your firewall rules to allow outbound connections only through designated ports.
## Bindings security
@ -26,13 +26,13 @@ Authentication with a binding target is configured by the bindings configurat
## State store security
Dapr doesn't transform the state data from applications. This means Dapr doesn't attempt to encrypt/decrypt state data. However, your application can adopt encryption/decryption methods of your choice, and the state data remains opaque to Dapr.
Dapr uses the configured authentication method to authenticate with the underlying state store. And many state store implementations use official client libraries that generally use secured communication channels with the servers.
## Management security
When deploying on Kubernetes, you can use regular [Kubernetes RBAC]( https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to control access to management activities.
When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.

View File

@ -4,7 +4,6 @@ Dapr-enabled apps can communicate with each other through well-known endpoints i
![Service Invocation Diagram](../../images/service-invocation.png)
1. Service A makes an HTTP/gRPC call meant for Service B. The call goes to the local Dapr sidecar.
2. Dapr discovers Service B's location and forwards the message to Service B's Dapr sidecar
3. Service B's Dapr sidecar forwards the request to Service B. Service B performs its corresponding business logic.
@ -17,13 +16,14 @@ As an example for all the above, suppose we have the collection of apps describe
In such a scenario, the python app would be "Service A" above, and the Node.js app would be "Service B".
The following describes items 1-6 again in the context of this sample:
1. Suppose the Node.js app has a Dapr app id of "nodeapp", as in the sample. The python app invokes the Node.js app's `neworder` method by posting to `http://localhost:3500/v1.0/invoke/nodeapp/method/neworder`, which first goes to the python app's local Dapr sidecar.
2. Dapr discovers the Node.js app's location and forwards it to the Node.js app's sidecar.
3. The Node.js app's sidecar forwards the request to the Node.js app. The Node.js app performs its business logic, which, as described in the sample, is to log the incoming message and then persist the order ID into Redis (not shown in the diagram above).
Steps 4-5 are the same as the list above.
For more information, see:
- The [Service Invocation Spec](../../reference/api/service_invocation.md)
- A [HowTo](../../howto/invoke-and-discover-services/README.md) on Service Invocation

View File

@ -1,4 +1,4 @@
# State management
Dapr makes it simple for you to store key/value data in a store of your choice.
@ -18,6 +18,7 @@ See the Dapr API specification for details on [state management API](../../refer
> **NOTE:** Dapr prefixes state keys with the ID of the current Dapr instance/sidecar. This allows multiple Dapr instances to share the same state store.
## State store behaviors
Dapr allows developers to attach additional metadata to a state operation request that describes how the request is expected to be handled. For example, you can attach a concurrency requirement, a consistency requirement, and a retry policy to any state operation request.
By default, your application should assume a data store is **eventually consistent** and uses a **last-write-wins** concurrency pattern. On the other hand, if you do attach metadata to your requests, Dapr passes the metadata along with the requests to the state store and expects the data store to fulfil the requests.
@ -34,6 +35,7 @@ Redis (clustered)| Yes | No | Yes
SQL Server | Yes | Yes | Yes
## Concurrency
Dapr supports optimistic concurrency control (OCC) using ETags. When a state is requested, Dapr always attaches an **ETag** property to the returned state. And when the user code tries to update or delete a state, it's expected to attach the ETag through the **If-Match** header. The write operation can succeed only when the provided ETag matches with the ETag in the database.
Dapr chooses OCC because in many applications, data update conflicts are rare because clients are naturally partitioned by business contexts to operate on different data. However, if your application chooses to use ETags, a request may get rejected because of mismatched ETags. It's recommended that you use a [Retry Policy](#Retry-Policies) to compensate for such conflicts when using ETags.
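A sketch of an ETag-guarded write; the URL shape and body schema are assumptions that vary across Dapr versions, but it shows the idea of echoing back the ETag obtained from a prior read:

```bash
# Hypothetical first-write-wins update using an ETag from a previous GET
curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json" \
  -d '[{"key": "planet", "value": "pluto", "etag": "<etag-from-previous-read>",
        "options": {"concurrency": "first-write"}}]'
```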
@ -43,12 +45,14 @@ If your application omits ETags in writing requests, Dapr skips ETag checks whil
> **NOTE:** For stores that don't natively support ETags, it's expected that the corresponding Dapr state store implementation simulates ETags and follows the Dapr state management API specification when handling states. Because Dapr state store implementations are technically clients to the underlying data store, such simulation should be straightforward using the concurrency control mechanisms provided by the store.
## Consistency
Dapr supports both **strong consistency** and **eventual consistency**, with eventual consistency as the default behavior.
When strong consistency is used, Dapr waits for all replicas (or designated quorums) to acknowledge before it acknowledges a write request. When eventual consistency is used, Dapr returns as soon as the write request is accepted by the underlying data store, even if this is a single replica.
## Retry policies
Dapr allows you to attach a retry policy to any write request. A policy is described by a **retryInterval**, a **retryPattern** and a **retryThreshold**. Dapr keeps retrying the request at the given interval up to the specified threshold. You can choose between a **linear** retry pattern or an **exponential** (backoff) pattern. When the **exponential** pattern is used, the retry interval is doubled after each attempt.
## Bulk operations
@ -63,7 +67,8 @@ KEYS "myApp*"
```
> **NOTE:** See [How to query Redis store](../../howto/query-state-store/query-redis-store.md) for details on how to query a Redis store.
### Querying actor state
If the data store supports SQL queries, you can query an actor's state using SQL queries. For example, use:
@ -81,6 +86,7 @@ SELECT AVG(value) FROM StateTable WHERE Id LIKE '<dapr-id>||<thermometer>||*||te
> **NOTE:** Direct queries of the state store are not governed by Dapr concurrency control, since you are not calling through the Dapr runtime. What you see are snapshots of committed data which are acceptable for read-only queries across multiple actors, however writes should be done via the actor instances.
## References
* [Spec: Dapr state management specification](../../reference/api/state.md)
* [Spec: Dapr actors specification](../../reference/api/actors.md)
* [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md)

View File

@ -1,6 +1,6 @@
# Getting started
Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge, embracing the diversity of languages and developer frameworks.
## Core concepts
@ -11,10 +11,11 @@ Dapr is a portable, event-driven runtime that makes it easy for enterprise devel
To learn more, see [Dapr Concepts](../concepts/README.md).
## Setup the development environment
Dapr can be run locally or in Kubernetes. We recommend starting with a local setup to explore the core Dapr concepts and familiarize yourself with the Dapr CLI. Follow these instructions to [configure Dapr locally](./environment-setup.md#prerequisites) or [in Kubernetes](./environment-setup.md#installing-dapr-on-a-kubernetes-cluster).
## Next steps
1. Once Dapr is installed, continue to the [Hello World sample](https://github.com/dapr/samples/tree/master/1.hello-world).
2. Explore additional [samples](https://github.com/dapr/samples) for more advanced concepts, such as service invocation, pub/sub, and state management.
3. Follow [How To guides](../howto) to understand how Dapr solves specific problems, such as creating a [rate limited app](../howto/control-concurrency).

View File

@ -12,11 +12,13 @@
This guide walks you through installing an Azure Kubernetes Service cluster. If you need more information, refer to [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure CLI](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough)
1. Log in to Azure
```bash
az login
```
2. Set the default subscription
```bash
az account set -s [your_subscription_id]
```
@ -28,6 +30,7 @@ az group create --name [your_resource_group] --location [region]
```
4. Create an Azure Kubernetes Service cluster
Use Kubernetes version 1.13.x or newer with `--kubernetes-version`
```bash
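# A sketch of the create command; the Kubernetes version and node count are
# assumptions, see the AKS walkthrough linked above for the full set of options
az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] \
  --kubernetes-version 1.14.8 --node-count 2 --generate-ssh-keys
```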

View File

@ -48,7 +48,7 @@ minikube addons enable ingress
In Minikube, the EXTERNAL-IP column in `kubectl get svc` shows `<pending>` for your service. In this case, you can run `minikube service [service_name]` to open your service without an external IP address.
```bash
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
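# If EXTERNAL-IP stays <pending>, open the service directly as described above
$ minikube service [service_name]
```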

View File

@ -4,14 +4,14 @@ Dapr can be run in either Standalone or Kubernetes modes. Running Dapr runtime i
## Contents
- [Prerequisites](#prerequisites)
- [Installing Dapr CLI](#installing-dapr-cli)
- [Installing Dapr in standalone mode](#installing-dapr-in-standalone-mode)
- [Installing Dapr on Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster)
## Prerequisites
- Install [Docker](https://docs.docker.com/install/)
> For Windows users, ensure that `Docker Desktop For Windows` uses Linux containers.
@ -50,18 +50,17 @@ Each release of Dapr CLI includes various OSes and architectures. These binary v
1. Download the [Dapr CLI](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location.
- For Linux/MacOS - `/usr/local/bin`
- For Windows, create a directory and add this to your System PATH. For example create a directory called `c:\dapr` and add this directory to your path, by editing your system environment variable.
## Installing Dapr in standalone mode
### Install Dapr runtime using the CLI
Install Dapr by running `dapr init` from a command prompt
> For Linux users, if you run your Docker commands with sudo, you need to use "**sudo dapr init**"
> For Windows users, make sure that you run the command terminal as administrator
> **Note:** See [Dapr CLI](https://github.com/dapr/cli) for details on the usage of Dapr CLI
```bash
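# Initializes Dapr in standalone mode on your local machine
dapr init
```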
@ -109,8 +108,8 @@ When setting up Kubernetes you can do this either via the Dapr CLI or Helm
### Setup Cluster
- [Setup Minikube Cluster](./cluster/setup-minikube.md)
- [Setup Azure Kubernetes Service Cluster](./cluster/setup-aks.md)
### Using the Dapr CLI
@ -157,7 +156,7 @@ helm repo update
3. Create `dapr-system` namespace on your kubernetes cluster
```bash
kubectl create namespace dapr-system
```
@ -183,6 +182,7 @@ dapr-sidecar-injector-8555576b6f-29cqm 1/1 Running 0 40s
#### Uninstall Dapr on Kubernetes
Helm 3
```bash
helm uninstall dapr -n dapr-system
```

View File

@ -2,34 +2,42 @@
Here you'll find a list of How To guides that walk you through accomplishing specific tasks.
## Service invocation
* [Invoke other services in your cluster or environment](./invoke-and-discover-services)
* [Create a gRPC enabled app, and invoke Dapr over gRPC](./create-grpc-app)
## State Management
* [Setup Dapr state store](./setup-state-store)
* [Create a service that performs stateful CRUD operations](./create-stateful-service)
* [Query the underlying state store](./query-state-store)
* [Create a stateful, replicated service with different consistency/concurrency levels](./stateful-replicated-service)
* [Control your app's throttling using rate limiting features](./control-concurrency)
## Pub/Sub
* [Setup Dapr Pub/Sub](./setup-pub-sub-message-broker)
* [Use Pub/Sub to publish messages to a given topic](./publish-topic)
* [Use Pub/Sub to consume events from a topic](./consume-topic)
## Resource Bindings
* [Trigger a service from different resources with input bindings](./trigger-app-with-input-binding)
* [Invoke different resources using output bindings](./send-events-with-output-bindings)
## Distributed Tracing
* [Diagnose your services with distributed tracing](./diagnose-with-tracing)
## Secrets
* [Configure secrets using Dapr secret stores](./setup-secret-store)
## Autoscaling
* [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda)
## Configuring Visual Studio Code
* [Debugging with daprd](./vscode-debugging-daprd)

View File

@ -94,8 +94,7 @@ spec:
dapr.io/config: "pipeline"
...
```
## Accessing the access token
Once everything is in place, whenever a client tries to invoke an API method through the Dapr sidecar (such as calling the *v1.0/invoke/* endpoint), it will be redirected to the authorization's consent page if an access token is not found. Otherwise, the access token is written to the **authHeaderName** header and made available to the app code.

View File

@ -11,15 +11,17 @@ To install KEDA, follow the [Deploying KEDA](https://keda.sh/deploy/) instructio
## Create KEDA enabled Dapr binding
For this example, we'll be using Kafka.
You can install Kafka in your cluster by using Helm:
```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install my-kafka incubator/kafka
```
Next, we'll create the Dapr Kafka binding for Kubernetes.
Paste the following in a file named kafka.yaml:
```yaml
@ -42,7 +44,7 @@ The following YAML defines a Kafka component that listens for the topic `myTopic
Deploy the binding to the cluster:
```bash
$ kubectl apply -f kafka.yaml
```
@ -76,7 +78,7 @@ spec:
Deploy the KEDA scaler to Kubernetes:
```bash
$ kubectl apply -f kafka_scaler.yaml
```

View File

@ -13,7 +13,7 @@ For this guide, we'll use Redis Streams, which is also installed by default on a
*Note: When running Dapr locally, a pub/sub component YAML will automatically be created if it doesn't exist in a directory called `components` in your current working directory.*
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -72,7 +72,7 @@ app.post('/topic1', (req, res) => {
In order to tell Dapr that a message was processed successfully, return a `200 OK` response:
```javascript
res.status(200).send()
```

View File

@ -42,6 +42,8 @@ spec:
To set max-concurrency with the Dapr CLI for running on your local dev machine, add the `max-concurrency` flag:
```bash
dapr run --max-concurrency 1 --app-port 5000 python ./app.py
```
The above examples will effectively turn your app into a single concurrent service.

View File

@ -46,7 +46,7 @@ This tells Dapr to communicate with your app via gRPC over port `5005`.
When running in standalone mode, use the `--protocol` flag to tell Dapr to use gRPC to talk to the app:
```bash
dapr run --protocol grpc --app-port 5005 node app.js
```
@ -67,32 +67,32 @@ import (
2. Create the client
```go
// Get the Dapr port and create a connection
daprPort := os.Getenv("DAPR_GRPC_PORT")
daprAddress := fmt.Sprintf("localhost:%s", daprPort)
conn, err := grpc.Dial(daprAddress, grpc.WithInsecure())
if err != nil {
fmt.Println(err)
}
defer conn.Close()
// Create the client
client := pb.NewDaprClient(conn)
```
3. Invoke the Save State method
```go
_, err = client.SaveState(context.Background(), &pb.SaveStateEnvelope{
Requests: []*pb.StateRequest{
&pb.StateRequest{
Key: "myKey",
Value: &any.Any{
Value: []byte("My State"),
},
},
},
})
```
Hooray!

View File

@ -1,8 +1,8 @@
# Set up distributed tracing with Azure Monitor
Dapr integrates with Azure Monitor through OpenTelemetry's default exporter along with a dedicated agent known as [Local Forwarder](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder).
## How to configure distributed tracing with Azure Monitor
The following steps will show you how to configure Dapr to send distributed tracing data to Azure Monitor.
@ -13,8 +13,8 @@ The following steps will show you how to configure Dapr to send distributed trac
### Setup the Local Forwarder
The Local Forwarder listens for traces emitted through OpenTelemetry's default exporter. Please follow the instructions [here](https://docs.microsoft.com/en-us/azure/azure-monitor/app/opencensus-local-forwarder) to set up the Local Forwarder as a local service or daemon.
> **NOTE**: At the time of writing, there's no official guidance on packaging and running the Local Forwarder as a Docker container. To use Local Forwarder on Kubernetes, you'll need to package the Local Forwarder as a Docker container and register a *ClusterIP* service. Then, you should set the service as the export target of the native exporter.
@ -63,9 +63,10 @@ kubectl apply -f native.yaml
3. When running in the local mode, you need to launch Dapr with the `--config` parameter:
```bash
dapr run --app-id mynode --app-port 3000 --config ./tracing.yaml node app.js
```
When running in Kubernetes mode, you need to add a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing, as shown in the following example:
```yaml
@ -85,7 +86,7 @@ spec:
dapr.io/config: "tracing"
```
That's it! There's no need to include any SDKs or instrument your application code in any way. Dapr automatically handles distributed tracing for you.
> **NOTE**: You can register multiple exporters at the same time, and tracing logs will be forwarded to all registered exporters.
@ -97,7 +98,7 @@ Generate some workloads. And after a few minutes, you should see tracing logs ap
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
enabled: true
expandParams: true
@ -111,4 +112,3 @@ Property | Type | Description
enabled | bool | Set tracing to be enabled or disabled
expandParams | bool | When true, expands parameters passed to HTTP endpoints
includeBody | bool | When true, includes the request body in the tracing event

View File

@ -6,7 +6,6 @@ Dapr integrates seamlessly with OpenTelemetry for telemetry and tracing. It is r
The following steps will show you how to configure Dapr to send distributed tracing data to Zipkin running as a container in your Kubernetes cluster, and how to view them.
### Setup
First, deploy Zipkin:
@ -38,7 +37,9 @@ spec:
- name: exporterAddress
value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
* tracing.yaml
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
@ -60,7 +61,7 @@ kubectl apply -f zipkin.yaml
In order to enable this configuration for your Dapr sidecar, add the following annotation to your pod spec template:
```yml
annotations:
dapr.io/config: "tracing"
```
@ -71,7 +72,7 @@ That's it! your sidecar is now configured for use with Open Census and Zipkin.
To view traces, connect to the Zipkin Service and open the UI:
```bash
kubectl port-forward svc/zipkin 9411:9411
```
@ -100,7 +101,9 @@ spec:
- name: exporterAddress
value: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans"
```
* tracing.yaml
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
@ -114,16 +117,16 @@ spec:
```
2. Copy *zipkin.yaml* to a *components* folder under the same folder where you run your application.
3. Launch Zipkin using Docker:
```bash
docker run -d -p 9411:9411 openzipkin/zipkin
```
4. Launch Dapr with the `--config` param:
```bash
dapr run --app-id mynode --app-port 3000 --config ./tracing.yaml node app.js
```
@ -131,7 +134,7 @@ dapr run --app-id mynode --app-port 3000 --config ./tracing.yaml node app.js
The `tracing` section under the `Configuration` spec contains the following properties:
```yml
tracing:
enabled: true
expandParams: true

View File

@ -12,14 +12,17 @@ For more info on service invocation, read the [conceptional documentation](../..
## 1. Choose an ID for your service
Dapr allows you to assign a global, unique ID for your app.
This ID encapsulates the state for your application, regardless of the number of instances it may have.
### Setup an ID using the Dapr CLI
In Standalone mode, set the `--app-id` flag:
```bash
dapr run --app-id cart --app-port 5000 python app.py
```
### Setup an ID using Kubernetes
@ -72,7 +75,7 @@ This Python app exposes an `add()` method via the `/add` endpoint.
### Invoke with curl
```bash
curl http://localhost:3500/v1.0/invoke/cart/method/add -X POST
```
@ -80,19 +83,18 @@ Since the add endpoint is a 'POST' method, we used `-X POST` in the curl command
To invoke a 'GET' endpoint:
```bash
curl http://localhost:3500/v1.0/invoke/cart/method/add
```
To invoke a 'DELETE' endpoint:
```bash
curl http://localhost:3500/v1.0/invoke/cart/method/add -X DELETE
```
Dapr puts any payload returned by the called service in the HTTP response's body.
## Overview
The example above showed you how to directly invoke a different service running in your environment, locally or in Kubernetes.

View File

@ -13,7 +13,7 @@ For this guide, we'll use Redis Streams, which is also installed by default on a
*Note: When running Dapr locally, a pub/sub component YAML will automatically be created if it doesn't exist in a directory called `components` in your current working directory.*
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -33,12 +33,12 @@ To deploy this into a Kubernetes cluster, fill in the `metadata` connection deta
To publish a message to a topic, invoke the following endpoint on a Dapr instance:
```bash
curl -X POST http://localhost:3500/v1.0/publish/deathStarStatus \
-H "Content-Type: application/json" \
-d '{
"status": "completed"
}'
-H "Content-Type: application/json" \
-d '{
"status": "completed"
}'
```
The above example publishes a JSON payload to a `deathStarStatus` topic.

View File

@ -24,25 +24,29 @@ The above query returns all documents with id containing "myapp-", which is the
To get the state data by a key "balance" for the application "myapp", use the query:
```sql
SELECT * FROM states WHERE states.id = 'myapp||balance'
```
Then, read the **value** field of the returned document.
To get the state version/ETag, use the command:
```sql
SELECT states._etag FROM states WHERE states.id = 'myapp||balance'
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
```sql
SELECT * FROM states WHERE CONTAINS(states.id, 'mypets||cat||leroy||')
```
And to get a specific actor state such as "food", use the command:
```sql
SELECT * FROM states WHERE states.id = 'mypets||cat||leroy||food'
```

View File

@ -2,7 +2,7 @@
Dapr doesn't transform state values while saving and retrieving states. Dapr requires all state store implementations to abide by a certain key format scheme (see the [Dapr state management spec](../../reference/api/state.md)). You can directly interact with the underlying store to manipulate the state data, such as querying states, creating aggregated views and making backups.
>**NOTE:** The following examples use the Redis CLI against a Redis store using the default Dapr state store implementation.
## 1. Connect to Redis
@ -11,6 +11,7 @@ You can use the official [redis-cli](https://redis.io/topics/rediscli) or any ot
```bash
docker run --rm -it --link <name of the Redis container> redis redis-cli -h <name of the Redis container>
```
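If your Redis store runs in Kubernetes instead (for example, installed through Helm as in the pub/sub guides), one way to connect is to port-forward to the service first. A sketch, assuming the conventional `redis-master` service name:

```bash
# Forward the Redis port to your machine, then connect with a local redis-cli
kubectl port-forward svc/redis-master 6379:6379 &
redis-cli -h localhost -p 6379
```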
## 2. List keys by Dapr id
To get all state keys associated with application "myapp", use the command:
@ -20,6 +21,7 @@ KEYS myapp*
```
The above command returns a list of existing keys, for example:
```bash
1) "myapp||balance"
2) "myapp||amount"
@ -36,9 +38,11 @@ HGET myapp||balance data
```
To get the state version/ETag, use the command:
```bash
HGET myapp||balance version
```
## 4. Read actor state
To get all the state keys associated with an actor with the instance ID "leroy" of actor type "cat" belonging to the application with ID "mypets", use the command:
@ -46,6 +50,7 @@ To get all the state keys associated with an actor with the instance ID "leroy"
```bash
KEYS mypets||cat||leroy*
```
And to get a specific actor state such as "food", use the command:
```bash

View File

@ -13,7 +13,7 @@ Create the following YAML file, named binding.yaml, and save this to the /compon
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -27,7 +27,8 @@ spec:
value: topic1
```
Here, we create a new binding component with the name of `myEvent`.
Inside the `metadata` section, we configure Kafka related properties such as the topic to publish the message to and the broker.
## 2. Send an event
@ -36,9 +37,10 @@ All that's left now is to invoke the bindings endpoint on a running Dapr instanc
We can do so using HTTP:
```bash
curl -X POST http://localhost:3500/v1.0/bindings/myEvent \
  -H "Content-Type: application/json" \
  -d '{ "data": { "message": "Hi!" } }'
```
As seen above, we invoked the `/binding` endpoint with the name of the binding to invoke, in our case `myEvent`.
The payload goes inside the `data` field.

View File

@ -7,7 +7,7 @@ Pub/Sub message buses are extensible and can be found in the [components-contrib
A pub/sub in Dapr is described using a `Component` file:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -36,7 +36,7 @@ You can make changes to this file the way you see fit, whether to change connect
Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components.
To setup a pub/sub in Kubernetes, use `kubectl` to apply the component file:
```bash
kubectl apply -f pubsub.yaml
```

View File

@ -8,7 +8,7 @@ The next step is to create a Dapr component for Azure Service Bus.
Create the following YAML file named `azuresb.yaml`:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -34,14 +34,13 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)
## Apply the configuration
### In Kubernetes
To apply the Azure Service Bus pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f azuresb.yaml
```

View File

@ -1,3 +1,3 @@
# Setup Kafka
Content for this file to be added

View File

@ -1,10 +1,10 @@
# Setup NATS
## Locally
You can run a NATS server locally using Docker:
```bash
docker run -d --name nats-main -p 4222:4222 -p 6222:6222 -p 8222:8222 nats
```
@ -14,7 +14,7 @@ You can then interact with the server using the client port: `localhost:4222`.
The easiest way to install NATS on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/nats):
```bash
helm install nats stable/nats
```
@ -31,7 +31,7 @@ The next step is to create a Dapr component for NATS.
Create the following YAML file named `nats.yaml`:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -45,18 +45,17 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)
## Apply the configuration
### In Kubernetes
To apply the NATS pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f nats.yaml
```
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use NATS, replace the contents of the `messagebus.yaml` file with the contents of `nats.yaml` above (don't change the filename).
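For example, assuming both files sit next to your app and the CLI-generated `components` directory, the replacement can be done with a simple copy (paths are assumptions):

```bash
# Overwrite the generated Redis component with the NATS definition, keeping the filename
cp nats.yaml ./components/messagebus.yaml
```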

View File

@ -1,10 +1,10 @@
# Setup RabbitMQ
## Locally
You can run a RabbitMQ server locally using Docker:
```bash
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
```
@ -14,7 +14,7 @@ You can then interact with the server using the client port: `localhost:5672`.
The easiest way to install RabbitMQ on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/rabbitmq):
```bash
helm install rabbitmq stable/rabbitmq
```
@ -33,7 +33,7 @@ The next step is to create a Dapr component for RabbitMQ.
Create the following YAML file named `rabbitmq.yaml`:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -65,18 +65,17 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/components/secrets.md)
## Apply the configuration
### In Kubernetes
To apply the RabbitMQ pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f rabbitmq.yaml
```
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use RabbitMQ, replace the contents of the `messagebus.yaml` file with the contents of `rabbitmq.yaml` above (don't change the filename).

View File

@ -16,17 +16,20 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku
1. Install Redis into your cluster: `helm install redis stable/redis`. Note that we're explicitly setting an image tag to get a version greater than 5, which is what Dapr's pub/sub functionality requires.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your redis.yaml file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master:6379
```
4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your redis password in a text file called `password.txt`. Copy the password and delete the two files.
- **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the outputted password.
Add this password as the `redisPassword` value in your redis.yaml file. For example:
```yaml
- name: redisPassword
value: "lhDOkwTlp0"
@ -39,8 +42,8 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku
## Configuration
To setup Redis, you need to create a component for `pubsub.redis`.
The following yaml files demonstrate how to define each. **Note:** yaml files below illustrate secret management in plain text. In a production-grade application, follow [secret management](../../concepts/components/secrets.md) instructions to securely manage your secrets.
### Configuring Redis Streams for Pub/Sub
@ -65,7 +68,7 @@ spec:
### Kubernetes
```bash
kubectl apply -f pubsub.yaml
```

View File

@ -4,10 +4,10 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre
## Contents
- [Prerequisites](#prerequisites)
- [Setup Kubernetes to use managed identities and Azure Key Vault](#setup-kubernetes-to-use-managed-identities-and-azure-key-vault)
- [Use Azure Key Vault secret store in Kubernetes mode with managed identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-with-managed-identities)
- [References](#references)
## Prerequisites
@ -75,7 +75,7 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre
spec:
type: 0
ResourceID: [your managed identity id]
ClientID: [your managed identity Client ID]
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
@ -231,10 +231,10 @@ In Kubernetes mode, you store the certificate for the service principal into the
2020-02-05 09:15:04.639435 I | redis: connected to redis-master:6379 (localAddr: 10.244.0.11:38294, remAddr: 10.0.74.145:6379)
...
```
## References
- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity)
- [Secrets Component](../../concepts/components/secrets.md)

View File

@ -1,21 +1,21 @@
# Secret Store for Azure Key Vault
This document shows how to enable the Azure Key Vault secret store using the [Dapr Secrets Component](../../concepts/components/secrets.md) for Standalone and Kubernetes mode. The Dapr secret store component uses a Service Principal with certificate authorization to authenticate to Key Vault.
> **Note:** Find the Managed Identity for Azure Key Vault instructions [here](azure-keyvault-managed-identity.md).
## Contents
- [Prerequisites](#prerequisites)
- [Create Azure Key Vault and Service principal](#create-azure-key-vault-and-service-principal)
- [Use Azure Key Vault secret store in Standalone mode](#use-azure-key-vault-secret-store-in-standalone-mode)
- [Use Azure Key Vault secret store in Kubernetes mode](#use-azure-key-vault-secret-store-in-kubernetes-mode)
- [References](#references)
## Prerequisites
- [Azure Subscription](https://azure.microsoft.com/en-us/free/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
## Create an Azure Key Vault and a service principal
@ -76,14 +76,15 @@ az ad sp show --id [service_principal_app_id]
az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get
```
Now that your service principal has access to your keyvault, you are ready to configure the secret store component to use secrets stored in your keyvault to access other components securely.
5. Download PFX cert from your Azure Keyvault
- **Using Azure Portal**
Go to your keyvault on Portal and download [certificate_name] pfx cert from certificate vault
- **Using Azure CLI**
For Linux/MacOS
```bash
# Download base64 encoded cert
az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --file [certificate_name].txt
@ -93,6 +94,7 @@ Now, your service principal has access to your keyvault, you are ready to confi
```
For Windows, on powershell
```powershell
# Decode base64 encoded cert to pfx cert
$EncodedText = Get-Content -Path [certificate_name].txt -Raw
@ -191,9 +193,9 @@ In Kubernetes mode, you store the certificate for the service principal into the
1. Create a kubernetes secret using the following command
- **[pfx_certificate_file_local_path]** is the path of the PFX cert file you downloaded from [Create Azure Key Vault and Service principal](#create-azure-key-vault-and-service-principal)
- **[your_k8s_spn_secret_name]** is the secret name in the Kubernetes secret store
```bash
kubectl create secret generic [your_k8s_spn_secret_name] --from-file=[pfx_certificate_file_local_path]
@ -269,7 +271,7 @@ auth:
Make sure that `secretstores.azure.keyvault` is loaded successfully in `daprd` sidecar log
Here is the nodeapp log of the [HelloWorld Kubernetes sample](https://github.com/dapr/samples/tree/master/2.hello-kubernetes). Note: use the nodeapp name for your deployed container instance.
```bash
$ kubectl logs nodeapp-f7b7576f4-4pjrj daprd
@ -288,6 +290,6 @@ time="2019-09-26T20:34:25Z" level=info msg="loaded component statestore (state.r
## References
- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
- [Secrets Component](../../concepts/components/secrets.md)

View File

@ -6,7 +6,7 @@ This document shows how to enable Hashicorp Vault secret store using [Dapr Secre
Setup Hashicorp Vault using the Vault documentation: https://www.vaultproject.io/docs/install/index.html.
For Kubernetes, you can use the Helm Chart: <https://github.com/hashicorp/vault-helm>.
## Create the Vault component
@ -38,7 +38,7 @@ spec:
To deploy in Kubernetes, save the file above to `vault.yaml` and then run:
```bash
kubectl apply -f vault.yaml
```

View File

@ -3,4 +3,4 @@
Kubernetes has a built-in secrets store which Dapr components can use to fetch secrets from.
No special configuration is needed to setup the Kubernetes secrets store.
Please refer to [this](../../concepts/components/secrets.md) document for information and examples on how to fetch secrets from Kubernetes using Dapr.
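As a quick illustration, a secret created with `kubectl` becomes available for Dapr components in the same namespace to reference. The names below are hypothetical:

```bash
# Create a secret; Dapr components can then reference it by name and key
kubectl create secret generic redis-secret --from-literal=redis-password=yourpassword
```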

View File

@ -1,4 +1,4 @@
# Supported Secret Stores
* [Kubernetes](./kubernetes.md)
* [Azure Key Vault](./azure-keyvault.md)

View File

@ -7,7 +7,7 @@ State stores are extensible and can be found in the [components-contrib repo](ht
A state store in Dapr is described using a `Component` file:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -36,23 +36,24 @@ You can make changes to this file the way you see fit, whether to change connect
Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components.
To setup a state store in Kubernetes, use `kubectl` to apply the component file:
```bash
kubectl apply -f statestore.yaml
```
## Reference
* [Setup Aerospike](./setup-aerospike.md)
* [Setup Azure CosmosDB](./setup-azure-cosmosdb.md)
* [Setup Azure SQL Server](./setup-sqlserver.md)
* [Setup Azure Table Storage](./setup-azure-tablestorage.md)
* [Setup Cassandra](./setup-cassandra.md)
* [Setup Couchbase](./setup-couchbase.md)
* [Setup etcd](./setup-etcd.md)
* [Setup Google Cloud Firestore (Datastore mode)](./setup-firestore.md)
* [Setup Hashicorp Consul](./setup-consul.md)
* [Setup Hazelcast](./setup-hazelcast.md)
* [Setup Memcached](./setup-memcached.md)
* [Setup MongoDB](./setup-mongodb.md)
* [Setup Redis](./setup-redis.md)
* [Setup Zookeeper](./setup-zookeeper.md)
* [Supported State Stores](./supported-state-stores.md)

View File

@ -1,19 +1,18 @@
# Supported state stores

| Name | CRUD | Transactional |
| ------------- | ------- | ------ |
| Aerospike | :white_check_mark: | :x: |
| Cassandra | :white_check_mark: | :x: |
| Couchbase | :white_check_mark: | :x: |
| etcd | :white_check_mark: | :x: |
| Hashicorp Consul | :white_check_mark: | :x: |
| Hazelcast | :white_check_mark: | :x: |
| Memcached | :white_check_mark: | :x: |
| MongoDB | :white_check_mark: | :white_check_mark: |
| Redis | :white_check_mark: | :white_check_mark: |
| Zookeeper | :white_check_mark: | :x: |
| Azure CosmosDB | :white_check_mark: | :x: |
| Azure SQL Server | :white_check_mark: | :white_check_mark: |
| Azure Table Storage | :white_check_mark: | :x: |
| Google Cloud Firestore | :white_check_mark: | :x: |

View File

@ -63,6 +63,7 @@ import json
response = requests.delete("http://localhost:3500/v1.0/state/key1", headers={"consistency":"strong"})
```
Last-write concurrency is the default concurrency mode if the `concurrency` option is not specified.
## First-write-wins and Last-write-wins
@ -72,11 +73,11 @@ First-Write-Wins is useful in situations where you have multiple instances of an
The default mode for Dapr is Last-write-wins.
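To request first-write-wins for a specific operation, pass the `concurrency` option with the request, following the same header pattern as the `consistency` example above. A sketch; the header name and value are assumptions based on that pattern:

```bash
# A sketch: ask for first-write-wins semantics on a delete
curl -X DELETE http://localhost:3500/v1.0/state/key1 -H "concurrency: first-write"
```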
Dapr uses version numbers to determine whether a specific key has been updated. Clients retain the version number when reading the data for a key and then use the version number during updates such as writes and deletes. If the version information has changed since the client retrieved it, an error is thrown, which then requires the client to perform a read again to get the latest version information and state.
Dapr utilizes ETags to determine the state's version number. ETags are returned from state requests in an `ETag` header.
Using ETags, clients know that a resource has been updated since the last time they checked by erroring when there's an ETag mismatch.
The following example shows how to get an ETag, and then use it to save state and then delete the state:
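A sketch of that flow with curl; the `etag` field and the `If-Match` header are assumptions to be checked against the state API spec:

```bash
# 1. Read the key; the current version is returned in the ETag response header
curl -v http://localhost:3500/v1.0/state/planet

# 2. Save the key, passing the retained ETag back with the request
curl -X POST http://localhost:3500/v1.0/state \
  -H "Content-Type: application/json" \
  -d '[{ "key": "planet", "value": "earth", "etag": "1" }]'

# 3. Delete the key, again guarding the write with the ETag
curl -X DELETE http://localhost:3500/v1.0/state/planet -H "If-Match: 1"
```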

View File

@ -10,7 +10,8 @@ Dapr bindings allow you to:
* Replace bindings without changing your code
* Focus on business logic and not the event resource implementation
For more info on bindings, read [this](../../concepts/bindings/README.md) link.
For a complete sample showing bindings, visit this [link](https://github.com/dapr/samples/tree/master/5.bindings).
## 1. Create a binding
@ -23,7 +24,7 @@ Create the following YAML file, named binding.yaml, and save this to the /compon
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -39,7 +40,8 @@ spec:
value: group1
```
Here, you create a new binding component with the name of `myEvent`.
Inside the `metadata` section, configure the Kafka related properties such as the topics to listen on, the brokers and more.
## 2. Listen for incoming events
@ -48,7 +50,7 @@ Now configure your application to receive incoming events. If using HTTP, you ne
*The following example shows how you would listen for the event in Node.js, but this is applicable to any programming language*
```javascript
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
@ -64,18 +66,19 @@ app.post('/myEvent', (req, res) => {
app.listen(port, () => console.log(`Kafka consumer app listening on port ${port}!`))
```
### ACK-ing an event
In order to tell Dapr that you successfully processed an event in your application, return a `200 OK` response from your HTTP handler.
```javascript
res.status(200).send()
```
### Rejecting an event
In order to tell Dapr that the event wasn't processed correctly in your application and schedule it for redelivery, return any response different from `200 OK`. For example, a `500 Error`.
```javascript
res.status(500).send()
```

View File

@ -6,7 +6,7 @@ When developing Dapr applications, you typically use the dapr cli to start your
dapr run --app-id nodeapp --app-port 3000 --port 3500 app.js
```
This will generate the components yaml files (if they don't exist) so that your service can interact with the local redis container. This is great when you are just getting started, but what if you want to attach a debugger to your service and step through the code? This is where you can use the dapr runtime (daprd) to help facilitate this.
>Note: The dapr runtime (daprd) will not automatically generate the components yaml files for Redis. These will need to be created manually or you will need to run the dapr cli (dapr) once in order to have them created automatically.
@ -157,7 +157,7 @@ Let's take a quick look at the args that are being passed to the daprd command.
## Wrapping up
Once you have made the required changes, you should be able to switch to the [debug](https://code.visualstudio.com/Docs/editor/debugging) view in VSCode and launch your daprized configurations by clicking the "play" button. If everything was configured correctly, you should see daprd launch in the VSCode terminal window and the [debugger](https://code.visualstudio.com/Docs/editor/debugging) should attach to your application (you should see its output in the debug window).
>Note: Since you didn't launch the service(s) using the **dapr** ***run*** cli command, but instead by running **daprd**, the **dapr** ***list*** command will not show a list of apps that are currently running.

View File

@ -71,7 +71,9 @@ For more information on the actor *Placement* service see [actor overview](/conc
In order to give your service an id and port known to Dapr and launch the Dapr sidecar container, you simply annotate your deployment like this.
```yml
annotations:
dapr.io/enabled: "true"
dapr.io/id: "nodeapp"
dapr.io/port: "3000"
```

View File

@ -1,4 +1,3 @@
# Dapr Quickstarts and Samples
- The **[Dapr Samples repo](https://github.com/dapr/samples/blob/master/README.md)** has samples for getting started building applications

View File

@ -27,6 +27,7 @@ Issues are used as the primary method for tracking anything to do with the Dapr
### Issue Types
There are 4 types of issues (each with their own corresponding [label](#labels)):
- Discussion: These are support or functionality inquiries that we want to have a record of for
future reference. Depending on the discussion, these can turn into "Spec Change" issues.
- Proposal: Used for items that propose new ideas or functionality that require
@ -41,6 +42,7 @@ from "Proposal" and "Discussion" items, or can be submitted individually dependi
The issue lifecycle is mainly driven by the core maintainers, but is good information for those
contributing to Dapr. All issue types follow the same general lifecycle. Differences are noted below.
1. Issue creation
2. Triage
- The maintainer in charge of triaging will apply the proper labels for the issue. This
@ -58,7 +60,7 @@ contributing to Helm. All issue types follow the same general lifecycle. Differe
## How to Contribute a Patch
1. Fork the repo, modify the specification to address the issue.
2. Submit a pull request.
The next section contains more information on the workflow followed for Pull Requests.
@ -80,7 +82,7 @@ Like any good open source project, we use Pull Requests (PRs) to track code chan
include at least a size label, a milestone, and `awaiting review` once all labels are applied.
See the [Labels section](#labels) for full details on the definitions of labels.
3. Assigning reviews
- All PRs require at least 2 review approvals before they can be merged.
4. Reviewing/Discussion
- All reviews will be completed using Github review tool.
- A "Comment" review should be used when there are questions about the spec that should be
@ -88,14 +90,14 @@ Like any good open source project, we use Pull Requests (PRs) to track code chan
- A "Changes Requested" review indicates that changes to the spec need to be made before they will be
merged.
- Reviewers should update labels as needed (such as `needs rebase`).
- When a review is approved, the reviewer should add `LGTM` as a comment.
- Final approval is required by a designated owner (see `.github/CODEOWNERS` file). Merging is blocked without this final approval. Approvers will factor reviews from all other reviewers into their approval process.
5. PR owner should try to be responsive to comments by answering questions or changing text. Once all comments have been addressed,
the PR is ready to be merged.
6. Merge or close
- A PR should stay open until a Final Approver (see above) has marked the PR approved.
- PRs can be closed by the author without merging
- PRs may be closed by a Final Approver if the decision is made that the PR is not going to be merged
## The Triager

View File

@ -11,7 +11,9 @@ Invokes a method on an actor.
#### HTTP Request
```http
POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/method/<method>
```
#### HTTP Response Codes
@ -34,14 +36,14 @@ method | The name of the method to invoke.
```shell
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/method/shoot \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
> Example of invoking a method on an actor with a payload:
```shell
curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
-d '{
"destination": "Hoth"
}'
@ -49,7 +51,6 @@ curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
> The response from the remote endpoint will be returned in the response body.
### Actor State Changes - Transaction
Persists the changes to the state for an actor as a multi-item transaction.
@ -78,20 +79,20 @@ actorId | The actor ID.
```shell
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/state \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
-d '[
{
"operation": "upsert",
"request": {
"key": "key1",
"value": "myData"
}
"request": {
"key": "key1",
"value": "myData"
}
},
{
"operation": "delete",
"request": {
"key": "key2"
}
"request": {
"key": "key2"
}
}
]'
```
@ -102,7 +103,9 @@ Gets the state for an actor using a specified key.
#### HTTP Request
```http
GET http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/state/<key>
```
#### HTTP Response Codes
@ -123,7 +126,7 @@ key | The key for the state value.
```shell
curl http://localhost:3500/v1.0/actors/stormtrooper/50/state/location \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
> The above command returns the state:
@ -140,7 +143,9 @@ Creates a persistent reminder for an actor.
#### HTTP Request
```http
POST,PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### HTTP Response Codes
@ -161,11 +166,11 @@ name | The name of the reminder to create.
```shell
curl http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
-d '{
"data": "someData",
"dueTime": "1m",
"period": "20s"
"data": "someData",
"dueTime": "1m",
"period": "20s"
}'
```
@ -175,7 +180,9 @@ Gets a reminder for an actor.
#### HTTP Request
```http
GET http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### HTTP Response Codes
@ -196,7 +203,7 @@ name | The name of the reminder to get.
```shell
curl http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
"Content-Type: application/json"
"Content-Type: application/json"
```
> The above command returns the reminder:
@ -215,7 +222,9 @@ Deletes a reminder for an actor.
#### HTTP Request
```http
DELETE http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
#### HTTP Response Codes
@ -236,7 +245,7 @@ name | The name of the reminder to delete.
```shell
curl http://localhost:3500/v1.0/actors/stormtrooper/50/reminders/checkRebels \
-X "Content-Type: application/json"
-X "Content-Type: application/json"
```
### Create Actor Timer
@ -245,7 +254,9 @@ Creates a timer for an actor.
#### HTTP Request
```http
POST,PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
#### HTTP Response Codes
@ -266,12 +277,12 @@ name | The name of the timer to create.
```shell
curl http://localhost:3500/v1.0/actors/stormtrooper/50/timers/checkRebels \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
-d '{
"data": "someData",
"dueTime": "1m",
"period": "20s",
"callback": "myEventHandler"
"data": "someData",
"dueTime": "1m",
"period": "20s",
"callback": "myEventHandler"
}'
```
@ -281,7 +292,9 @@ Deletes a timer for an actor.
#### HTTP Request
```http
DELETE http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/timers/<name>
```
#### HTTP Response Codes
@ -302,7 +315,7 @@ name | The name of the timer to delete.
```shell
curl http://localhost:3500/v1.0/actors/stormtrooper/50/timers/checkRebels \
-X "Content-Type: application/json"
-X "Content-Type: application/json"
```
## Specifications for Dapr calling to user service code
@ -313,7 +326,9 @@ Gets the registered actors in Dapr.
#### HTTP Request
```http
GET http://localhost:<appPort>/dapr/config
```
#### HTTP Response Codes
@ -332,7 +347,7 @@ appPort | The application port.
```shell
curl -X GET http://localhost:3000/dapr/config \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
> The above command returns the config (all fields are optional):
@ -353,7 +368,9 @@ Activates an actor.
#### HTTP Request
```http
POST http://localhost:<appPort>/actors/<actorType>/<actorId>
```
#### HTTP Response Codes
@ -375,7 +392,7 @@ actorId | The actor ID.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50 \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
### Deactivate Actor
@ -384,7 +401,9 @@ Deactivates an actor.
#### HTTP Request
```http
DELETE http://localhost:<appPort>/actors/<actorType>/<actorId>
```
#### HTTP Response Codes
@ -406,7 +425,7 @@ actorId | The actor ID.
```shell
curl -X DELETE http://localhost:3000/actors/stormtrooper/50 \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
### Invoke Actor method
@ -415,7 +434,9 @@ Invokes a method for an actor.
#### HTTP Request
```http
PUT http://localhost:<appPort>/actors/<actorType>/<actorId>/method/<methodName>
```
#### HTTP Response Codes
@ -438,7 +459,7 @@ methodName | The name of the method to invoke.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/performAction \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
### Invoke Reminder
@ -447,7 +468,9 @@ Invokes a reminder for an actor.
#### HTTP Request
```http
PUT http://localhost:<appPort>/actors/<actorType>/<actorId>/method/remind/<reminderName>
```
#### HTTP Response Codes
@ -470,7 +493,7 @@ reminderName | The name of the reminder to invoke.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/remind/checkRebels \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
### Invoke Timer
@ -479,7 +502,9 @@ Invokes a timer for an actor.
#### HTTP Request
```http
PUT http://localhost:<appPort>/actors/<actorType>/<actorId>/method/timer/<timerName>
```
#### HTTP Response Codes
@ -502,7 +527,7 @@ timerName | The name of the timer to invoke.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/timer/checkRebels \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
## Querying Actor State Externally
@ -518,8 +543,6 @@ The state namespace created by Dapr for actors is composed of the following item
* Key - A key for the specific state value. An actor ID can hold multiple state keys.
The following example shows how to construct a key for the state of an actor instance under the `myapp` Dapr ID namespace:
`myapp-cat-hobbit-food`
In the example above, we are getting the value for the state key `food`, for the actor ID `hobbit` with an actor type of `cat`, under the Dapr ID namespace of `myapp`.

View File

@ -7,7 +7,7 @@ Examples for bindings include ```Kafka```, ```Rabbit MQ```, ```Azure Event Hubs`
A Dapr Binding has the following structure:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@ -66,7 +66,9 @@ This endpoint lets you invoke an Dapr output binding.
### HTTP Request
```http
POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/bindings/<name>
```
### HTTP Response codes
@ -79,7 +81,7 @@ Code | Description
The bindings endpoint receives the following JSON payload:
```json
{
"data": "",
"metadata": [
@ -100,8 +102,8 @@ name | the name of the binding to invoke
```shell
curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
-H "Content-Type: application/json" \
-d '{
-H "Content-Type: application/json" \
-d '{
"data": {
"message": "Hi"
},

View File

@ -7,7 +7,10 @@ Dapr guarantees at least once semantics for this endpoint.
### HTTP Request
```http
POST http://localhost:<daprPort>/v1.0/publish/<topic>
```
### HTTP Response codes
Code | Description
@ -24,10 +27,10 @@ topic | the name of the topic
```shell
curl -X POST http://localhost:3500/v1.0/publish/deathStarStatus \
-H "Content-Type: application/json" \
-d '{
"status": "completed"
}'
-H "Content-Type: application/json" \
-d '{
"status": "completed"
}'
```
## Broadcast a message to a list of recipients
@ -37,7 +40,9 @@ The list of recipients may include the unique identifiers of other apps (used by
### HTTP Request
```http
POST http://localhost:<daprPort>/v1.0/publish/<topic>
```
### HTTP Response codes
@ -57,8 +62,8 @@ topic | the name of the topic
```shell
curl -X POST http://localhost:3500/v1.0/publish \
-H "Content-Type: application/json" \
-d '{
-H "Content-Type: application/json" \
-d '{
"topic": "DeathStarStatus",
"data": {
"status": "completed"
@@ -73,8 +78,8 @@ curl -X POST http://localhost:3500/v1.0/publish \
```shell
curl -X POST http://localhost:3500/v1.0/publish \
-H "Content-Type: application/json" \
-d '{
-H "Content-Type: application/json" \
-d '{
"topic": "DeathStarStatus",
"data": {
"status": "completed"
@@ -89,8 +94,8 @@ curl -X POST http://localhost:3500/v1.0/publish \
```shell
curl -X POST http://localhost:3500/v1.0/publish \
-H "Content-Type: application/json" \
-d '{
-H "Content-Type: application/json" \
-d '{
"eventName": "DeathStarStatus",
"data": {
"status": "completed"
@@ -109,7 +114,9 @@ In order to receive topic subscriptions, Dapr will invoke the following endpoint
### HTTP Request
```http
GET http://localhost:<appPort>/dapr/subscribe
```
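As a hedged sketch, you can exercise this endpoint the same way the runtime does; the app port (3000) and topic names below are assumptions for illustration:

```shell
# Call the app's subscription endpoint as the Dapr runtime would at startup.
curl http://localhost:3000/dapr/subscribe
# Expected response: a JSON array of topic names, for example:
# ["TopicA","TopicB"]
```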
### URL Parameters
@@ -135,7 +142,9 @@ The following example illustrates this point, considering a subscription for top
### HTTP Request
```http
POST http://localhost:<appPort>/TopicA
```
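During local testing you can simulate such a delivery yourself; a minimal sketch, assuming the app listens on port 3000 (note the envelope the runtime actually wraps around published data may differ by version):

```shell
# Simulate the Dapr runtime delivering a message on TopicA to the app.
curl -X POST http://localhost:3000/TopicA \
  -H "Content-Type: application/json" \
  -d '{ "status": "completed" }'
```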
### URL Parameters
@@ -9,7 +9,9 @@ This endpoint lets you invoke a method in another Dapr-enabled app.
### HTTP Request
```http
POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<appId>/method/<method-name>
```
### HTTP Response codes
@@ -28,7 +30,7 @@ method-name | the name of the method or url to invoke on the remote app
```shell
curl http://localhost:3500/v1.0/invoke/countService/method/sum \
-H "Content-Type: application/json"
-H "Content-Type: application/json"
```
### Sending data
@@ -37,8 +39,8 @@ You can send data by posting it as part of the request body.
```shell
curl http://localhost:3500/v1.0/invoke/countService/method/calculate \
-H "Content-Type: application/json"
-d '{ "arg1": 10, "arg2": 23, "operator": "+" }'
-H "Content-Type: application/json"
-d '{ "arg1": 10, "arg2": 23, "operator": "+" }'
```
> The response from the remote endpoint will be returned in the response body.
@@ -7,7 +7,7 @@ Examples for state stores include ```Redis```, ```Azure CosmosDB```, ```AWS Dyna
A Dapr State Store has the following structure:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@@ -29,25 +29,29 @@ Starting with 0.4.0 release, support for multiple state stores was added. This i
Please refer to https://github.com/dapr/dapr/blob/master/docs/decision_records/api/API-008-multi-state-store-api-design.md for more details.
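As a hedged sketch, with multiple stores configured each one is addressed by its component name in the URL; the store names `redis-store` and `cosmos-store` below are hypothetical:

```shell
# Save the same key into two hypothetical named state stores.
curl -X POST http://localhost:3500/v1.0/state/redis-store \
  -H "Content-Type: application/json" \
  -d '[{ "key": "planet", "value": { "name": "Tatooine" } }]'

curl -X POST http://localhost:3500/v1.0/state/cosmos-store \
  -H "Content-Type: application/json" \
  -d '[{ "key": "planet", "value": { "name": "Tatooine" } }]'
```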
## Key scheme
Dapr state stores are key/value stores. To ensure data compatibility, Dapr requires that these data stores follow a fixed key scheme. For general states, the key format is:
```bash
<Dapr id>||<state key>
```
For Actor states, the key format is:
```bash
<Dapr id>||<Actor type>||<Actor id>||<state key>
```
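For example, assuming a Dapr id of `myapp`, a state key `planet`, and a hypothetical actor of type `cat` with id `hobbit` storing the key `food`, the stored keys would be:

```bash
myapp||planet
myapp||cat||hobbit||food
```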
## Save state
This endpoint lets you save an array of state objects.
### HTTP Request
```http
POST http://localhost:<daprPort>/v1.0/state/<storename>
```
#### URL Parameters
@@ -57,6 +61,7 @@ daprPort | the Dapr port
storename | ```metadata.name``` field in the user-configured state store component yaml. Please refer to the Dapr State Store configuration structure mentioned above.
#### Request Body
A JSON array of state objects. Each state object comprises the following fields:
Field | Description
@@ -70,6 +75,7 @@ options | (optional) state operation options, see [state operation options](#opt
> **ETag format** Dapr runtime treats ETags as opaque strings. The exact ETag format is defined by the corresponding data store.
### HTTP Response
#### Response Codes
Code | Description
@@ -79,9 +85,11 @@ Code | Description
500 | Failed to save state
#### Response Body
None.
### Example
```shell
curl -X POST http://localhost:3500/v1.0/state/starwars \
-H "Content-Type: application/json" \
@@ -105,7 +113,10 @@ This endpoint lets you get the state for a specific key.
### HTTP Request
```http
GET http://localhost:<daprPort>/v1.0/state/<storename>/<key>
```
#### URL Parameters
@@ -150,13 +161,16 @@ curl http://localhost:3500/v1.0/state/starwars/planet \
"name": "Tatooine"
}
```
## Delete state
This endpoint lets you delete the state for a specific key.
### HTTP Request
```http
DELETE http://localhost:<daprPort>/v1.0/state/<storename>/<key>
```
#### URL Parameters
@@ -197,10 +211,12 @@ curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "ETag: xxxx
```
## Configuring State Store for Actors
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently, MongoDB and Redis implement the transactional state store interface.
To specify which state store to use for actors, set the value of the `actorStateStore` property to `true` in the metadata section of the state store component yaml file.
Example: The following component yaml configures Redis as the state store for Actors.
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
@@ -224,15 +240,18 @@ spec:
A Dapr-compatible state store shall use the following key scheme:
* *\<Dapr id>||\<state key>* key format for general states
* *\<Dapr id>||\<Actor type>||\<Actor id>||\<state key>* key format for Actor states.
### Concurrency
Dapr uses Optimistic Concurrency Control (OCC) with ETags. Dapr places the following requirements on state stores:
* A Dapr-compatible state store may support optimistic concurrency control using ETags. When an ETag is associated with a *save* or *delete* request, the store shall allow the update only if the attached ETag matches the latest ETag in the database.
* When an ETag is missing from a write request, the state store shall handle the request in a last-write-wins fashion. This allows optimizations for high-throughput write scenarios in which data contention is low or has no negative effects.
* A store shall **always** return ETags when returning states to callers.
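A minimal sketch of an ETag-guarded save, assuming a store named `starwars` and an ETag value of `1` obtained from a previous read (both illustrative):

```shell
# The save succeeds only if the stored ETag still matches "1" (first-write-wins).
curl -X POST http://localhost:3500/v1.0/state/starwars \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "planet",
          "value": { "name": "Tatooine" },
          "etag": "1",
          "options": { "concurrency": "first-write" }
        }
      ]'
```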
### Consistency
Dapr allows clients to attach a consistency hint to *get*, *set* and *delete* operations. Dapr supports two consistency levels: **strong** and **eventual**, which are defined as follows:
#### Eventual Consistency
@@ -240,7 +259,7 @@ Dapr allows clients to attach a consistency hint to *get*, *set* and *delete* op
Dapr assumes data stores are eventually consistent by default. A state store should:
* For read requests, the state store can return data from any of the replicas
* For write requests, the state store should asynchronously replicate updates to the configured quorum after acknowledging the update request.
#### Strong Consistency
@@ -250,6 +269,7 @@ When a strong consistency hint is attached, a state store should:
* For write/delete requests, the state store should synchronously replicate updated data to the configured quorum before completing the write request.
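A hedged sketch of requesting a strongly consistent read; the `consistency` query parameter on *get* is an assumption here (for writes, the hint is expressed via the `consistency` field in the request options, as the example further below shows):

```shell
# Hypothetical: ask for a strongly consistent read of the "planet" key.
curl "http://localhost:3500/v1.0/state/starwars/planet?consistency=strong"
```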
### Retry Policy
Dapr allows clients to attach retry policies to *set* and *delete* operations. A retry policy is described by three fields:
Field | Description
@@ -259,6 +279,7 @@ retryPattern | Retry pattern, can be either *linear* or *exponential*.
retryThreshold | Maximum number of retries.
### Example
The following is a sample *set* request with a complete operation option definition:
```shell
@@ -273,7 +294,7 @@ curl -X POST http://localhost:3500/v1.0/state/starwars \
"concurrency": "first-write",
"consistency": "strong",
"retryPolicy": {
"interval": 100,
"interval": 100,
"threshold" : 3,
"pattern": "exponential"
}
@@ -3,13 +3,15 @@
This doc describes the sequence of events that occur when `dapr run` is executed in self-hosted mode (formerly known as standalone mode). It uses [sample 1](https://github.com/dapr/samples/tree/master/1.hello-world) as an example.
Terminology used below:
- Dapr CLI - the Dapr command line tool. The binary name is dapr (dapr.exe on Windows)
- Dapr runtime - this runs alongside each app. The binary name is daprd (daprd.exe on Windows)
In self-hosted mode, running `dapr init` copies the Dapr runtime onto your machine and starts the placement service (used for actors) and Redis in containers. These must be present before running `dapr run`.
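As a quick hedged check that both are running (image names vary by CLI version and are illustrative):

```bash
# List the containers started by `dapr init`.
docker ps --format "table {{.Image}}\t{{.Status}}"
# Illustrative output:
# IMAGE         STATUS
# daprio/dapr   Up 2 minutes   <- placement service
# redis         Up 2 minutes
```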
What happens when `dapr run` is executed?
```bash
dapr run --app-id nodeapp --app-port 3000 --port 3500 node app.js
```
@@ -23,28 +25,29 @@ Then, the Dapr CLI will [launch](https://github.com/dapr/cli/blob/d585612185a4a5
If you inspect the command lines of the Dapr runtime and the app, observe that the Dapr runtime has these args:
```bash
daprd.exe --dapr-id nodeapp --dapr-http-port 3500 --dapr-grpc-port 43693 --log-level info --max-concurrency -1 --protocol http --app-port 3000 --placement-address localhost:50005
```
And the app has these args, which are not modified from what was passed in via the CLI:
```bash
node app.js
```
### Dapr runtime
The daprd process is started with the args above. The `--app-id` value, "nodeapp", which is the Dapr app id, is forwarded from the Dapr CLI into `daprd` as the `--dapr-id` arg. Similarly:
- the `--app-port` from the CLI, which represents the port on the app that `daprd` will use to communicate with it, is passed into the `--app-port` arg.
- the `--port` arg from the CLI, which represents the http port that daprd is listening on, is passed into the `--dapr-http-port` arg. (To specify the grpc port instead, use `--grpc-port`.) If it's not specified, it will be -1, which means the Dapr CLI will choose a random free port. Below it's 43693; yours will vary.
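Putting that together, a sketch that pins the grpc port explicitly (the value 50002 is arbitrary):

```bash
# Explicitly choose both the http and grpc ports instead of letting the CLI pick.
dapr run --app-id nodeapp --app-port 3000 --port 3500 --grpc-port 50002 node app.js
```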
### The app
The Dapr CLI doesn't change the command line for the app itself. Since `node app.js` was specified, this will be the command it runs with. However, two environment variables are added, which the app can use to determine the ports the Dapr runtime is listening on.
The two ports below match the ports passed to the Dapr runtime above:
```ini
DAPR_GRPC_PORT=43693
DAPR_HTTP_PORT=3500
```