Upmerge 1.6 into 1.7 (#2342)

* Adding dotnet code snippet for saving and retrieving multiple states (#2280)

Signed-off-by: Will Velida <willvelida@microsoft.com>

* Azure Key Vault RBAC role to get secrets updated (#2256)

* Azure Key Vault RBAC role to get secrets updated
Signed-off-by: Sergio Velmay <sergiovelmay@gmail.com>

* Remove pin to localized version in MS link
Signed-off-by: Sergio Velmay <sergiovelmay@gmail.com>

Co-authored-by: Mark Fussell <markfussell@gmail.com>
Co-authored-by: greenie-msft <56556602+greenie-msft@users.noreply.github.com>

* doc: fix typo premisis → premises (#2288)

Signed-off-by: Andrea Spadaccini <andrea.spadaccini@gmail.com>

* Remove dash in "dapr-init" (#2282)

I'm guessing it's a typo, because the Markdown file has the dash but the command itself doesn't.

Signed-off-by: Doug Davis <dug@microsoft.com>

Co-authored-by: Mark Fussell <markfussell@gmail.com>

* Minor updates to the golang pub-sub quickstart (#2283)

Minor updates to the golang pub-sub quickstart
- make the 'cd' command clearer by telling readers to start from the cloned repo
- add the sample output from the publisher

Signed-off-by: Doug Davis <dug@microsoft.com>

Co-authored-by: Mark Fussell <markfussell@gmail.com>
Co-authored-by: greenie-msft <56556602+greenie-msft@users.noreply.github.com>

* Fix Redis Sentinel links

Signed-off-by: Nick Greenfield <nigreenf@microsoft.com>

* fix: Getting started / Init Dapr locally has wrong quickstarts link (#2287)

* fix: Getting started / Init Dapr locally has wrong quickstarts link

Signed-off-by: Yauri <me@yauri-io>
Signed-off-by: y-io <me@yauri.one>

* Change Next Steps to "Use the Dapr API"

Signed-off-by: Nick Greenfield <nigreenf@microsoft.com>

* Update install-dapr-selfhost.md

Co-authored-by: y-io <me@yauri.one>
Co-authored-by: greenie-msft <56556602+greenie-msft@users.noreply.github.com>

* Fix typo

* Update docs for hotfix

Signed-off-by: Nick Greenfield <nigreenf@microsoft.com>

* update shortcode to latest dapr version

Signed-off-by: Nick Greenfield <nigreenf@microsoft.com>

* doc: update arguments-annotations (#2306)

Signed-off-by: Loong Dai <loong.dai@intel.com>

* Fix typo: microservices

Signed-off-by: Kyle Richelhoff <1271617+grepme@users.noreply.github.com>

* Add mandatory `dapr init` step to Self-Hosted Mode in Container Docs (#2303)

I followed the docs to ship Dapr together with an app inside the same container and came across the issue that `daprd` in the `ENTRYPOINT` was not available until I initialized Dapr in slim mode.

Signed-off-by: Robin-Manuel Thiel <robin-manuel@thiel1.de>

Co-authored-by: Mark Fussell <markfussell@gmail.com>

* capitalize 'quickstart'; add comma

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

* typo fix; quick grammar pass (#2316)

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

* typo fixes and quick grammar pass (#2317)

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

Co-authored-by: greenie-msft <56556602+greenie-msft@users.noreply.github.com>

* adding Read buffer size argument to fasthttp server and grpc server (#2263)

* adding Read buffer size argument to fasthttp server and grpc server dapr/dapr#3346

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* added introduction to dapr-http-read-buffer-size

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Modify the note description

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Adding read buffer size argument to fasthttp server and grpc server

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Added introduction to dapr-http-read-buffer-size

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Modify the note description

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Modify this description

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Add a detailed description

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Make the appropriate title changes

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

* Modify the introduction and corresponding instructions

Signed-off-by: Fang Yuan <wojiushifangyuanlove@gmail.com>

Co-authored-by: Mark Fussell <markfussell@gmail.com>

* fix typo in command for python and js

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

* remove info; clarify in other doc

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

* link to state store spec; grammar pass

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

* update per Mark

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>

* Add link to the dev docs

I may have missed it but I couldn't find a way to go from the main README
(https://github.com/dapr/dapr) to the dev docs.

Signed-off-by: Doug Davis <dug@microsoft.com>

Co-authored-by: Will Velida <willvelida@hotmail.co.uk>
Co-authored-by: SergioVelmay <42042494+SergioVelmay@users.noreply.github.com>
Co-authored-by: Mark Fussell <markfussell@gmail.com>
Co-authored-by: Andrea Spadaccini <andrea.spadaccini@gmail.com>
Co-authored-by: Doug Davis <duglin@users.noreply.github.com>
Co-authored-by: yauri-io <30032724+yauri-io@users.noreply.github.com>
Co-authored-by: y-io <me@yauri.one>
Co-authored-by: HMZ <hamzi1995@gmail.com>
Co-authored-by: Looong Dai <loong.dai@intel.com>
Co-authored-by: Kyle Richelhoff <1271617+grepme@users.noreply.github.com>
Co-authored-by: Robin-Manuel Thiel <robin-manuel@thiel1.de>
Co-authored-by: Hannah Hunter <hannahhunter@microsoft.com>
Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com>
Co-authored-by: loopFY <34152277+767829413@users.noreply.github.com>
Co-authored-by: Doug Davis <dug@microsoft.com>
greenie-msft 2022-04-07 15:36:53 -07:00 committed by GitHub
parent 6130fbb629
commit af4a649c4f
24 changed files with 508 additions and 361 deletions


@ -16,7 +16,7 @@ The Dapr project is focused on performance due to the inherent discussion of Dap
### What is the relationship between Dapr, Orleans and Service Fabric Reliable Actors?
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premisis environments.
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premises environments.
Moreover, Dapr is about more than just actors. It provides you with a set of best-practice building blocks to build into any microservices application. See [Dapr overview]({{< ref overview.md >}}).
### Differences between Dapr and an actor framework


@ -21,7 +21,7 @@ Today we are experiencing a wave of cloud adoption. Developers are comfortable w
This is where Dapr comes in. Dapr codifies the *best practices* for building microservice applications into open, independent APIs called building blocks, that enable you to build portable applications with the language and framework of your choice. Each building block is completely independent and you can use one, some, or all of them in your application.
Using Dapr you can incrementally migrate your existing applications to a microserivces architecture, thereby adopting cloud native patterns such scale out/in, resiliency and independent deployments.
Using Dapr you can incrementally migrate your existing applications to a microservices architecture, thereby adopting cloud native patterns such scale out/in, resiliency and independent deployments.
In addition, Dapr is platform agnostic, meaning you can run your applications locally, on any Kubernetes cluster, on virtual or physical machines and in other hosting environments that Dapr integrates with. This enables you to build microservice applications that can run on the cloud and edge.


@ -51,6 +51,7 @@ All contributions come through pull requests. To submit a proposed change, follo
1. Make sure there's an issue (bug or proposal) raised, which sets the expectations for the contribution you are about to make.
1. Fork the relevant repo and create a new branch
- Some Dapr repos support [Codespaces]({{< ref codespaces.md >}}) to provide an instant environment for you to build and test your changes.
- See the [Developing Dapr docs](https://github.com/dapr/dapr/blob/master/docs/development/developing-dapr.md) for more information about setting up a Dapr development environment.
1. Create your change
- Code changes require tests
1. Update relevant documentation for the change


@ -529,7 +529,36 @@ Try getting state again and note that no value is returned.
Below are code examples that leverage Dapr SDKs for saving and retrieving multiple states.
{{< tabs Java Python Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{< tabs Dotnet Java Python Javascript "HTTP API (Bash)" "HTTP API (PowerShell)">}}
{{% codetab %}}
```csharp
//dependencies
using Dapr.Client;
//code
namespace EventService
{
    class Program
    {
        static async Task Main(string[] args)
        {
            string DAPR_STORE_NAME = "statestore";
            //Using Dapr SDK to retrieve multiple states
            using var client = new DaprClientBuilder().Build();
            IReadOnlyList<BulkStateItem> multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1);
        }
    }
}
```
Navigate to the directory containing the above code, then run the following command to launch a Dapr sidecar and run the application:
```bash
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
```
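The snippet above shows only the retrieval half. As a hedged sketch (an illustration, not part of the original example; the keys and values are made up), the same `DaprClient` can first save the two states with `SaveStateAsync` and then read them back in bulk:
```csharp
//dependencies
using Dapr.Client;
//code
namespace EventService
{
    class Program
    {
        static async Task Main(string[] args)
        {
            string DAPR_STORE_NAME = "statestore";
            using var client = new DaprClientBuilder().Build();
            //Illustrative only: save two states individually before reading them back
            await client.SaveStateAsync(DAPR_STORE_NAME, "order_1", "100");
            await client.SaveStateAsync(DAPR_STORE_NAME, "order_2", "200");
            //Using Dapr SDK to retrieve multiple states
            IReadOnlyList<BulkStateItem> multipleStateResult = await client.GetBulkStateAsync(DAPR_STORE_NAME, new List<string> { "order_1", "order_2" }, parallelism: 1);
        }
    }
}
```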
{{% /codetab %}}
{{% codetab %}}


@ -3,7 +3,7 @@ type: docs
title: "Initialize Dapr in your local environment"
linkTitle: "Init Dapr locally"
weight: 20
description: "Fetch the Dapr sidecar binaries and install them locally using `dapr-init`"
description: "Fetch the Dapr sidecar binaries and install them locally using `dapr init`"
aliases:
- /getting-started/set-up-dapr/install-dapr/
---
@ -125,4 +125,5 @@ explorer "%USERPROFILE%\.dapr\"
<br>
{{< button text="Next step: Try Dapr quickstarts >>" page="getting-started/_index.md" >}}
{{< button text="Next step: Use the Dapr API >>" page="getting-started/get-started-api.md" >}}


@ -6,7 +6,7 @@ weight: 70
description: "Get started with Dapr's Publish and Subscribe building block"
---
Let's take a look at Dapr's [Publish and Subscribe (Pub/sub) building block]({{< ref pubsub >}}). In this quickstart, you will run a publisher microservice and a subscriber microservice to demonstrate how Dapr enables a Pub/sub pattern.
Let's take a look at Dapr's [Publish and Subscribe (Pub/sub) building block]({{< ref pubsub >}}). In this Quickstart, you will run a publisher microservice and a subscriber microservice to demonstrate how Dapr enables a Pub/sub pattern.
1. Using a publisher service, developers can repeatedly publish messages to a topic.
1. [A Pub/sub component](https://docs.dapr.io/concepts/components-concept/#pubsub-brokers) queues or brokers those messages. Our example below uses Redis, you can use RabbitMQ, Kafka, etc.
@ -14,7 +14,7 @@ Let's take a look at Dapr's [Publish and Subscribe (Pub/sub) building block]({{<
<img src="/images/pubsub-quickstart/pubsub-diagram.png" width=800 style="padding-bottom:15px;">
Select your preferred language-specific Dapr SDK before proceeding with the quickstart.
Select your preferred language-specific Dapr SDK before proceeding with the Quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" "Go" >}}
<!-- Python -->
@ -73,7 +73,8 @@ with DaprClient() as client:
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
In a new terminal window, from the root of the Quickstarts clone directory,
navigate to the `order-processor` directory.
```bash
cd pub_sub/python/sdk/order-processor
@ -161,7 +162,7 @@ When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` and runs a
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
The Redis `pubsub.yaml` file included for this Quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
@ -241,7 +242,8 @@ await client.pubsub.publish(PUBSUB_NAME, PUBSUB_TOPIC, order);
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
In a new terminal window, from the root of the Quickstarts clone directory,
navigate to the `order-processor` directory.
```bash
cd pub_sub/javascript/sdk/order-processor
@ -315,7 +317,7 @@ When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` and runs a
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
The Redis `pubsub.yaml` file included for this Quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
@ -392,7 +394,8 @@ Console.WriteLine("Published data: " + order);
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
In a new terminal window, from the root of the Quickstarts clone directory,
navigate to the `order-processor` directory.
```bash
cd pub_sub/csharp/sdk/order-processor
@ -466,7 +469,7 @@ When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` and runs a
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
The Redis `pubsub.yaml` file included for this Quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
@ -548,7 +551,8 @@ logger.info("Published data: " + order.getOrderId());
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
In a new terminal window, from the root of the Quickstarts clone directory,
navigate to the `order-processor` directory.
```bash
cd pub_sub/java/sdk/order-processor
@ -626,7 +630,7 @@ When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` and runs a
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
The Redis `pubsub.yaml` file included for this Quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
@ -683,7 +687,7 @@ In a terminal window, navigate to the `checkout` directory.
cd pub_sub/go/sdk/checkout
```
Install the dependencies:
Install the dependencies and build the application:
```bash
go build app.go
@ -699,7 +703,7 @@ In the `checkout` publisher, we're publishing the orderId message to the Redis i
```go
if err := client.PublishEvent(ctx, PUBSUB_NAME, PUBSUB_TOPIC, []byte(order)); err != nil {
panic(err)
panic(err)
}
fmt.Sprintf("Published data: ", order)
@ -707,13 +711,14 @@ fmt.Sprintf("Published data: ", order)
### Step 3: Subscribe to topics
In a new terminal window, navigate to the `order-processor` directory.
In a new terminal window, from the root of the quickstarts clone directory,
navigate to the `order-processor` directory.
```bash
cd pub_sub/go/sdk/order-processor
```
Install the dependencies:
Install the dependencies and build the application:
```bash
go build app.go
@ -742,6 +747,16 @@ Publisher output:
```
== APP == dapr client initializing for: 127.0.0.1:63293
== APP == Published data: {"orderId":1}
== APP == Published data: {"orderId":2}
== APP == Published data: {"orderId":3}
== APP == Published data: {"orderId":4}
== APP == Published data: {"orderId":5}
== APP == Published data: {"orderId":6}
== APP == Published data: {"orderId":7}
== APP == Published data: {"orderId":8}
== APP == Published data: {"orderId":9}
== APP == Published data: {"orderId":10}
```
Subscriber output:
@ -759,6 +774,8 @@ Subscriber output:
== APP == Subscriber received: {"orderId":10}
```
Note: the order in which they are received may vary.
#### `pubsub.yaml` component file
When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` and runs a Redis container on your local machine, located:
@ -768,7 +785,7 @@ When you run `dapr init`, Dapr creates a default Redis `pubsub.yaml` and runs a
With the `pubsub.yaml` component, you can easily swap out underlying components without application code changes.
The Redis `pubsub.yaml` file included for this quickstart contains the following:
The Redis `pubsub.yaml` file included for this Quickstart contains the following:
```yaml
apiVersion: dapr.io/v1alpha1
@ -799,7 +816,7 @@ In the YAML file:
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.gg/22ZtJrNe).
@ -813,4 +830,4 @@ Join the discussion in our [discord channel](https://discord.gg/22ZtJrNe).
- [Go](https://github.com/dapr/quickstarts/tree/master/pub_sub/go/http)
- Learn more about [Pub/sub as a Dapr building block]({{< ref pubsub-overview >}})
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}


@ -10,11 +10,11 @@ With [Dapr's Service Invocation building block](https://docs.dapr.io/developing-
<img src="/images/serviceinvocation-quickstart/service-invocation-overview.png" width=800 alt="Diagram showing the steps of service invocation" style="padding-bottom:25px;">
Dapr offers several methods for service invocation, which you can choose depending on your scenario. For this quickstart, you'll enable the checkout service to invoke a method using HTTP proxy in the order-processor service.
Dapr offers several methods for service invocation, which you can choose depending on your scenario. For this Quickstart, you'll enable the checkout service to invoke a method using HTTP proxy in the order-processor service.
Learn more about Dapr's methods for service invocation in the [overview article]({{< ref service-invocation-overview.md >}}).
Select your preferred language before proceeding with the quickstart.
Select your preferred language before proceeding with the Quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" "Go" >}}
<!-- Python -->
@ -87,7 +87,7 @@ pip3 install -r requirements.txt
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-port 6001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- python3 app.py
dapr run --app-port 7001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- python3 app.py
```
```py
@ -99,7 +99,7 @@ def getOrder():
'ContentType': 'application/json'}
app.run(port=5001)
app.run(port=7001)
```
### View the Service Invocation outputs
@ -208,7 +208,7 @@ npm install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-port 6001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- npm start
dapr run --app-port 5001 --app-id order-processor --app-protocol http --dapr-http-port 3501 -- npm start
```
```javascript
@ -619,7 +619,7 @@ Dapr invokes an application on any Dapr instance. In the code, the sidecar progr
{{% /tabs %}}
## Tell us what you think!
We're continuously working to improve our quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this Quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.gg/22ZtJrNe).


@ -112,7 +112,7 @@ Verify you have an [Azure subscription](https://azure.microsoft.com/free/).
Dapr defines resources to use for building block functionality with components. These steps go through how to connect the resources you created above to Dapr for state and pub/sub.
#### Locate your component filese
#### Locate your component files
{{< tabs "Self-Hosted" "Kubernetes" >}}
@ -336,4 +336,4 @@ kubectl apply -f redis-pubsub.yaml
{{< /tabs >}}
## Next steps
[Try out a Dapr quickstart]({{< ref quickstarts.md >}})
[Try out a Dapr quickstart]({{< ref quickstarts.md >}})


@ -0,0 +1,60 @@
---
type: docs
title: "How-To: Handle large http header size"
linkTitle: "HTTP header size"
weight: 6000
description: "Configure a larger http read buffer size"
---
Dapr has a default limit of 4KB for the http header read buffer size. When sending http headers that are bigger than the default 4KB, you can increase this value. Otherwise, you may encounter a `Too big request header` service invocation error. You can change the http header size by using the `dapr.io/http-read-buffer-size` annotation or `--dapr-http-read-buffer-size` flag when using the CLI.
{{< tabs Self-hosted Kubernetes >}}
{{% codetab %}}
When running in self hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use non-default http header size:
```bash
dapr run --dapr-http-read-buffer-size 16 node app.js
```
This tells Dapr to set maximum read buffer size to `16` KB.
{{% /codetab %}}
{{% codetab %}}
On Kubernetes, set the following annotations in your deployment YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        dapr.io/app-port: "8000"
        dapr.io/http-read-buffer-size: "16"
...
```
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}})


@ -3,15 +3,13 @@ type: docs
title: "Production guidelines on Kubernetes"
linkTitle: "Production guidelines"
weight: 40000
description: "Recommendations and practices for deploying Dapr to a Kubernetes cluster in a production ready configuration"
description: "Recommendations and practices for deploying Dapr to a Kubernetes cluster in a production-ready configuration"
---
## Cluster capacity requirements
For a production ready Kubernetes cluster deployment, it is recommended you run a cluster of at least 3 worker nodes to support a highly-available control plane installation.
Use the following resource settings might serve as a starting point. Requirements will vary depending on cluster size and other factors, so individual testing is needed to find the right values for your environment:
*Note: For more info on CPU and Memory resource units and their meaning, see [this](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes) link*
For a production-ready Kubernetes cluster deployment, it is recommended you run a cluster of at least 3 worker nodes to support a highly-available control plane installation.
Use the following resource settings as a starting point. Requirements will vary depending on cluster size and other factors, so perform individual testing to find the right values for your environment:
| Deployment | CPU | Memory
|-------------|-----|-------
@ -21,6 +19,11 @@ Use the following resource settings might serve as a starting point. Requirement
| **Placement** | Limit: 1, Request: 250m | Limit: 150Mi, Request: 75Mi
| **Dashboard** | Limit: 200m, Request: 50m | Limit: 200Mi, Request: 20Mi
{{% alert title="Note" color="primary" %}}
For more info, read the [concept article on CPU and Memory resource units and their meaning](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes).
{{% /alert %}}
### Helm
When installing Dapr using Helm, no default limit/request values are set. Each component has a `resources` option (for example, `dapr_dashboard.resources`), which you can use to tune the Dapr control plane to fit your environment. The [Helm chart readme](https://github.com/dapr/dapr/blob/master/charts/dapr/README.md) has detailed information and examples. For local/dev installations, you might simply want to skip configuring the `resources` options.
@ -43,23 +46,26 @@ The specific annotations related to resource constraints are:
- `dapr.io/sidecar-cpu-request`
- `dapr.io/sidecar-memory-request`
If not set, the dapr sidecar will run without resource settings, which may lead to issues. For a production-ready setup it is strongly recommended to configure these settings.
If not set, the Dapr sidecar will run without resource settings, which may lead to issues. For a production-ready setup it is strongly recommended to configure these settings.
For more details on configuring resource in Kubernetes see [Assign Memory Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/) and [Assign CPU Resources to Containers and Pods](https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/).
Example settings for the dapr sidecar in a production-ready setup:
Example settings for the Dapr sidecar in a production-ready setup:
| CPU | Memory |
|-----|--------|
| Limit: 300m, Request: 100m | Limit: 1000Mi, Request: 250Mi
*Note: Since Dapr is intended to do much of the I/O heavy lifting for your app, it's expected that the resources given to Dapr enable you to drastically reduce the resource allocations for the application*
{{% alert title="Note" color="primary" %}}
Since Dapr is intended to do much of the I/O heavy lifting for your app, it's expected that the resources given to Dapr enable you to drastically reduce the resource allocations for the application.
{{% /alert %}}
The CPU and memory limits above account for the fact that Dapr is intended to a high number of I/O bound operations. It is strongly recommended that you use a monitoring tool to baseline the sidecar (and app) containers and tune these settings based on those baselines.
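As an illustration only (a sketch: the limit annotation names are assumed to follow the same `dapr.io/sidecar-*` pattern as the request annotations listed above, and the values come from the example table), the sidecar resource settings are applied as annotations on the application's pod template:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "myapp"
        # Example production-ready sidecar resource settings (values from the table above)
        dapr.io/sidecar-cpu-limit: "300m"
        dapr.io/sidecar-cpu-request: "100m"
        dapr.io/sidecar-memory-limit: "1000Mi"
        dapr.io/sidecar-memory-request: "250Mi"
...
```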
## Highly-available mode
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows for the Dapr control plane to survive node failures and other outages.
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows the Dapr control plane to retain 3 running instances and survive node failures and other outages.
For a new Dapr deployment, the HA mode can be set with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
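As a sketch of the two options (the exact flags are described on the linked pages; treat these as assumptions):
```bash
# New installation in HA mode with the Dapr CLI
dapr init -k --enable-ha=true

# New installation in HA mode with Helm
helm install dapr dapr/dapr --namespace dapr-system --create-namespace --set global.ha.enabled=true --wait
```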
@ -67,10 +73,10 @@ For an existing Dapr deployment, enabling the HA mode requires additional steps.
## Deploying Dapr with Helm
For a full guide on deploying Dapr with Helm visit [this guide]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
### Parameters file
It is recommended to create a values file instead of specifying parameters on the command-line. This file should be checked in to source control so that you can track changes made to it.
Instead of specifying parameters on the command line, it's recommended to create a values file. This file should be checked into source control so that you can track its changes.
For a full list of all available options you can set in the values file (or by using the `--set` command-line option), see https://github.com/dapr/dapr/blob/master/charts/dapr/README.md.
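For example (a sketch; the file name `values.yml` is a placeholder), the tracked values file is then passed to Helm at install or upgrade time:
```bash
# Install or upgrade the Dapr control plane from a source-controlled values file
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --create-namespace \
  --values values.yml \
  --wait
```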
@ -106,7 +112,10 @@ kubectl get pods --namespace dapr-system
This command will run 3 replicas of each control plane service in the dapr-system namespace.
*Note: The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref "kubernetes-hybrid-clusters.md" >}})*
{{% alert title="Note" color="primary" %}}
The Dapr Helm chart automatically deploys with affinity for nodes with the label `kubernetes.io/os=linux`. You can deploy the Dapr control plane to Windows nodes, but most users should not need to. For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster]({{< ref "kubernetes-hybrid-clusters.md" >}}).
{{% /alert %}}
## Upgrading Dapr with Helm
@ -144,18 +153,21 @@ nodeapp 3000 16h 2020-07-29 17:16.22
### Enabling high-availability in an existing Dapr deployment
Enabling HA mode for an existing Dapr deployment requires two steps.
Enabling HA mode for an existing Dapr deployment requires two steps:
First, delete the existing placement stateful set:
```bash
kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
```
Second, issue the upgrade command:
```bash
helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
```
1. Delete the existing placement stateful set:
The reason for deletion of the placement stateful set is because in the HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows for limited fields in stateful sets to be patched, subsequently failing upgrade of the placement service.
```bash
kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
```
1. Issue the upgrade command:
```bash
helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
```
You delete the placement stateful set because, in the HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows for limited fields in stateful sets to be patched, subsequently failing upgrade of the placement service.
Deletion of the existing placement stateful set is safe. The agents will reconnect and re-register with the newly created placement service, which will persist its table in Raft.


@ -67,6 +67,7 @@ RUN wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh
ARG DAPR_BUILD_DIR
COPY $DAPR_BUILD_DIR /opt/dapr
ENV PATH="/opt/dapr/:${PATH}"
RUN dapr init --slim
# Install your app
WORKDIR /app
@ -176,4 +177,4 @@ There are published Docker images for each of the Dapr components available on [
- `latest-arm`: The latest release version for ARM, **ONLY** use for development purposes.
- `edge-arm`: The latest edge build for ARM (master).
- `major.minor.patch-arm`: A release version for ARM.
- `major.minor.patch-rc.iteration-arm`: A release candidate for ARM.
- `major.minor.patch-rc.iteration-arm`: A release candidate for ARM.


@ -39,17 +39,18 @@ The table below shows the versions of Dapr releases that have been tested togeth
| May 26th 2021 | 1.2.0</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Unsupported |
| Jun 16th 2021 | 1.2.1</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Unsupported |
| Jun 16th 2021 | 1.2.2</br> | 1.2.0 | Java 1.1.0 </br>Go 1.1.0 </br>PHP 1.1.0 </br>Python 1.1.0 </br>.NET 1.2.0 | 0.6.0 | Unsupported |
| Jul 26th 2021 | 1.3</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Unsupported |
| Sep 14th 2021 | 1.3.1</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Unsupported |
| Sep 15th 2021 | 1.4</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Sep 22nd 2021 | 1.4.1</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported
| Sep 24th 2021 | 1.4.2</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Oct 7th 2021 | 1.4.3</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Dev 6th 2021 | 1.4.4</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Jul 26th 2021 | 1.3</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Unsupported |
| Sep 14th 2021 | 1.3.1</br> | 1.3.0 | Java 1.2.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.2.0 </br>.NET 1.3.0 | 0.7.0 | Unsupported |
| Sep 15th 2021 | 1.4</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Sep 22nd 2021 | 1.4.1</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Sep 24th 2021 | 1.4.2</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Oct 7th 2021 | 1.4.3</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Dev 6th 2021 | 1.4.4</br> | 1.4.0 | Java 1.3.0 </br>Go 1.2.0 </br>PHP 1.1.0 </br>Python 1.3.0 </br>.NET 1.4.0 | 0.8.0 | Unsupported |
| Nov 11th 2021 | 1.5.0</br> | 1.5.0 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.1.0 </br>Python 1.4.0 </br>.NET 1.5.0 </br>JS 1.0.2 | 0.9.0 | Supported |
| Dec 6th 2021 | 1.5.1</br> | 1.5.1 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.1.0 </br>Python 1.4.0 </br>.NET 1.5.0 </br>JS 1.0.2 | 0.9.0 | Supported |
| Jan 25th 2022 | 1.6.0</br> | 1.6.0 | Java 1.4.0 </br>Go 1.3.1 </br>PHP 1.1.0 </br>Python 1.5.0 </br>.NET 1.6.0 </br>JS 2.0.0 | 0.9.0 | Supported (current) |
| Mar 25th 2022 | 1.5.2</br> | 1.6.0 | Java 1.3.0 </br>Go 1.3.0 </br>PHP 1.1.0 </br>Python 1.4.0 </br>.NET 1.5.0 </br>JS 1.0.2 | 0.9.0 | Supported |
| Jan 25th 2022 | 1.6.0</br> | 1.6.0 | Java 1.4.0 </br>Go 1.3.1 </br>PHP 1.1.0 </br>Python 1.5.0 </br>.NET 1.6.0 </br>JS 2.0.0 | 0.9.0 | Supported |
| Mar 25th 2022 | 1.6.1</br> | 1.6.0 | Java 1.4.0 </br>Go 1.3.1 </br>PHP 1.1.0 </br>Python 1.5.0 </br>.NET 1.6.0 </br>JS 2.0.0 | 0.9.0 | Supported (current) |
## Upgrade paths
After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example an upgrade from v1.0 to v1.2 may need go pass through v1.1
@ -64,28 +65,36 @@ General guidance on upgrading can be found for [self hosted mode]({{<ref self-ho
| | 1.1.2 | 1.2.2 |
| | 1.2.2 | 1.3.1 |
| | 1.3.1 | 1.4.4 |
| | 1.4.4 | 1.5.1 |
| | 1.5.1 | 1.6.0 |
| | 1.4.4 | 1.5.2 |
| | 1.5.2 | 1.6.0 |
| | 1.6.0 | 1.6.1|
| 1.1.0 to 1.1.2 | N/A | 1.2.2 |
| | 1.2.2 | 1.3.1 |
| | 1.3.1 | 1.4.4 |
| | 1.4.4 | 1.5.1 |
| | 1.5.1 | 1.6.0 |
| | 1.4.4 | 1.5.2 |
| | 1.5.2 | 1.6.0 |
| | 1.6.0 | 1.6.1 |
| 1.2.0 to 1.2.2 | N/A | 1.3.1 |
| | 1.3.1 | 1.4.4 |
| | 1.4.4 | 1.5.1 |
| | 1.5.1 | 1.6.0 |
| | 1.5.2 | 1.6.0 |
| | 1.6.0 | 1.6.1 |
| 1.3.0 | N/A | 1.3.1 |
| | 1.3.1 | 1.4.4 |
| | 1.4.4 | 1.5.1 |
| | 1.5.1 | 1.6.0 |
| | 1.4.4 | 1.5.2 |
| | 1.5.2 | 1.6.0 |
| | 1.6.0 | 1.6.1 |
| 1.3.1 | N/A | 1.4.4 |
| | 1.4.4 | 1.5.1 |
| | 1.5.1 | 1.6.0 |
| | 1.6.0 | 1.6.1 |
| 1.4.0 to 1.4.2 | N/A | 1.4.4 |
| | 1.4.4 | 1.5.1 |
| | 1.5.1 | 1.6.0 |
| 1.5.0 to 1.5.1 | N/A | 1.6.0 |
| | 1.4.4 | 1.5.2 |
| | 1.5.2 | 1.6.0 |
| | 1.6.0 | 1.6.1 |
| 1.5.0 to 1.5.2 | N/A | 1.6.0 |
| | 1.6.0 | 1.6.1 |
| 1.6.0 | N/A | 1.6.1 |
## Feature and deprecations
There is a process for announcing feature deprecations. Deprecations are applied two (2) releases after the release in which they were announced. For example Feature X is announced to be deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.


@ -163,17 +163,6 @@ Search the Dapr runtime logs and look for any pub/sub errors:
kubectl logs <name-of-pod> daprd
```
## The Dapr Operator pod keeps crashing
Check that there's only one installation of the Dapr Operator in your cluster.
Find out by running
```bash
kubectl get pods -l app=dapr-operator --all-namespaces
```
If two pods appear, delete the redundant Dapr installation.
## I'm getting 500 Error responses when calling Dapr
This means there are some internal issue inside the Dapr runtime.


@ -6,10 +6,10 @@ description: "Detailed documentation on the actors API"
weight: 500
---
Dapr provides native, cross-platform and cross-language virtual actor capabilities.
Dapr provides native, cross-platform, and cross-language virtual actor capabilities.
Besides the [language specific SDKs]({{<ref sdks>}}), a developer can invoke an actor using the API endpoints below.
## User service code calling dapr
## User service code calling Dapr
### Invoke actor method
@ -33,10 +33,10 @@ XXX | Status code from upstream call
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
method | The name of the method to invoke.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`method` | The name of the method to invoke.
> Note, all URL parameters are case-sensitive.
@ -49,8 +49,7 @@ curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/method/shoot \
-H "Content-Type: application/json"
```
Example of invoking a method on an actor that takes parameters: You can provided the method parameters and values in the body of the request, for example in curl using -d "{\"param\":\"value\"}"
You can provide the method parameters and values in the body of the request, for example in curl using `-d "{\"param\":\"value\"}"`. Example of invoking a method on an actor that takes parameters:
```shell
curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
@ -59,6 +58,7 @@ curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
"destination": "Hoth"
}'
```
or
```shell
@ -66,11 +66,12 @@ curl -X POST http://localhost:3500/v1.0/actors/x-wing/33/method/fly \
-H "Content-Type: application/json"
-d "{\"destination\":\"Hoth\"}"
```
The response (the method return) from the remote endpoint is returned in the request body.
### Actor state transactions
Persists the changed to the state for an actor as a multi-item transaction.
Persists the change to the state for an actor as a multi-item transaction.
***Note that this operation is dependant on a using state store component that supports multi-item transactions.***
@ -88,15 +89,13 @@ Code | Description
400 | Actor not found
500 | Request failed
#### URL Parameters
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
> Note, all URL parameters are case-sensitive.
@ -141,15 +140,14 @@ Code | Description
400 | Actor not found
500 | Request failed
#### URL Parameters
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
key | The key for the state value.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`key` | The key for the state value.
> Note, all URL parameters are case-sensitive.
@ -184,12 +182,18 @@ A JSON object with the following fields:
| Field | Description |
|-------|--------------|
| dueTime | Specifies the time after which the reminder is invoked, its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format
| period | Specifies the period between different invocations, its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format or ISO 8601 duration format with optional recurrence.
| `dueTime` | Specifies the time after which the reminder is invoked. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration)
| `period` | Specifies the period between different invocations. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) or ISO 8601 duration format with optional recurrence.
`period` field supports `time.Duration` format and ISO 8601 format (with some limitations). Only duration format of ISO 8601 duration `Rn/PnYnMnWnDTnHnMnS` is supported for `period`. Here `Rn/` specifies that the reminder will be invoked `n` number of times. It should be a positive integer greater than zero. If certain values are zero, the `period` can be shortened, for example 10 seconds can be specified in ISO 8601 duration as `PT10S`. If `Rn/` is not specified the reminder will run infinite number of times until deleted.
`period` field supports `time.Duration` format and ISO 8601 format with some limitations. For `period`, only duration format of ISO 8601 duration `Rn/PnYnMnWnDTnHnMnS` is supported. `Rn/` specifies that the reminder will be invoked `n` number of times.
- `n` should be a positive integer greater than 0.
- If certain values are 0, the `period` can be shortened; for example, 10 seconds can be specified in ISO 8601 duration as `PT10S`.
If `Rn/` is not specified, the reminder will run an infinite number of times until deleted.
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
```json
{
"dueTime":"0h0m3s0ms",
@ -197,7 +201,8 @@ The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
}
```
A `dueTime` of 0 means to fire immediately. The following body means to fire immediately, then every 9 seconds.
A `dueTime` of 0 means to fire immediately. The following body means to fire immediately, then every 9 seconds.
```json
{
"dueTime":"0h0m0s0ms",
@ -205,7 +210,8 @@ A `dueTime` of 0 means to fire immediately. The following body means to fire im
}
```
To configure the reminder to fire once only, the period should be set to empty string. The following specifies a `dueTime` of 3 seconds with a period of empty string, which means the reminder will fire in 3 seconds and then never fire again.
To configure the reminder to fire only once, the period should be set to empty string. The following specifies a `dueTime` of 3 seconds with a period of empty string, which means the reminder will fire in 3 seconds and then never fire again.
```json
{
"dueTime":"0h0m3s0ms",
@ -225,10 +231,10 @@ Code | Description
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
name | The name of the reminder to create.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`name` | The name of the reminder to create.
> Note, all URL parameters are case-sensitive.
@ -265,10 +271,10 @@ Code | Description
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
name | The name of the reminder to get.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`name` | The name of the reminder to get.
> Note, all URL parameters are case-sensitive.
@ -310,10 +316,10 @@ Code | Description
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
name | The name of the reminder to delete.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`name` | The name of the reminder to delete.
> Note, all URL parameters are case-sensitive.
@ -337,6 +343,7 @@ POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/timers/<n
Body:
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
```json
{
"dueTime":"0h0m3s0ms",
@ -345,6 +352,7 @@ The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
```
A `dueTime` of 0 means to fire immediately. The following body means to fire immediately, then every 9 seconds.
```json
{
"dueTime":"0h0m0s0ms",
@ -364,10 +372,10 @@ Code | Description
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
name | The name of the timer to create.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`name` | The name of the timer to create.
> Note, all URL parameters are case-sensitive.
@ -405,10 +413,10 @@ Code | Description
Parameter | Description
--------- | -----------
daprPort | The Dapr port.
actorType | The actor type.
actorId | The actor ID.
name | The name of the timer to delete.
`daprPort` | The Dapr port.
`actorType` | The actor type.
`actorId` | The actor ID.
`name` | The name of the timer to delete.
> Note, all URL parameters are case-sensitive.
@ -421,7 +429,7 @@ curl -X DELETE http://localhost:3500/v1.0/actors/stormtrooper/50/timers/checkReb
### Get registered actors
Gets the registered actors types for this app and the Dapr actor configuration settings.
Get the registered actors types for this app and the Dapr actor configuration settings.
#### HTTP Request
@ -440,7 +448,7 @@ Code | Description
Parameter | Description
--------- | -----------
appPort | The application port.
`appPort` | The application port.
#### Examples
@ -453,18 +461,17 @@ curl -X GET http://localhost:3000/dapr/config \
The above command returns the config (all fields are optional):
Parameter | Description
----------|------------
entities | The actor types this app supports.
actorIdleTimeout | Specifies how long to wait before deactivating an idle actor. An actor is idle if no actor method calls and no reminders have fired on it.
actorScanInterval | A duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than the actorIdleTimeout will be deactivated.
drainOngoingCallTimeout | A duration used when in the process of draining rebalanced actors. This specifies how long to wait for the current active actor method to finish. If there is no current actor method call, this is ignored.
drainRebalancedActors | A bool. If true, Dapr will wait for `drainOngoingCallTimeout` to allow a current actor call to complete before trying to deactivate an actor. If false, do not wait.
reentrancy | A configuration object that holds the options for actor reentrancy.
enabled | A flag in the reentrancy configuration that is needed to enable reentrancy.
maxStackDepth | A value in the reentrancy configuration that controls how many reentrant calls be made to the same actor.
entitiesConfig | Array of entity configurations that allow per actor type settings. Any configuration defined here must have an entity that maps back into the root level entities.
`entities` | The actor types this app supports.
`actorIdleTimeout` | Specifies how long to wait before deactivating an idle actor. An actor is idle if no actor method calls and no reminders have fired on it.
`actorScanInterval` | A duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than the actorIdleTimeout will be deactivated.
`drainOngoingCallTimeout` | A duration used when in the process of draining rebalanced actors. This specifies how long to wait for the current active actor method to finish. If there is no current actor method call, this is ignored.
`drainRebalancedActors` | A bool. If true, Dapr will wait for `drainOngoingCallTimeout` to allow a current actor call to complete before trying to deactivate an actor. If false, do not wait.
`reentrancy` | A configuration object that holds the options for actor reentrancy.
`enabled` | A flag in the reentrancy configuration that is needed to enable reentrancy.
`maxStackDepth` | A value in the reentrancy configuration that controls how many reentrant calls be made to the same actor.
`entitiesConfig` | Array of entity configurations that allow per actor type settings. Any configuration defined here must have an entity that maps back into the root level entities.
```json
{
@ -492,7 +499,7 @@ entitiesConfig | Array of entity configurations that allow per actor type settin
### Deactivate actor
Deactivates an actor by persisting the instance of the actor to the state store with the specified actorId
Deactivates an actor by persisting the instance of the actor to the state store with the specified actorId.
#### HTTP Request
@ -512,15 +519,15 @@ Code | Description
Parameter | Description
--------- | -----------
appPort | The application port.
actorType | The actor type.
actorId | The actor ID.
`appPort` | The application port.
`actorType` | The actor type.
`actorId` | The actor ID.
> Note, all URL parameters are case-sensitive.
#### Examples
Example of deactivating an actor: The example deactives the actor type stormtrooper that has actorId of 50
The following example deactivates the actor type `stormtrooper` that has `actorId` of 50.
```shell
curl -X DELETE http://localhost:3000/actors/stormtrooper/50 \
@ -529,7 +536,12 @@ curl -X DELETE http://localhost:3000/actors/stormtrooper/50 \
### Invoke actor method
Invokes a method for an actor with the specified methodName where parameters to the method are passed in the body of the request message and return values are provided in the body of the response message. If the actor is not already running, the app side should [activate](#activating-an-actor) it.
Invokes a method for an actor with the specified `methodName` where:
- Parameters to the method are passed in the body of the request message.
- Return values are provided in the body of the response message.
If the actor is not already running, the app side should [activate](#activating-an-actor) it.
#### HTTP Request
@ -549,16 +561,16 @@ Code | Description
Parameter | Description
--------- | -----------
appPort | The application port.
actorType | The actor type.
actorId | The actor ID.
methodName | The name of the method to invoke.
`appPort` | The application port.
`actorType` | The actor type.
`actorId` | The actor ID.
`methodName` | The name of the method to invoke.
> Note, all URL parameters are case-sensitive.
#### Examples
Example of invoking a method for an actor: The example calls the performAction method on the actor type stormtrooper that has actorId of 50
The following example calls the `performAction` method on the actor type `stormtrooper` that has `actorId` of 50.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/performAction \
@ -587,16 +599,16 @@ Code | Description
Parameter | Description
--------- | -----------
appPort | The application port.
actorType | The actor type.
actorId | The actor ID.
reminderName | The name of the reminder to invoke.
`appPort` | The application port.
`actorType` | The actor type.
`actorId` | The actor ID.
`reminderName` | The name of the reminder to invoke.
> Note, all URL parameters are case-sensitive.
#### Examples
Example of invoking a reminder for an actor: The example calls the checkRebels reminder method on the actor type stormtrooper that has actorId of 50
The following example calls the `checkRebels` reminder method on the actor type `stormtrooper` that has `actorId` of 50.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/remind/checkRebels \
@ -605,7 +617,7 @@ curl -X POST http://localhost:3000/actors/stormtrooper/50/method/remind/checkReb
### Invoke timer
Invokes a timer for an actor with the specified timerName. If the actor is not already running, the app side should [activate](#activating-an-actor) it.
Invokes a timer for an actor with the specified `timerName`. If the actor is not already running, the app side should [activate](#activating-an-actor) it.
#### HTTP Request
@ -625,16 +637,16 @@ Code | Description
Parameter | Description
--------- | -----------
appPort | The application port.
actorType | The actor type.
actorId | The actor ID.
timerName | The name of the timer to invoke.
`appPort` | The application port.
`actorType` | The actor type.
`actorId` | The actor ID.
`timerName` | The name of the timer to invoke.
> Note, all URL parameters are case-sensitive.
#### Examples
Example of invoking a timer for an actor: The example calls the checkRebels timer method on the actor type stormtrooper that has actorId of 50
The following example calls the `checkRebels` timer method on the actor type `stormtrooper` that has `actorId` of 50.
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50/method/timer/checkRebels \
@ -644,7 +656,7 @@ curl -X POST http://localhost:3000/actors/stormtrooper/50/method/timer/checkRebe
### Health check
Probes the application for a response to signal to Dapr that the app is healthy and running.
Any other response status code other than `200` will be considered as an unhealthy response.
Any response status code other than `200` will be considered an unhealthy response.
A response body is not required.
@ -664,7 +676,7 @@ Code | Description
Parameter | Description
--------- | -----------
appPort | The application port.
`appPort` | The application port.
#### Examples
@ -676,19 +688,21 @@ curl -X GET http://localhost:3000/healthz \
## Activating an Actor
Conceptually, activating an actor means creating the actor's object and adding the actor to a tracking table. Here is an [example](https://github.com/dapr/dotnet-sdk/blob/6c271262231c41b21f3ca866eb0d55f7ce8b7dbc/src/Dapr.Actors/Runtime/ActorManager.cs#L199) from the .NET SDK.
Conceptually, activating an actor means creating the actor's object and adding the actor to a tracking table. [Review an example from the .NET SDK](https://github.com/dapr/dotnet-sdk/blob/6c271262231c41b21f3ca866eb0d55f7ce8b7dbc/src/Dapr.Actors/Runtime/ActorManager.cs#L199).
## Querying actor state externally
In order to enable visibility into the state of an actor and allow for complex scenarios such as state aggregation, Dapr saves actor state in external state stores such as databases. As such, it is possible to query for an actor state externally by composing the correct key or query.
To enable visibility into the state of an actor and allow for complex scenarios like state aggregation, Dapr saves actor state in external state stores, such as databases. As such, it is possible to query for an actor state externally by composing the correct key or query.
The state namespace created by Dapr for actors is composed of the following items:
- App ID - Represents the unique ID given to the Dapr application.
- Actor Type - Represents the type of the actor.
- Actor ID - Represents the unique ID of the actor instance for an actor type.
- Key - A key for the specific state value. An actor ID can hold multiple state keys.
- App ID: Represents the unique ID given to the Dapr application.
- Actor Type: Represents the type of the actor.
- Actor ID: Represents the unique ID of the actor instance for an actor type.
- Key: A key for the specific state value. An actor ID can hold multiple state keys.
The following example shows how to construct a key for the state of an actor instance under the `myapp` App ID namespace:
`myapp||cat||hobbit||food`
In the example above, we are getting the value for the state key `food`, for the actor ID `hobbit` with an actor type of `cat`, under the App ID namespace of `myapp`.
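For illustration only, assuming the `myapp` application uses a Redis-backed state store, the composed key can be inspected directly with `redis-cli`; the exact storage layout (plain string versus hash) depends on the state store in use:

```shell
# Hypothetical direct lookup of actor state in Redis.
# Key scheme: <app id>||<actor type>||<actor id>||<state key>
redis-cli HGETALL "myapp||cat||hobbit||food"   # if the store writes hashes
redis-cli GET "myapp||cat||hobbit||food"       # if the store writes plain strings
```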


@ -9,7 +9,7 @@ weight: 300
## Publish a message to a given topic
This endpoint lets you publish data to multiple consumers who are listening on a `topic`.
Dapr guarantees at least once semantics for this endpoint.
Dapr guarantees At-Least-Once semantics for this endpoint.
### HTTP Request
@ -30,10 +30,10 @@ Code | Description
Parameter | Description
--------- | -----------
daprPort | the Dapr port
pubsubname | the name of pubsub component
topic | the name of the topic
metadata | query parameters for metadata as described below
`daprPort` | The Dapr port
`pubsubname` | The name of the pubsub component
`topic` | The name of the topic
`metadata` | Query parameters for metadata as described below
> Note, all URL parameters are case-sensitive.
@ -47,20 +47,20 @@ curl -X POST http://localhost:3500/v1.0/publish/pubsubName/deathStarStatus \
### Headers
The `Content-Type` header tells Dapr which content type your data adheres to when constructing a CloudEvent envelope.
The value of the `Content-Type` header populates the `datacontenttype` field in the CloudEvent.
The `Content-Type` header tells Dapr which content type your data adheres to when constructing a CloudEvent envelope. The `Content-Type` header value populates the `datacontenttype` field in the CloudEvent.
Unless specified, Dapr assumes `text/plain`. If your content type is JSON, use a `Content-Type` header with the value of `application/json`.
If you want to send your own custom CloudEvent, use the `application/cloudevents+json` value for the `Content-Type` header.
#### Metadata
Metadata can be sent via query parameters in the request's URL. It must be prefixed with `metadata.` as shown below.
Metadata can be sent via query parameters in the request's URL. It must be prefixed with `metadata.`, as shown below.
Parameter | Description
--------- | -----------
metadata.ttlInSeconds | the number of seconds for the message to expire as [described here]({{< ref pubsub-message-ttl.md >}})
metadata.rawPayload | boolean to determine if Dapr should publish the event without wrapping it as CloudEvent as [described here]({{< ref pubsub-raw.md >}})
`metadata.ttlInSeconds` | The number of seconds for the message to expire, as [described here]({{< ref pubsub-message-ttl.md >}})
`metadata.rawPayload` | Boolean to determine if Dapr should publish the event without wrapping it as CloudEvent, as [described here]({{< ref pubsub-raw.md >}})
> Additional metadata parameters are available based on each pubsub component.
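For illustration, a publish request that sends JSON data and sets a message TTL via the `metadata.ttlInSeconds` query parameter might look like the following; the pubsub component name, topic, and payload are placeholders:

```shell
curl -X POST "http://localhost:3500/v1.0/publish/pubsubName/deathStarStatus?metadata.ttlInSeconds=120" \
  -H "Content-Type: application/json" \
  -d '{
        "status": "completed"
      }'
```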
@ -80,11 +80,11 @@ GET http://localhost:<appPort>/dapr/subscribe
Parameter | Description
--------- | -----------
appPort | the application port
`appPort` | The application port
#### HTTP Response body
A json encoded array of strings.
A JSON-encoded array of strings.
Example:
@ -109,7 +109,7 @@ Optionally, metadata can be sent via the request body.
Parameter | Description
--------- | -----------
rawPayload | boolean to subscribe to events that do not comply with CloudEvent specification, as [described here]({{< ref pubsub-raw.md >}})
`rawPayload` | Boolean to subscribe to events that do not comply with the CloudEvent specification, as [described here]({{< ref pubsub-raw.md >}})
### Provide route(s) for Dapr to deliver topic events
@ -129,13 +129,14 @@ POST http://localhost:<appPort>/<path>
Parameter | Description
--------- | -----------
appPort | the application port
path | route path from the subscription configuration
`appPort` | The application port
`path` | Route path from the subscription configuration
#### Expected HTTP Response
An HTTP 2xx response denotes successful processing of the message.
For richer response handling, a JSON encoded payload body with the processing status can be sent:
For richer response handling, a JSON-encoded payload body with the processing status can be sent:
```json
{
@ -145,14 +146,14 @@ For richer response handling, a JSON encoded payload body with the processing st
Status | Description
--------- | -----------
SUCCESS | message is processed successfully
RETRY | message to be retried by Dapr
DROP | warning is logged and message is dropped
Others | error, message to be retried by Dapr
`SUCCESS` | Message is processed successfully
`RETRY` | Message to be retried by Dapr
`DROP` | Warning is logged and message is dropped
Others | Error, message to be retried by Dapr
Dapr assumes a JSON encoded payload response without `status` field or an empty payload responses with HTTP 2xx, as `SUCCESS`.
Dapr assumes that a JSON-encoded payload response without a `status` field, or an empty payload response with HTTP 2xx, is a `SUCCESS`.
The HTTP response might be different from HTTP 2xx, the following are Dapr's behavior in different HTTP statuses:
The HTTP response might be different from HTTP 2xx. The following describes Dapr's behavior for different HTTP status codes:
HTTP Status | Description
--------- | -----------
@ -160,7 +161,6 @@ HTTP Status | Description
404 | error is logged and message is dropped
other | warning is logged and message to be retried
## Message envelope
Dapr Pub/Sub adheres to version 1.0 of CloudEvents.
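As a rough sketch, publishing your own CloudEvent (rather than letting Dapr wrap your data) uses the `application/cloudevents+json` content type described earlier. The field values below are illustrative; `specversion`, `id`, `source`, and `type` are the required CloudEvents 1.0 attributes:

```shell
curl -X POST http://localhost:3500/v1.0/publish/pubsubName/deathStarStatus \
  -H "Content-Type: application/cloudevents+json" \
  -d '{
        "specversion": "1.0",
        "id": "illustrative-id-0001",
        "source": "my-publisher",
        "type": "com.example.statusUpdate",
        "datacontenttype": "application/json",
        "data": { "status": "completed" }
      }'
```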


@ -8,7 +8,7 @@ weight: 200
## Component file
A Dapr State Store component yaml file has the following structure:
A Dapr `statestore.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
@ -26,9 +26,10 @@ spec:
value: <VALUE>
```
The ```metadata.name``` is the name of the state store.
The ```spec/metadata``` section is an open key value pair metadata that allows a binding to define connection properties.
| Setting | Description |
| ------- | ----------- |
| `metadata.name` | The name of the state store. |
| `spec/metadata` | An open key-value pair metadata section that allows a binding to define connection properties. |
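Once defined, the component file is registered like any other Dapr component. The commands below are a minimal sketch, assuming the file is named `statestore.yaml`:

```shell
# Kubernetes: register the component with the cluster
kubectl apply -f statestore.yaml

# Self-hosted: place the file in the components directory the sidecar loads
# (by default ~/.dapr/components after running `dapr init`)
cp statestore.yaml ~/.dapr/components/
```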
## Key scheme
@ -58,15 +59,14 @@ POST http://localhost:<daprPort>/v1.0/state/<storename>
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user configured state store component yaml. Please refer Dapr State Store configuration structure mentioned above.
`daprPort` | The Dapr port
`storename` | The `metadata.name` field in the user-configured `statestore.yaml` component file. Refer to the [Dapr state store configuration structure](#component-file) mentioned above.
The optional request metadata is passed via URL query parameters. For example,
```
POST http://localhost:3500/v1.0/state/myStore?metadata.contentType=application/json
```
> Note, all URL parameters are case-sensitive.
> All URL parameters are case-sensitive.
#### Request Body
@ -74,13 +74,13 @@ A JSON array of state objects. Each state object is comprised with the following
Field | Description
---- | -----------
key | state key
value | state value, which can be any byte array
etag | (optional) state ETag
metadata | (optional) additional key-value pairs to be passed to the state store
options | (optional) state operation options, see [state operation options](#optional-behaviors)
`key` | State key
`value` | State value, which can be any byte array
`etag` | (optional) State ETag
`metadata` | (optional) Additional key-value pairs to be passed to the state store
`options` | (optional) State operation options; see [state operation options](#optional-behaviors)
> **ETag format** Dapr runtime treats ETags as opaque strings. The exact ETag format is defined by the corresponding data store.
> **ETag format:** Dapr runtime treats ETags as opaque strings. The exact ETag format is defined by the corresponding data store.
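As an illustration of the request body described above, a save request against a store named `starwars` might look like the following; the key, value, ETag, and options are example values only:

```shell
curl -X POST http://localhost:3500/v1.0/state/starwars \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "planet",
          "value": { "name": "Tatooine" },
          "etag": "1",
          "options": {
            "concurrency": "first-write",
            "consistency": "strong"
          }
        }
      ]'
```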
### HTTP Response
@ -88,9 +88,9 @@ options | (optional) state operation options, see [state operation options](#opt
Code | Description
---- | -----------
204 | State saved
400 | State store is missing or misconfigured or malformed request
500 | Failed to save state
`204` | State saved
`400` | State store is missing or misconfigured, or the request is malformed
`500` | Failed to save state
#### Response Body
@ -130,11 +130,11 @@ GET http://localhost:<daprPort>/v1.0/state/<storename>/<key>
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user configured state store component yaml. Please refer Dapr State Store configuration structure mentioned above.
key | the key of the desired state
consistency | (optional) read consistency mode, see [state operation options](#optional-behaviors)
metadata | (optional) metadata as query parameters to the state store
`daprPort` | The Dapr port
`storename` | The `metadata.name` field in the user-configured `statestore.yaml` component file. Refer to the [Dapr state store configuration structure](#component-file) mentioned above.
`key` | The key of the desired state
`consistency` | (optional) Read consistency mode; see [state operation options](#optional-behaviors)
`metadata` | (optional) Metadata as query parameters to the state store
The optional request metadata is passed via URL query parameters. For example,
```
@ -149,18 +149,19 @@ GET http://localhost:3500/v1.0/state/myStore/myKey?metadata.contentType=applicat
Code | Description
---- | -----------
200 | Get state successful
204 | Key is not found
400 | State store is missing or misconfigured
500 | Get state failed
`200` | Get state successful
`204` | Key is not found
`400` | State store is missing or misconfigured
`500` | Get state failed
#### Response Headers
Header | Description
--------- | -----------
ETag | ETag of returned value
`ETag` | ETag of returned value
#### Response Body
JSON-encoded value
### Example
@ -177,7 +178,7 @@ curl http://localhost:3500/v1.0/state/starwars/planet?metadata.contentType=appli
}
```
To pass metadata as query parammeter:
To pass metadata as a query parameter:
```
GET http://localhost:3500/v1.0/state/starwars/planet?metadata.partitionKey=mypartitionKey&metadata.contentType=application/json
@ -197,9 +198,9 @@ POST/PUT http://localhost:<daprPort>/v1.0/state/<storename>/bulk
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user configured state store component yaml. Please refer Dapr State Store configuration structure mentioned above.
metadata | (optional) metadata as query parameters to the state store
`daprPort` | The Dapr port
`storename` | The `metadata.name` field in the user-configured `statestore.yaml` component file. Refer to the [Dapr state store configuration structure](#component-file) mentioned above.
`metadata` | (optional) Metadata as query parameters to the state store
The optional request metadata is passed via URL query parameters. For example,
```
@ -214,11 +215,12 @@ POST/PUT http://localhost:3500/v1.0/state/myStore/bulk?metadata.partitionKey=myp
Code | Description
---- | -----------
200 | Get state successful
400 | State store is missing or misconfigured
500 | Get bulk state failed
`200` | Get state successful
`400` | State store is missing or misconfigured
`500` | Get bulk state failed
#### Response Body
An array of JSON-encoded values
### Example
@ -249,6 +251,12 @@ curl http://localhost:3500/v1.0/state/myRedisStore/bulk \
]
```
To pass metadata as a query parameter:
```
POST http://localhost:3500/v1.0/state/myRedisStore/bulk?metadata.partitionKey=mypartitionKey
```
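For reference, the bulk request body names the keys to fetch. The sketch below uses illustrative keys, and `parallelism` limits how many keys are read from the store in parallel:

```shell
curl -X POST http://localhost:3500/v1.0/state/myRedisStore/bulk \
  -H "Content-Type: application/json" \
  -d '{
        "keys": [ "key1", "key2" ],
        "parallelism": 10
      }'
```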
## Delete state
This endpoint lets you delete the state for a specific key.
@ -263,11 +271,11 @@ DELETE http://localhost:<daprPort>/v1.0/state/<storename>/<key>
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user configured state store component yaml. Please refer Dapr State Store configuration structure mentioned above.
key | the key of the desired state
concurrency | (optional) either *first-write* or *last-write*, see [state operation options](#optional-behaviors)
consistency | (optional) either *strong* or *eventual*, see [state operation options](#optional-behaviors)
`daprPort` | The Dapr port
`storename` | The `metadata.name` field in the user-configured `statestore.yaml` component file. Refer to the [Dapr state store configuration structure](#component-file) mentioned above.
`key` | The key of the desired state
`concurrency` | (optional) Either *first-write* or *last-write*; see [state operation options](#optional-behaviors)
`consistency` | (optional) Either *strong* or *eventual*; see [state operation options](#optional-behaviors)
The optional request metadata is passed via URL query parameters. For example,
```
@ -288,11 +296,12 @@ If-Match | (Optional) ETag associated with the key to be deleted
Code | Description
---- | -----------
204 | Delete state successful
400 | State store is missing or misconfigured
500 | Delete state failed
`204` | Delete state successful
`400` | State store is missing or misconfigured
`500` | Delete state failed
#### Response Body
None.
### Example
@ -319,9 +328,9 @@ POST/PUT http://localhost:<daprPort>/v1.0-alpha1/state/<storename>/query
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user configured state store component yaml. Refer to the Dapr state store configuration structure mentioned above.
metadata | (optional) metadata as query parameters to the state store
`daprPort` | The Dapr port
`storename` | The `metadata.name` field in the user-configured `statestore.yaml` component file. Refer to the [Dapr state store configuration structure](#component-file) mentioned above.
`metadata` | (optional) Metadata as query parameters to the state store
The optional request metadata is passed via URL query parameters. For example,
```
@ -334,11 +343,12 @@ POST http://localhost:3500/v1.0-alpha1/state/myStore/query?metadata.contentType=
Code | Description
---- | -----------
200 | State query successful
400 | State store is missing or misconfigured
500 | State query failed
`200` | State query successful
`400` | State store is missing or misconfigured
`500` | State query failed
#### Response Body
An array of JSON-encoded values
### Example
@ -425,19 +435,19 @@ curl -X POST http://localhost:3500/v1.0-alpha1/state/myStore/query?metadata.cont
}
```
To pass metadata as a query parameter:
```
POST http://localhost:3500/v1.0-alpha1/state/myStore/query?metadata.partitionKey=mypartitionKey
```
## State transactions
Persists the changes to the state store as a multi-item transaction.
***Note that this operation is dependant on a using state store component that supports multi-item transactions.***
> This operation depends on a state store component that supports multi-item transactions.
List of state stores that support transactions:
* Redis
* MongoDB
* PostgreSQL
* SQL Server
* Azure CosmosDB
Refer to the [state store component spec]({{< ref "supported-state-stores.md" >}}) for a full, current list of state stores that support transactions.
#### HTTP Request
@ -449,16 +459,16 @@ POST/PUT http://localhost:<daprPort>/v1.0/state/<storename>/transaction
Code | Description
---- | -----------
204 | Request successful
400 | State store is missing or misconfigured or malformed request
500 | Request failed
`204` | Request successful
`400` | State store is missing or misconfigured, or the request is malformed
`500` | Request failed
#### URL Parameters
Parameter | Description
--------- | -----------
daprPort | the Dapr port
storename | ```metadata.name``` field in the user configured state store component yaml. Please refer Dapr State Store configuration structure mentioned above.
`daprPort` | The Dapr port
`storename` | The `metadata.name` field in the user-configured `statestore.yaml` component file. Refer to the [Dapr state store configuration structure](#component-file) mentioned above.
The optional request metadata is passed via URL query parameters. For example,
```
@ -471,19 +481,18 @@ POST http://localhost:3500/v1.0/state/myStore/transaction?metadata.contentType=a
Field | Description
---- | -----------
operations | A JSON array of state operation
metadata | (optional) the metadata for transaction that applies to all operations
`operations` | A JSON array of state operations
`metadata` | (optional) The metadata for the transaction that applies to all operations
Each state operation comprises the following fields:
Field | Description
---- | -----------
key | state key
value | state value, which can be any byte array
etag | (optional) state ETag
metadata | (optional) additional key-value pairs to be passed to the state store
options | (optional) state operation options, see [state operation options](#optional-behaviors)
`key` | State key
`value` | State value, which can be any byte array
`etag` | (optional) State ETag
`metadata` | (optional) Additional key-value pairs to be passed to the state store
`options` | (optional) State operation options; see [state operation options](#optional-behaviors)
#### Examples
@ -512,13 +521,12 @@ curl -X POST http://localhost:3500/v1.0/state/starwars/transaction \
}'
```
## Configuring state store for actors
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently Mongodb, Redis, PostgreSQL, SQL Server, and Azure CosmosDB implement the transactional state store interface.
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. [View which services currently implement the transactional state store interface]({{< ref "supported-state-stores.md" >}}).
To specify which state store to be used for actors, specify value of property `actorStateStore` as true in the metadata section of the state store component yaml file.
Example: Following components yaml will configure redis to be used as the state store for Actors.
Specify which state store to use for actors with a `true` value for the property `actorStateStore` in the metadata section of the `statestore.yaml` component file.
For example, the following component yaml configures Redis to be used as the state store for actors.
```yaml
apiVersion: dapr.io/v1alpha1
@ -550,33 +558,35 @@ A Dapr-compatible state store shall use the following key scheme:
### Concurrency
Dapr uses Optimized Concurrency Control (OCC) with ETags. Dapr makes optional the following requirements on state stores:
Dapr uses Optimistic Concurrency Control (OCC) with ETags. Dapr makes the following requirements optional on state stores:
* An Dapr-compatible state store may support optimistic concurrency control using ETags. When an ETag is associated with an *save* or *delete* request, the store shall allow the update only if the attached ETag matches with the latest ETag in the database.
* When ETag is missing in the write requests, the state store shall handle the requests in a last-write-wins fashion. This is to allow optimizations for high-throughput write scenarios in which data contingency is low or has no negative effects.
* A store shall **always** return ETags when returning states to callers.
* A Dapr-compatible state store may support optimistic concurrency control using ETags. The store allows the update when an ETag:
* Is associated with a *save* or *delete* request.
* Matches the latest ETag in the database.
* When an ETag is missing in a write request, the state store shall handle the request in a *last-write-wins* fashion. This allows optimizations for high-throughput write scenarios, in which data contention is low or has no negative effects.
* A store shall *always* return ETags when returning states to callers.
### Consistency
Dapr allows clients to attach a consistency hint to *get*, *set* and *delete* operation. Dapr support two consistency level: **strong** and **eventual**, which are defined as the follows:
Dapr allows clients to attach a consistency hint to *get*, *set*, and *delete* operations. Dapr supports two consistency levels: **strong** and **eventual**.
#### Eventual Consistency
Dapr assumes data stores are eventually consistent by default. A state store should:
* For read requests, the state store can return data from any of the replicas
* For write request, the state store should asynchronously replicate updates to configured quorum after acknowledging the update request.
* For *read* requests, return data from any of the replicas.
* For *write* requests, asynchronously replicate updates to configured quorum after acknowledging the update request.
#### Strong Consistency
When a strong consistency hint is attached, a state store should:
* For read requests, the state store should return the most up-to-date data consistently across replicas.
* For write/delete requests, the state store should synchronisely replicate updated data to configured quorum before completing the write request.
* For *read* requests, return the most up-to-date data consistently across replicas.
* For *write*/*delete* requests, synchronously replicate updated data to configured quorum before completing the write request.
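For example, the consistency hint travels as a query parameter on *get* and *delete* requests, and inside the `options` object of a *set* request body; the store and key below are illustrative:

```shell
curl "http://localhost:3500/v1.0/state/starwars/planet?consistency=strong"
```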
### Example - Complete options request example
### Example: Complete options request example
The following is an example *set* request with a complete options definition:
The following is an example *set* request with a complete `options` definition:
```shell
curl -X POST http://localhost:3500/v1.0/state/starwars \
@ -594,82 +604,85 @@ curl -X POST http://localhost:3500/v1.0/state/starwars \
]'
```
### Example - Working with ETags
The following is an example which walks through the usage of an ETag when setting/deleting an object in a compatible statestore.
### Example: Working with ETags
First, store an object in a statestore (this sample uses Redis that has been defined as 'statestore'):
The following is an example walk-through of ETag usage when *setting*/*deleting* an object in a compatible state store. This sample defines Redis as `statestore`.
```shell
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "1"
}
]'
```
1. Store an object in a state store:
Get the object to find the ETag that was set automatically by the statestore:
```shell
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "1"
}
]'
```
```shell
curl http://localhost:3500/v1.0/state/statestore/sampleData -v
* Connected to localhost (127.0.0.1) port 3500 (#0)
> GET /v1.0/state/statestore/sampleData HTTP/1.1
> Host: localhost:3500
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: fasthttp
< Date: Sun, 14 Feb 2021 04:51:50 GMT
< Content-Type: application/json
< Content-Length: 3
< Etag: 1
< Traceparent: 00-3452582897d134dc9793a244025256b1-b58d8d773e4d661d-01
<
* Connection #0 to host localhost left intact
"1"* Closing connection 0
```
1. Get the object to find the ETag set automatically by the state store:
The returned ETag here was 1. Sending a new request to update or delete the data with the wrong ETag will return an error (omitting the ETag will allow the request):
```shell
curl http://localhost:3500/v1.0/state/statestore/sampleData -v
* Connected to localhost (127.0.0.1) port 3500 (#0)
> GET /v1.0/state/statestore/sampleData HTTP/1.1
> Host: localhost:3500
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: fasthttp
< Date: Sun, 14 Feb 2021 04:51:50 GMT
< Content-Type: application/json
< Content-Length: 3
< Etag: 1
< Traceparent: 00-3452582897d134dc9793a244025256b1-b58d8d773e4d661d-01
<
* Connection #0 to host localhost left intact
"1"* Closing connection 0
```
```shell
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "2",
"etag": "2"
}
]'
{"errorCode":"ERR_STATE_SAVE","message":"failed saving state in state store statestore: possible etag mismatch. error from state store: ERR Error running script (call to f_83e03ec05d6a3b6fb48483accf5e594597b6058f): @user_script:1: user_script:1: failed to set key nodeapp||sampleData"}
The returned ETag above was 1. If you send a new request to update or delete the data with the wrong ETag, it will return an error. Omitting the ETag will allow the request.
# Delete
curl -X DELETE -H 'If-Match: 5' http://localhost:3500/v1.0/state/statestore/sampleData
{"errorCode":"ERR_STATE_DELETE","message":"failed deleting state with key sampleData: possible etag mismatch. error from state store: ERR Error running script (call to f_9b5da7354cb61e2ca9faff50f6c43b81c73c0b94): @user_script:1: user_script:1: failed to delete node
app||sampleData"}
```
```shell
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "2",
"etag": "2"
}
]'
{"errorCode":"ERR_STATE_SAVE","message":"failed saving state in state store statestore: possible etag mismatch. error from state store: ERR Error running script (call to f_83e03ec05d6a3b6fb48483accf5e594597b6058f): @user_script:1: user_script:1: failed to set key nodeapp||sampleData"}
# Delete
curl -X DELETE -H 'If-Match: 5' http://localhost:3500/v1.0/state/statestore/sampleData
{"errorCode":"ERR_STATE_DELETE","message":"failed deleting state with key sampleData: possible etag mismatch. error from state store: ERR Error running script (call to f_9b5da7354cb61e2ca9faff50f6c43b81c73c0b94): @user_script:1: user_script:1: failed to delete node
app||sampleData"}
```
In order to update or delete the object, simply match the ETag in either the request body (update) or the `If-Match` header (delete). Note, when the state is updated, it receives a new ETag so further updates or deletes will need to use the new ETag.
1. Update or delete the object by simply matching the ETag in either the request body (update) or the `If-Match` header (delete). When the state is updated, it receives a new ETag that future updates or deletes will need to use.
```shell
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "2",
"etag": "1"
}
]'
```shell
# Update
curl -X POST http://localhost:3500/v1.0/state/statestore \
-H "Content-Type: application/json" \
-d '[
{
"key": "sampleData",
"value": "2",
"etag": "1"
}
]'
# Delete
curl -X DELETE -H 'If-Match: 1' http://localhost:3500/v1.0/state/statestore/sampleData
```
# Delete
curl -X DELETE -H 'If-Match: 1' http://localhost:3500/v1.0/state/statestore/sampleData
```
## Next Steps
- [State management overview]({{< ref state-management-overview.md >}})
- [How-To: Save & get state]({{< ref howto-get-save-state.md >}})


@ -22,12 +22,13 @@ This table is meant to help users understand the equivalent options for running
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | Increases the maximum size of the request body, in MB, for the HTTP and gRPC servers to handle uploading of large files. Default is `4` MB |
| `--dapr-http-read-buffer-size` | `--dapr-http-read-buffer-size` | | `dapr.io/http-read-buffer-size` | Increases the maximum size of the HTTP header read buffer, in KB, to handle multi-KB headers. The default is 4 KB. When sending HTTP headers larger than the default 4 KB, set this to a larger value, for example 16 (for 16 KB) |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is `daprio/daprd:latest` |
| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
| `--enable-metrics` | not supported | | configuration spec | Enable prometheus metric (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | Enable profiling |
| `--unix-domain-socket` | `--unix-domain-socket` | `-u` | not supported | On Linux, when communicating with the Dapr sidecar, use unix domain sockets for lower latency and greater throughput compared to TCP ports. Not available on Windows OS |
| `--unix-domain-socket` | `--unix-domain-socket` | `-u` | `dapr.io/unix-domain-socket-path` | On Linux, when communicating with the Dapr sidecar, use unix domain sockets for lower latency and greater throughput compared to TCP ports. Not available on Windows OS |
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs logs in JSON format. Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the log level for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | Enables API logging for the Dapr sidecar |
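As a sketch of how the equivalent options line up in practice, the read-buffer and request-size settings from the table can be passed on the command line as shown below; the app ID, port, and command are placeholders:

```shell
# Via the Dapr CLI
dapr run --app-id myapp --app-port 3000 \
  --dapr-http-max-request-size 16 \
  --dapr-http-read-buffer-size 16 \
  -- node app.js

# Directly on the daprd binary
daprd --app-id myapp --app-port 3000 \
  --dapr-http-max-request-size 16 \
  --dapr-http-read-buffer-size 16
```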


@ -41,7 +41,7 @@ dapr run [flags] [command]
| `--profile-port` | | `7777` | The port for the profile server to listen on |
| `--unix-domain-socket`, `-u` | | | Path to a unix domain socket dir mount. If specified, communication with the Dapr sidecar uses unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows OS |
| `--dapr-http-max-request-size` | | `4` | Max size of request body in MB. |
| `--dapr-http-read-buffer-size` | | `4` | Max size of the HTTP header read buffer in KB. The default is 4 KB. When sending HTTP headers larger than the default 4 KB, set this to a larger value, for example 16 (for 16 KB). |
### Examples
```bash


@ -52,8 +52,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinalMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `""`, `"127.0.0.1:6379"`
## Setup Redis


@ -63,7 +63,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `""`, `"127.0.0.1:6379"`
| maxLenApprox | N | Maximum number of items inside a stream. The old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. Defaults to unlimited. | `"10000"`
## Create a Redis instance


@ -101,16 +101,18 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
```
5. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
In this case, assign the "Key Vault Crypto Officer" role, which has broad access; other more restrictive roles can be used as well, depending on your application.
In this case, assign the "Key Vault Secrets User" role, which has the "Get secrets" permission over Azure Key Vault.
```sh
az role assignment create \
--assignee "${SERVICE_PRINCIPAL_ID}" \
--role "Key Vault Crypto Officer" \
--role "Key Vault Secrets User" \
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
```
### Configure the component
Other less restrictive roles, like "Key Vault Secrets Officer" and "Key Vault Administrator", can be used as well, depending on your application. For more information about Azure built-in roles for Key Vault, see the [Microsoft docs](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
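Optionally, you can confirm the assignment by listing role assignments for the service principal; this check reuses the variables from the previous step:

```sh
az role assignment list \
  --assignee "${SERVICE_PRINCIPAL_ID}" \
  --scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}" \
  --output table
```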
## Configure the component
{{< tabs "Self-Hosted" "Kubernetes">}}


@ -64,8 +64,8 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinalMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`


@ -9,7 +9,7 @@ aliases:
## Component format
To setup RethinkDB state store create a component of type `state.rethinkdb`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
To set up the RethinkDB state store, create a component of type `state.rethinkdb`. See [the how-to guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) to create and apply a state store configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -36,18 +36,17 @@ spec:
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
If you wish to use Redis as an actor store, append the following to the yaml.
If you wish to use RethinkDB as an actor store, append the following to the YAML.
```yaml
- name: actorStateStore
value: "true"
```
RethinkDB state store supports transactions so it can be used to persist Dapr Actor state. By default, the state will be stored in table name `daprstate` in the specified database.
RethinkDB state store supports transactions, so it can be used to persist Dapr Actor state. By default, the state will be stored in a table named `daprstate` in the specified database.
Additionally, if the optional `archive` metadata is set to `true`, the RethinkDB state store will also log state changes with a timestamp in the `daprstate_archive` table on each state change. This allows for time-series analysis of the state managed by Dapr.
@ -81,8 +80,7 @@ open "http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' rethin
{{% /codetab %}}
{{% /codetab %}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})
- Read [the how-to guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components.
- [State management building block]({{< ref state-management >}}).


@ -1 +1 @@
{{- if .Get "short" }}1.6{{ else if .Get "long" }}1.6.0{{ else if .Get "cli" }}1.6.0{{ else }}1.6.0{{ end -}}
{{- if .Get "short" }}1.6{{ else if .Get "long" }}1.6.1{{ else if .Get "cli" }}1.6.0{{ else }}1.6.1{{ end -}}