Merge branch 'v1.11' into issue_3216

@ -18,7 +18,7 @@ jobs:
      - uses: actions/checkout@v2
        with:
          submodules: recursive
          fetch-depth: 0
          fetch-depth: 0
      - name: Setup Docsy
        run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
      - name: Build And Deploy

@ -37,7 +37,7 @@ jobs:
          app_location: "/daprdocs" # App source code path
          api_location: "api" # Api source code path - optional
          output_location: "public" # Built app content directory - optional
          app_build_command: "hugo"
          app_build_command: "git config --global --add safe.directory /github/workspace && hugo"
          ###### End of Repository/Build Configurations ######

  close_pull_request_job:

@ -26,3 +26,6 @@
[submodule "sdkdocs/pluggable-components/dotnet"]
	path = sdkdocs/pluggable-components/dotnet
	url = https://github.com/dapr-sandbox/components-dotnet-sdk
[submodule "sdkdocs/pluggable-components/go"]
	path = sdkdocs/pluggable-components/go
	url = https://github.com/dapr-sandbox/components-go-sdk

@ -71,6 +71,10 @@ id = "G-60C6Q1ETC1"
  source = "../sdkdocs/pluggable-components/dotnet/daprdocs/content/en/dotnet-sdk-docs"
  target = "content/developing-applications/develop-components/pluggable-components/pluggable-components-sdks/pluggable-components-dotnet"
  lang = "en"
[[module.mounts]]
  source = "../sdkdocs/pluggable-components/go/daprdocs/content/en/go-sdk-docs"
  target = "content/developing-applications/develop-components/pluggable-components/pluggable-components-sdks/pluggable-components-go"
  lang = "en"
[[module.mounts]]
  source = "../sdkdocs/dotnet/daprdocs/content/en/dotnet-sdk-contributing"
  target = "content/contributing/sdk-contrib/"

@ -36,7 +36,6 @@ Dapr can be integrated with any developer framework. For example, in the Dapr .N

Dapr is integrated with the following frameworks:

- Logic Apps with Dapr [Workflows](https://github.com/dapr/workflows)
- Functions with Dapr [Azure Functions Extension](https://github.com/dapr/azure-functions-extension)
- Spring Boot Web apps in Java SDK
- ASP.NET Core in .NET SDK

@ -39,11 +39,11 @@ Style and tone conventions should be followed throughout all Dapr documentation

## Diagrams and images

Diagrams and images are invaluable visual aids for documentation pages. Diagrams are kept in a [Dapr Diagrams Deck](https://github.com/dapr/docs/tree/v1.8/daprdocs/static/presentations), which includes guidance on style and icons.
Diagrams and images are invaluable visual aids for documentation pages. Diagrams are kept in a [Dapr Diagrams Deck](https://github.com/dapr/docs/tree/v1.10/daprdocs/static/presentations), which includes guidance on style and icons.

As you create diagrams for your documentation:

- Save them as high-res PNG files into the [images folder](https://github.com/dapr/docs/tree/v1.8/daprdocs/static/images).
- Save them as high-res PNG files into the [images folder](https://github.com/dapr/docs/tree/v1.10/daprdocs/static/images).
- Name your PNG files using the convention of a concept or building block so that they are grouped.
  - For example: `service-invocation-overview.png`.
- For more information on calling out images using shortcode, see the [Images guidance](#images) section below.

@ -2,31 +2,37 @@
type: docs
title: "How-to: Enable and use actor reentrancy in Dapr"
linkTitle: "How-To: Actor reentrancy"
weight: 30
weight: 70
description: Learn more about actor reentrancy
---

## Actor reentrancy
A core tenet of the virtual actor pattern is the single-threaded nature of actor execution. Without reentrancy, the Dapr runtime locks on all actor requests, even those that are in the same call chain. A second request could not start until the first had completed. This means an actor cannot call itself, or have another actor call into it even if it is part of the same chain. Reentrancy solves this by allowing requests from the same chain, or context, to re-enter into an already locked actor. This is especially useful in scenarios where an actor wants to call a method on itself or when actors are used in workflows where other actors are used to perform work, and they then call back onto the coordinating actor. Examples of chains that reentrancy allows are shown below:
A core tenet of the [virtual actor pattern](https://www.microsoft.com/research/project/orleans-virtual-actors/) is the single-threaded nature of actor execution. Without reentrancy, the Dapr runtime locks on all actor requests. A second request wouldn't be able to start until the first had completed. This means an actor cannot call itself, or have another actor call into it, even if it's part of the same call chain.

Reentrancy solves this by allowing requests from the same chain, or context, to re-enter into an already locked actor. This proves useful in scenarios where:
- An actor wants to call a method on itself
- Actors are used in workflows to perform work, then call back onto the coordinating actor.

Examples of chains that reentrancy allows are shown below:

```
Actor A -> Actor A
Actor A -> Actor B -> Actor A
```

With reentrancy, there can be more complex actor calls without sacrificing the single-threaded behavior of virtual actors.
With reentrancy, you can perform more complex actor calls, without sacrificing the single-threaded behavior of virtual actors.

<img src="/images/actor-reentrancy.png" width=1000 height=500 alt="Diagram showing reentrancy for a coordinator workflow actor calling worker actors or an actor calling a method on itself">

The `maxStackDepth` parameter sets a value that controls how many reentrant calls be made to the same actor. By default this is set to 32, which is more than sufficient in most cases.
The `maxStackDepth` parameter sets a value that controls how many reentrant calls can be made to the same actor. By default, this is set to **32**, which is more than sufficient in most cases.

## Enable Actor Reentrancy with Actor Configuration
## Configure the actor runtime to enable reentrancy

The actor that will be reentrant must provide configuration to use reentrancy. This is done by the actor's endpoint for `GET /dapr/config`, similar to other actor configuration elements.
The reentrant actor must provide the appropriate configuration. This is done by the actor's endpoint for `GET /dapr/config`, similar to other actor configuration elements.
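
For reference, a minimal sketch of the JSON such an endpoint can return (the actor type name is illustrative; the `reentrancy` fields match the parameters described on this page):

```json
{
  "entities": ["reentrantActor"],
  "reentrancy": {
    "enabled": true,
    "maxStackDepth": 32
  }
}
```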

{{< tabs Dotnet Python Go >}}
{{< tabs ".NET" JavaScript Python Java Go >}}

{{% codetab %}}
<!--dotnet-->

```csharp
public class Startup
@ -50,6 +56,27 @@ public class Startup
{{% /codetab %}}

{{% codetab %}}
<!--javascript-->

```js
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";

// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
  actor: {
    reentrancy: {
      enabled: true,
      maxStackDepth: 32,
    },
  },
};
```

{{% /codetab %}}

{{% codetab %}}
<!--python-->

```python
from fastapi import FastAPI
from dapr.ext.fastapi import DaprActor
@ -75,6 +102,16 @@ def do_something_reentrant():
```
{{% /codetab %}}

{{% codetab %}}
<!--java-->

```java
// A minimal sketch, assuming the Java SDK's runtime configuration API shown in
// the "Actor runtime configuration" article; this enables reentrancy with the
// default maximum stack depth of 32.
// import io.dapr.actors.runtime.ActorRuntime;

ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(true, 32);
```

{{% /codetab %}}


{{% codetab %}}

Here is a snippet of an actor written in Go providing the reentrancy configuration via the HTTP API. Reentrancy has not yet been included in the Go SDK.

@ -105,10 +142,11 @@ func configHandler(w http.ResponseWriter, r *http.Request) {
}
```

### Handling reentrant requests
### Handle reentrant requests

The key to a reentrant request is the `Dapr-Reentrancy-Id` header. The value of this header is used to match requests to their call chain and allow them to bypass the actor's lock.
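
For example, a subsequent request in the same call chain could forward the header like the following sketch (the actor type, method, and ID value are illustrative; in practice you pass on the value the runtime generated for the first call):

```bash
curl -X POST http://localhost:3500/v1.0/actors/reentrantActor/1/method/reentrantMethod \
  -H "Dapr-Reentrancy-Id: b9b05e00-8b39-42e8-9aad-0d1a4d4a8677"
```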

The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below are the snippets of code from an actor handling this:
The header is generated by the Dapr runtime for any actor request that has a reentrant config specified. Once it is generated, it is used to lock the actor and must be passed to all future requests. Below is an example of an actor handling a reentrant request:

```go
func reentrantCallHandler(w http.ResponseWriter, r *http.Request) {
@ -134,7 +172,19 @@ func reentrantCallHandler(w http.ResponseWriter, r *http.Request) {

{{< /tabs >}}

## Demo

Watch this [video](https://www.youtube.com/watch?v=QADHQ5v-gww&list=PLcip_LgkYwzuF-OV6zKRADoiBvUvGhkao&t=674s) on how to use actor reentrancy.

<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/QADHQ5v-gww?start=674" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

## Next steps

{{< button text="Actors in the Dapr SDKs" page="developing-applications/sdks/#sdk-languages" >}}

## Related links

- [Actors API reference]({{< ref actors_api.md >}})
- [Actors overview]({{< ref actors-overview.md >}})

@ -0,0 +1,102 @@
---
type: docs
title: "Actor runtime features"
linkTitle: "Runtime features"
weight: 20
description: "Learn more about the features and concepts of Actors in Dapr"
aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---

Now that you've learned about the [actor building block]({{< ref "actors-overview.md" >}}) at a high level, let's deep dive into the features and concepts included with actors in Dapr.

## Actor lifetime

Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.

Invocation of actor methods and reminders resets the idle time; for example, a reminder firing keeps the actor active. Actor reminders fire whether an actor is active or inactive; if fired for an inactive actor, the actor is activated first. Actor timers do not reset the idle time, so a timer firing does not keep the actor active. Timers only fire while the actor is active.

The idle timeout and scan interval the Dapr runtime uses to see whether an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types.
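
For example, a host could return these settings from its `GET /dapr/config` endpoint. This is a sketch: the actor type name is illustrative, and the parameters are described in the [actor runtime configuration]({{< ref "actors-runtime-config.md" >}}) article:

```json
{
  "entities": ["MyActorType"],
  "actorIdleTimeout": "1h",
  "actorScanInterval": "30s"
}
```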

This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.

An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage-collected. Using the actor ID again later causes a new actor object to be constructed. An actor's state outlives the object's lifetime, as state is stored in the state provider configured for the Dapr runtime.

## Distribution and failover

To provide scalability and reliability, actor instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.

Actors are distributed across the instances of the actor service, and those instances are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.

### Actor placement service

The Dapr actor runtime manages the distribution scheme and key range settings for you via the actor `Placement` service. When a new instance of a service is created:

1. The sidecar makes a call to the actor service to retrieve registered actor types and configuration settings.
1. The corresponding Dapr runtime registers the actor types it can create.
1. The `Placement` service calculates the partitioning across all the instances for a given actor type.

This partition data table for each actor type is updated and stored in each Dapr instance running in the environment and can change dynamically as new instances of actor services are created and destroyed.

<img src="/images/actors_background_placement_service_registration.png" width=600>

When a client calls an actor with a particular ID (for example, actor ID 123), the Dapr instance for the client hashes the actor type and ID, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor ID. As a result, the same partition (or service instance) is always called for any given actor ID. This is shown in the diagram below.

<img src="/images/actors_background_id_hashing_calling.png" width=600>

This simplifies some choices, but also carries some considerations:

- By default, actors are randomly placed into pods resulting in uniform distribution.
- Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.

{{% alert title="Note" color="primary" %}}
The Dapr actor Placement service is only used for actor placement and therefore is not needed if your services are not using Dapr actors. The Placement service can run in all [hosting environments]({{< ref hosting >}}), including self-hosted and Kubernetes.
{{% /alert %}}

## Actor communication

You can interact with Dapr to invoke the actor method by calling the HTTP/gRPC endpoint.

```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
```

You can provide any data for the actor method in the request body, and the response is returned in the response body, which is the data from the actor call.
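
For example, invoking a method on an actor could look like the following sketch (the actor type, ID, and method name are illustrative):

```bash
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/actor-1/method/GetCount \
  -H "Content-Type: application/json" \
  -d '{}'
```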

Another, and perhaps more convenient, way of interacting with actors is via SDKs. Dapr currently supports actors SDKs in [.NET]({{< ref "dotnet-actors" >}}), [Java]({{< ref "java#actors" >}}), and [Python]({{< ref "python-actor" >}}).

Refer to [Dapr Actor Features]({{< ref howto-actors.md >}}) for more details.

### Concurrency

The Dapr actor runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.

A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.

Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throws an exception to the caller to interrupt possible deadlock situations.

<img src="/images/actors_background_communication.png" width=600>

#### Reentrancy

To allow actors to "re-enter" and invoke methods on themselves, see [Actor Reentrancy]({{<ref actor-reentrancy.md>}}).

### Turn-based access

A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr actor runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.

The Dapr actor runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.

The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.

<img src="/images/actors_background_concurrency.png" width=600>

## Next steps

{{< button text="Timers and reminders >>" page="actors-timers-reminders.md" >}}

## Related links

- [Actors API reference]({{< ref actors_api.md >}})
- [Actors overview]({{< ref actors-overview.md >}})
- [How to: Use virtual actors in Dapr]({{< ref howto-actors.md >}})

@ -8,100 +8,101 @@ aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---

## Introduction
The [actor pattern](https://en.wikipedia.org/wiki/Actor_model) describes actors as the lowest-level "unit of computation". In other words, you write your code in a self-contained unit (called an actor) that receives messages and processes them one at a time, without any kind of concurrency or threading.

While your code processes a message, it can send one or more messages to other actors, or create new actors. An underlying runtime manages how, when and where each actor runs, and also routes messages between actors.

A large number of actors can execute simultaneously, and actors execute independently from each other.

Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the Actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.
## Actors in Dapr

### When to use actors

As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve.

The actor design pattern can be a good fit to a number of distributed systems problems and scenarios, but the first thing you should consider are the constraints of the pattern. Generally speaking, consider the actor pattern to model your problem or scenario if:

* Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
* You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
* Your actor instances won't block callers with unpredictable delays by issuing I/O operations.

## Actors in dapr
Dapr includes a runtime that specifically implements the [Virtual Actor pattern](https://www.microsoft.com/research/project/orleans-virtual-actors/). With Dapr's implementation, you write your Dapr actors according to the actor model, and Dapr leverages the scalability and reliability guarantees that the underlying platform provides.

Every actor is defined as an instance of an actor type, identical to the way an object is an instance of a class. For example, there may be an actor type that implements the functionality of a calculator and there could be many actors of that type that are distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.

<img src="/images/actor_background_game_example.png" width=400>

## Actor lifetime
## Dapr actors vs. Dapr Workflow

Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later.
Dapr actors builds on the state management and service invocation APIs to create stateful, long running objects with identity. [Dapr Workflow]({{< ref workflow-overview.md >}}) and Dapr Actors are related, with workflows building on actors to provide a higher level of abstraction to orchestrate a set of actors, implementing common workflow patterns and managing the lifecycle of actors on your behalf.

Invocation of actor methods and reminders reset the idle time, e.g. reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive, if fired for inactive actor, it will activate the actor first. Actor timers do not reset the idle time, so timer firing will not keep the actor active. Timers only fire while the actor is active.
Dapr actors are designed to provide a way to encapsulate state and behavior within a distributed system. An actor can be activated on demand by a client application. When an actor is activated, it is assigned a unique identity, which allows it to maintain its state across multiple invocations. This makes actors useful for building stateful, scalable, and fault-tolerant distributed applications.

The idle timeout and scan interval Dapr runtime uses to see if an actor can be garbage-collected is configurable. This information can be passed when Dapr runtime calls into the actor service to get supported actor types.
On the other hand, Dapr Workflow provides a way to define and orchestrate complex workflows that involve multiple services and components within a distributed system. Workflows allow you to define a sequence of steps or tasks that need to be executed in a specific order, and can be used to implement business processes, event-driven workflows, and other similar scenarios.

This virtual actor lifetime abstraction carries some caveats as a result of the virtual actor model, and in fact the Dapr Actors implementation deviates at times from this model.
As mentioned above, Dapr Workflow builds on Dapr Actors managing their activation and lifecycle.

An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again, causes a new actor object to be constructed. An actor's state outlives the object's lifetime as state is stored in configured state provider for Dapr runtime.
### When to use Dapr actors

## Distribution and failover
As with any other technology decision, you should decide whether to use actors based on the problem you're trying to solve. For example, if you were building a chat application, you might use Dapr actors to implement the chat rooms and the individual chat sessions between users, as each chat session needs to maintain its own state and be scalable and fault-tolerant.

To provide scalability and reliability, actors instances are distributed throughout the cluster and Dapr automatically migrates them from failed nodes to healthy ones as required.
Generally speaking, consider the actor pattern to model your problem or scenario if:

Actors are distributed across the instances of the actor service, and those instance are distributed across the nodes in a cluster. Each service instance contains a set of actors for a given actor type.
- Your problem space involves a large number (thousands or more) of small, independent, and isolated units of state and logic.
- You want to work with single-threaded objects that do not require significant interaction from external components, including querying state across a set of actors.
- Your actor instances won't block callers with unpredictable delays by issuing I/O operations.

### Actor placement service
The Dapr actor runtime manages distribution scheme and key range settings for you. This is done by the actor `Placement` service. When a new instance of a service is created, the corresponding Dapr runtime registers the actor types it can create and the `Placement` service calculates the partitioning across all the instances for a given actor type. This table of partition information for each actor type is updated and stored in each Dapr instance running in the environment and can change dynamically as new instance of actor services are created and destroyed. This is shown in the diagram below.
### When to use Dapr Workflow

<img src="/images/actors_background_placement_service_registration.png" width=600>
You would use Dapr Workflow when you need to define and orchestrate complex workflows that involve multiple services and components. For example, using the [chat application example earlier]({{< ref "#when-to-use-dapr-actors" >}}), you might use Dapr Workflows to define the overall workflow of the application, such as how new users are registered, how messages are sent and received, and how the application handles errors and exceptions.

When a client calls an actor with a particular id (for example, actor id 123), the Dapr instance for the client hashes the actor type and id, and uses the information to call onto the corresponding Dapr instance that can serve the requests for that particular actor id. As a result, the same partition (or service instance) is always called for any given actor id. This is shown in the diagram below.
[Learn more about Dapr Workflow and how to use workflows in your application.]({{< ref workflow-overview.md >}})

<img src="/images/actors_background_id_hashing_calling.png" width=600>
## Features

This simplifies some choices but also carries some consideration:
### Actor lifetime

* By default, actors are randomly placed into pods resulting in uniform distribution.
* Because actors are randomly placed, it should be expected that actor operations always require network communication, including serialization and deserialization of method call data, incurring latency and overhead.
Since Dapr actors are virtual, they do not need to be explicitly created or destroyed. The Dapr actor runtime:
1. Automatically activates an actor once it receives an initial request for that actor ID.
1. Garbage-collects the in-memory object of unused actors.
1. Maintains knowledge of the actor's existence in case it's reactivated later.

Note: The Dapr actor Placement service is only used for actor placement and therefore is not needed if your services are not using Dapr actors. The Placement service can run in all [hosting environments]({{< ref hosting >}}), including self-hosted and Kubernetes.
An actor's state outlives the object's lifetime, as state is stored in the configured state provider for Dapr runtime.

## Actor communication
[Learn more about actor lifetimes.]({{< ref "actors-features-concepts.md#actor-lifetime" >}})

You can interact with Dapr to invoke the actor method by calling HTTP/gRPC endpoint.
### Distribution and failover

```bash
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/<method/state/timers/reminders>
```
To provide scalability and reliability, Dapr distributes actor instances throughout the cluster and automatically migrates them from failed nodes to healthy ones.

You can provide any data for the actor method in the request body, and the response for the request would be in the response body which is the data from actor call.
[Learn more about Dapr actor placement.]({{< ref "actors-features-concepts.md#actor-placement-service" >}})

Another, and perhaps more convenient, way of interacting with actors is via SDKs. Dapr currently supports actors SDKs in [.NET]({{< ref "dotnet-actors" >}}), [Java]({{< ref "java#actors" >}}), and [Python]({{< ref "python-actor" >}}).
### Actor communication

Refer to [Dapr Actor Features]({{< ref howto-actors.md >}}) for more details.
You can invoke actor methods by calling them over HTTP, as shown in the general example below.

### Concurrency
<img src="/images/actors-calling-method.png" width=900>

The Dapr actor runtime provides a simple turn-based access model for accessing actor methods. This means that no more than one thread can be active inside an actor object's code at any time. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access. It also means systems must be designed with special considerations for the single-threaded access nature of each actor instance.
1. The service calls the actor API on the sidecar.
1. With the cached partitioning information from the placement service, the sidecar determines which actor service instance will host actor ID **3**. The call is forwarded to the appropriate sidecar.
1. The sidecar instance in pod 2 calls the service instance to invoke the actor and execute the actor method.

A single actor instance cannot process more than one request at a time. An actor instance can cause a throughput bottleneck if it is expected to handle concurrent requests.
[Learn more about calling actor methods.]({{< ref "actors-features-concepts.md#actor-communication" >}})

Actors can deadlock on each other if there is a circular request between two actors while an external request is made to one of the actors simultaneously. The Dapr actor runtime automatically times out on actor calls and throw an exception to the caller to interrupt possible deadlock situations.
#### Concurrency

<img src="/images/actors_background_communication.png" width=600>
The Dapr actor runtime provides a simple turn-based access model for accessing actor methods. Turn-based access greatly simplifies concurrent systems as there is no need for synchronization mechanisms for data access.

#### Reentrancy
- [Learn more about actor reentrancy]({{< ref "actor-reentrancy.md" >}})
- [Learn more about the turn-based access model]({{< ref "actors-features-concepts.md#turn-based-access" >}})

To allow actors to "re-enter" and invoke methods on themselves, see [Actor Reentrancy]({{<ref actor-reentrancy.md>}}).
### Actor timers and reminders

### Turn-based access
Actors can schedule periodic work on themselves by registering either timers or reminders.

A turn consists of the complete execution of an actor method in response to a request from other actors or clients, or the complete execution of a timer/reminder callback. Even though these methods and callbacks are asynchronous, the Dapr actor runtime does not interleave them. A turn must be fully finished before a new turn is allowed. In other words, an actor method or timer/reminder callback that is currently executing must be fully finished before a new call to a method or callback is allowed. A method or callback is considered to have finished if the execution has returned from the method or callback and the task returned by the method or callback has finished. It is worth emphasizing that turn-based concurrency is respected even across different methods, timers, and callbacks.
The functionality of timers and reminders is very similar. The main difference is that the Dapr actor runtime does not retain any information about timers after deactivation, while the information about reminders is persisted using the Dapr actor state provider.

The Dapr actor runtime enforces turn-based concurrency by acquiring a per-actor lock at the beginning of a turn and releasing the lock at the end of the turn. Thus, turn-based concurrency is enforced on a per-actor basis and not across actors. Actor methods and timer/reminder callbacks can execute simultaneously on behalf of different actors.
This distinction allows users to trade off between light-weight but stateless timers vs. more resource-demanding but stateful reminders.

The following example illustrates the above concepts. Consider an actor type that implements two asynchronous methods (say, Method1 and Method2), a timer, and a reminder. The diagram below shows an example of a timeline for the execution of these methods and callbacks on behalf of two actors (ActorId1 and ActorId2) that belong to this actor type.
- [Learn more about actor timers.]({{< ref "actors-features-concepts.md#timers" >}})
- [Learn more about actor reminders.]({{< ref "actors-features-concepts.md#reminders" >}})
- [Learn more about timer and reminder error handling and failover.]({{< ref "actors-features-concepts.md#timers-and-reminders-error-handling" >}})

<img src="/images/actors_background_concurrency.png" width=600>
## Next steps

{{< button text="Actors features and concepts >>" page="actors-features-concepts.md" >}}

## Related links

- [Actors API reference]({{< ref actors_api.md >}})
- Refer to the [Dapr SDK documentation and examples]({{< ref "developing-applications/sdks/#sdk-languages" >}}).

@ -0,0 +1,206 @@
---
type: docs
title: "Actor runtime configuration parameters"
linkTitle: "Runtime configuration"
weight: 30
description: Modify the default Dapr actor runtime configuration behavior
---

You can modify the default Dapr actor runtime behavior using the following configuration parameters.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `entities` | The actor types supported by this host. | N/A |
| `actorIdleTimeout` | The timeout before deactivating an idle actor. Checks for timeouts occur every `actorScanInterval` interval. | 60 minutes |
| `actorScanInterval` | The duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than `actorIdleTimeout` will be deactivated. | 30 seconds |
| `drainOngoingCallTimeout` | The duration used when draining rebalanced actors. This specifies the timeout for the current active actor method to finish. If there is no current actor method call, this is ignored. | 60 seconds |
| `drainRebalancedActors` | If true, Dapr will wait for `drainOngoingCallTimeout` duration to allow a current actor call to complete before trying to deactivate an actor. | true |
| `reentrancy` (`ActorReentrancyConfig`) | Configure the reentrancy behavior for an actor. If not provided, reentrancy is disabled. | disabled, false |
| `remindersStoragePartitions` | Configure the number of partitions for actor's reminders. If not provided, all reminders are saved as a single record in actor's state store. | 0 |
| `entitiesConfig` | Configure each actor type individually with an array of configurations. Any entity specified in the individual entity configurations must also be specified in the top level `entities` field. | N/A |
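
Taken together, a host that sets every parameter could return a `GET /dapr/config` response like the following sketch (the actor type names are illustrative; the field names mirror the Go configuration struct in the examples below):

```json
{
  "entities": ["MyActorType", "ReentrantActorType"],
  "actorIdleTimeout": "1h",
  "actorScanInterval": "30s",
  "drainOngoingCallTimeout": "60s",
  "drainRebalancedActors": true,
  "reentrancy": {
    "enabled": true,
    "maxStackDepth": 32
  },
  "remindersStoragePartitions": 7
}
```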

## Examples

{{< tabs ".NET" JavaScript Python Java Go >}}

{{% codetab %}}
```csharp
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // Register actor runtime with DI
    services.AddActors(options =>
    {
        // Register actor types and configure actor settings
        options.Actors.RegisterActor<MyActor>();

        // Configure default settings
        options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
        options.ActorScanInterval = TimeSpan.FromSeconds(30);
        options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(60);
        options.DrainRebalancedActors = true;
        options.RemindersStoragePartitions = 7;
        options.ReentrancyConfig = new() { Enabled = false };

        // Add a configuration for a specific actor type.
        // This actor type must have a matching value in the base level 'entities' field. If it does not, the configuration will be ignored.
        // If there is a matching entity, the values here will be used to overwrite any values specified in the root configuration.
        // In this example, `ReentrantActor` has reentrancy enabled; however, 'MyActor' will not have reentrancy enabled.
        options.Actors.RegisterActor<ReentrantActor>(typeOptions: new()
        {
            ReentrancyConfig = new()
            {
                Enabled = true,
            }
        });
    });

    // Register additional services for use with actors
    services.AddSingleton<BankService>();
}
```
[See the .NET SDK documentation on registering actors]({{< ref "dotnet-actors-usage.md#registering-actors" >}}).

{{% /codetab %}}

{{% codetab %}}

<!--javascript-->

```js
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";

// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
  actor: {
    actorIdleTimeout: "1h",
    actorScanInterval: "30s",
    drainOngoingCallTimeout: "1m",
    drainRebalancedActors: true,
    reentrancy: {
      enabled: true,
      maxStackDepth: 32,
    },
    remindersStoragePartitions: 0,
  },
};

// Use the options when creating DaprServer and DaprClient.

// Note, DaprServer creates a DaprClient internally, which needs to be configured with clientOptions.
const server = new DaprServer(serverHost, serverPort, daprHost, daprPort, clientOptions);

const client = new DaprClient(daprHost, daprPort, CommunicationProtocolEnum.HTTP, clientOptions);
```

[See the documentation on writing actors with the JavaScript SDK]({{< ref "js-actors.md#registering-actors" >}}).

{{% /codetab %}}

{{% codetab %}}

<!--python-->

```python
from datetime import timedelta
from dapr.actor.runtime.config import ActorRuntimeConfig, ActorReentrancyConfig
from dapr.actor.runtime.runtime import ActorRuntime  # import assumed; needed for set_actor_config

ActorRuntime.set_actor_config(
    ActorRuntimeConfig(
        actor_idle_timeout=timedelta(hours=1),
        actor_scan_interval=timedelta(seconds=30),
        drain_ongoing_call_timeout=timedelta(minutes=1),
        drain_rebalanced_actors=True,
        reentrancy=ActorReentrancyConfig(enabled=False),
        reminders_storage_partitions=7
    )
)
```

[See the documentation on running actors with the Python SDK]({{< ref "python-actor.md" >}})

{{% /codetab %}}

{{% codetab %}}

<!--java-->

```java
// import io.dapr.actors.runtime.ActorRuntime;
// import java.time.Duration;

ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
ActorRuntime.getInstance().getConfig().setDrainOngoingCallTimeout(Duration.ofSeconds(60));
ActorRuntime.getInstance().getConfig().setDrainBalancedActors(true);
ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(false, null);
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
```

[See the documentation on writing actors with the Java SDK]({{< ref "java.md#actors" >}}).

{{% /codetab %}}

{{% codetab %}}
<!--go-->

```go
// Imports and example values for the identifiers referenced below are assumed;
// config is the Dapr Go SDK's actor configuration package.
import (
	"encoding/json"
	"net/http"

	"github.com/dapr/go-sdk/actor/config"
)

const (
	defaultActorType   = "basicType"
	reentrantActorType = "reentrantType"
)

var (
	actorIdleTimeout        = "1h"  // example values; adjust as needed
	actorScanInterval       = "30s"
	drainOngoingCallTimeout = "1m"
	drainRebalancedActors   = true
	maxStackDepth           = 32
)

type daprConfig struct {
	Entities                []string                `json:"entities,omitempty"`
	ActorIdleTimeout        string                  `json:"actorIdleTimeout,omitempty"`
	ActorScanInterval       string                  `json:"actorScanInterval,omitempty"`
	DrainOngoingCallTimeout string                  `json:"drainOngoingCallTimeout,omitempty"`
	DrainRebalancedActors   bool                    `json:"drainRebalancedActors,omitempty"`
	Reentrancy              config.ReentrancyConfig `json:"reentrancy,omitempty"`
	EntitiesConfig          []config.EntityConfig   `json:"entitiesConfig,omitempty"`
}

var daprConfigResponse = daprConfig{
	Entities:                []string{defaultActorType, reentrantActorType},
	ActorIdleTimeout:        actorIdleTimeout,
	ActorScanInterval:       actorScanInterval,
	DrainOngoingCallTimeout: drainOngoingCallTimeout,
	DrainRebalancedActors:   drainRebalancedActors,
	Reentrancy:              config.ReentrancyConfig{Enabled: false},
	EntitiesConfig: []config.EntityConfig{
		{
			// Add a configuration for a specific actor type.
			// This actor type must have a matching value in the base level 'entities' field. If it does not, the configuration will be ignored.
			// If there is a matching entity, the values here will be used to overwrite any values specified in the root configuration.
			// In this example, `reentrantActorType` has reentrancy enabled; however, 'defaultActorType' will not have reentrancy enabled.
			Entities: []string{reentrantActorType},
			Reentrancy: config.ReentrancyConfig{
				Enabled:       true,
				MaxStackDepth: &maxStackDepth,
			},
		},
	},
}

func configHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(daprConfigResponse)
}
```

[See an example for using actors with the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/actor).

{{% /codetab %}}

{{< /tabs >}}

## Next steps

{{< button text="Enable actor reminder partitioning >>" page="howto-actors-partitioning.md" >}}

## Related links

- Refer to the [Dapr SDK documentation and examples]({{< ref "developing-applications/sdks/#sdk-languages" >}}).
- [Actors API reference]({{< ref actors_api.md >}})
- [Actors overview]({{< ref actors-overview.md >}})

@ -0,0 +1,156 @@
---
type: docs
title: "Actors timers and reminders"
linkTitle: "Timers and reminders"
weight: 40
description: "Setting timers and reminders and performing error handling for your actors"
aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---

Actors can schedule periodic work on themselves by registering either timers or reminders.

The functionality of timers and reminders is very similar. The main difference is that the Dapr actor runtime does not retain any information about timers after deactivation, while the information about reminders is persisted using the Dapr actor state provider.

This distinction allows users to trade off between light-weight but stateless timers vs. more resource-demanding but stateful reminders.

The scheduling configuration of timers and reminders is identical, as summarized below:

---
`dueTime` is an optional parameter that sets the time at which, or the time interval after which, the callback is invoked for the first time. If `dueTime` is omitted, the callback is invoked immediately after timer/reminder registration.

Supported formats:
- RFC3339 date format, e.g. `2020-10-02T15:00:00Z`
- time.Duration format, e.g. `2h30m`
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`

---
`period` is an optional parameter that sets the time interval between two consecutive callback invocations. When specified in `ISO 8601-1 duration` format, you can also configure the number of repetitions in order to limit the total number of callback invocations.
If `period` is omitted, the callback will be invoked only once.

Supported formats:
- time.Duration format, e.g. `2h30m`
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`, `R5/PT1M30S`

---
`ttl` is an optional parameter that sets the time at which, or the time interval after which, the timer/reminder expires and is deleted. If `ttl` is omitted, no restrictions are applied.

Supported formats:
- RFC3339 date format, e.g. `2020-10-02T15:00:00Z`
- time.Duration format, e.g. `2h30m`
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`

---
The actor runtime validates correctness of the scheduling configuration and returns an error for invalid input.

When you specify both the number of repetitions in `period` as well as `ttl`, the timer/reminder will be stopped when either condition is met.

## Actor timers

You can register a callback on an actor to be executed based on a timer.

The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.

The Dapr actor runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.

All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actor runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.

You can create a timer for an actor by making the HTTP/gRPC request to Dapr shown below, or via the Dapr SDK.

```md
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```

### Examples

The timer parameters are specified in the request body.

The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
```json
{
  "dueTime":"0h0m9s0ms",
  "period":"0h0m3s0ms"
}
```
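
Combining the endpoint and the request body above, registering this timer with curl could look like the following sketch (the actor type, ID, and timer name are illustrative):

```bash
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/actor-1/timers/myTimer \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m9s0ms", "period": "0h0m3s0ms" }'
```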

The following request body configures a timer with a `period` of 3 seconds (in ISO 8601 duration format). It also limits the number of invocations to 10. This means it will fire 10 times: first, immediately after registration, then every 3 seconds after that.
```json
{
  "period":"R10/PT3S"
}
```

The following request body configures a timer with a `period` of 3 seconds (in ISO 8601 duration format) and a `ttl` of 20 seconds. This means it fires immediately after registration, then every 3 seconds after that for the duration of 20 seconds.
```json
{
  "period":"PT3S",
  "ttl":"20s"
}
```

The following request body configures a timer with a `dueTime` of 10 seconds, a `period` of 3 seconds, and a `ttl` of 10 seconds. It also limits the number of invocations to 4. This means it will first fire after 10 seconds, then every 3 seconds after that for the duration of 10 seconds, but no more than 4 times in total.
```json
{
  "dueTime":"10s",
  "period":"R4/PT3S",
  "ttl":"10s"
}
```

You can remove the actor timer by calling:

```md
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
```

Refer to the [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.

## Actor reminders

Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.

You can create a persistent reminder for an actor by making the HTTP/gRPC request to Dapr shown below, or via the Dapr SDK.

```md
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
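
For example, registering a reminder that first fires after 5 seconds and then repeats every minute could look like the following sketch (the actor type, ID, and reminder name are illustrative):

```bash
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/actor-1/reminders/myReminder \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "5s", "period": "1m" }'
```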

The request structure for reminders is identical to that of timers. Refer to the [actor timers examples]({{< ref "#actor-timers" >}}).

### Retrieve actor reminder

You can retrieve the actor reminder by calling:

```md
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

### Remove the actor reminder

You can remove the actor reminder by calling:

```md
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```

Refer to the [api spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.

## Error handling

When an actor's method completes successfully, the runtime will continue to invoke the method at the specified timer or reminder schedule. However, if the method throws an exception, the runtime catches it and logs the error message in the Dapr sidecar logs, without retrying.

To allow actors to recover from failures and retry after a crash or restart, you can persist an actor's state by configuring a state store, like Redis or Azure Cosmos DB.

If an invocation of the method fails, the timer is not removed. Timers are only removed when:
- The sidecar crashes
- The executions run out
- You delete it explicitly

## Next steps

{{< button text="Configure actor runtime behavior >>" page="actors-runtime-config.md" >}}

## Related links

- [Actors API reference]({{< ref actors_api.md >}})
- [Actors overview]({{< ref actors-overview.md >}})

@ -0,0 +1,196 @@
---
type: docs
title: "How to: Enable partitioning of actor reminders"
linkTitle: "How to: Partition reminders"
weight: 50
description: "Enable actor reminders partitioning for your application"
aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---

[Actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) are persisted and continue to be triggered after sidecar restarts. Applications with multiple reminders registered can experience the following issues:

- Low throughput on reminders registration and de-registration
- Limited number of reminders registered based on the single record size limit on the state store

To sidestep these issues, applications can enable partitioning of actor reminders so that the data is distributed across multiple keys in the state store.

1. A metadata record in `actors\|\|<actor type>\|\|metadata` is used to store the persisted configuration for a given actor type.
1. Multiple records store subsets of the reminders for the same actor type.

| Key         | Value       |
| ----------- | ----------- |
| `actors\|\|<actor type>\|\|metadata` | `{ "id": <actor metadata identifier>, "actorRemindersMetadata": { "partitionCount": <number of partitions for reminders> } }` |
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|1` | `[ <reminder 1-1>, <reminder 1-2>, ... , <reminder 1-n> ]` |
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|2` | `[ <reminder 2-1>, <reminder 2-2>, ... , <reminder 2-m> ]` |

If you need to change the number of partitions, Dapr's sidecar will automatically redistribute the set of reminders.

## Configure the actor runtime to partition actor reminders

Similar to other actor configuration elements, the actor runtime provides the appropriate configuration to partition actor reminders via the actor's endpoint for `GET /dapr/config`. Select your preferred language for an actor runtime configuration example.

{{< tabs ".NET" JavaScript Python Java Go >}}

{{% codetab %}}

<!--dotnet-->

```csharp
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // Register actor runtime with DI
    services.AddActors(options =>
    {
        // Register actor types and configure actor settings
        options.Actors.RegisterActor<MyActor>();

        // Configure default settings
        options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
        options.ActorScanInterval = TimeSpan.FromSeconds(30);
        options.RemindersStoragePartitions = 7;
    });

    // Register additional services for use with actors
    services.AddSingleton<BankService>();
}
```

[See the .NET SDK documentation on registering actors]({{< ref "dotnet-actors-usage.md#registering-actors" >}}).

{{% /codetab %}}

{{% codetab %}}
<!--javascript-->

```js
import { CommunicationProtocolEnum, DaprClient, DaprServer } from "@dapr/dapr";

// Configure the actor runtime with the DaprClientOptions.
const clientOptions = {
  actor: {
    remindersStoragePartitions: 0,
  },
};

// Assumes an actor proxy builder and actor class registered elsewhere, e.g.:
// const builder = new ActorProxyBuilder(MyActorImpl, client);
const actor = builder.build(new ActorId("my-actor"));

// Register a reminder, it has a default callback: `receiveReminder`
await actor.registerActorReminder(
  "reminder-id", // Unique name of the reminder.
  Temporal.Duration.from({ seconds: 2 }), // DueTime
  Temporal.Duration.from({ seconds: 1 }), // Period
  Temporal.Duration.from({ seconds: 1 }), // TTL
  100, // State to be sent to reminder callback.
);

// Delete the reminder
await actor.unregisterActorReminder("reminder-id");
```

[See the documentation on writing actors with the JavaScript SDK]({{< ref "js-actors.md#registering-actors" >}}).

{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
<!--python-->
|
||||
|
||||
```python
|
||||
from datetime import timedelta

from dapr.actor.runtime.config import ActorRuntimeConfig
from dapr.actor.runtime.runtime import ActorRuntime
|
||||
|
||||
ActorRuntime.set_actor_config(
|
||||
ActorRuntimeConfig(
|
||||
actor_idle_timeout=timedelta(hours=1),
|
||||
actor_scan_interval=timedelta(seconds=30),
|
||||
remindersStoragePartitions=7
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
[See the documentation on running actors with the Python SDK]({{< ref "python-actor.md" >}})
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
<!--java-->
|
||||
|
||||
```java
|
||||
// import io.dapr.actors.runtime.ActorRuntime;
|
||||
// import java.time.Duration;
|
||||
|
||||
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
|
||||
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
|
||||
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
|
||||
```
|
||||
|
||||
[See the documentation on writing actors with the Java SDK]({{< ref "java.md#actors" >}}).
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
<!--go-->
|
||||
|
||||
```go
|
||||
type daprConfig struct {
|
||||
Entities []string `json:"entities,omitempty"`
|
||||
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
|
||||
ActorScanInterval string `json:"actorScanInterval,omitempty"`
|
||||
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
|
||||
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
|
||||
RemindersStoragePartitions int `json:"remindersStoragePartitions,omitempty"`
|
||||
}
|
||||
|
||||
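// actorIdleTimeout, actorScanInterval, drainOngoingCallTimeout, and
// drainRebalancedActors are assumed to be variables defined elsewhere in the
// app; the final value (7) sets remindersStoragePartitions.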
var daprConfigResponse = daprConfig{
|
||||
[]string{defaultActorType},
|
||||
actorIdleTimeout,
|
||||
actorScanInterval,
|
||||
drainOngoingCallTimeout,
|
||||
drainRebalancedActors,
|
||||
7,
|
||||
}
|
||||
|
||||
func configHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
json.NewEncoder(w).Encode(daprConfigResponse)
|
||||
}
|
||||
```
|
||||
|
||||
[See an example for using actors with the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/actor).
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
The following is an example of a valid configuration for reminder partitioning:
|
||||
|
||||
```json
|
||||
{
|
||||
"entities": [ "MyActorType", "AnotherActorType" ],
|
||||
"remindersStoragePartitions": 7
|
||||
}
|
||||
```
|
||||
|
||||
## Handle configuration changes
|
||||
|
||||
To configure actor reminders partitioning, Dapr persists the actor type metadata in the actor's state store. This allows the configuration changes to be applied globally, not just in a single sidecar instance.
|
||||
|
||||
In addition, **you can only increase the number of partitions**, not decrease. This allows Dapr to automatically redistribute the data on a rolling restart, where one or more partition configurations might be active.
|
||||
|
||||
## Demo
|
||||
|
||||
Watch [this video for a demo of actor reminder partitioning](https://youtu.be/ZwFOEUYe1WA?t=1493):
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/ZwFOEUYe1WA?start=1495" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
## Next steps
|
||||
|
||||
{{< button text="Interact with virtual actors >>" page="howto-actors.md" >}}
|
||||
|
||||
## Related links
|
||||
|
||||
- [Actors API reference]({{< ref actors_api.md >}})
|
||||
- [Actors overview]({{< ref actors-overview.md >}})
|
|
@ -1,14 +1,14 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-to: Use virtual actors in Dapr"
|
||||
linkTitle: "How-To: Virtual actors"
|
||||
weight: 20
|
||||
description: Learn more about the actor pattern
|
||||
title: "How-to: Interact with virtual actors using scripting"
|
||||
linkTitle: "How-To: Interact with virtual actors"
|
||||
weight: 60
|
||||
description: Invoke the actor method for state management
|
||||
---
|
||||
|
||||
The Dapr actor runtime provides support for [virtual actors]({{< ref actors-overview.md >}}) through following capabilities:
|
||||
Learn how to use virtual actors by calling HTTP/gRPC endpoints.
|
||||
|
||||
## Actor method invocation
|
||||
## Invoke the actor method
|
||||
|
||||
You can interact with Dapr to invoke the actor method by calling the HTTP/gRPC endpoint.
|
||||
|
||||
|
@ -16,424 +16,28 @@ You can interact with Dapr to invoke the actor method by calling HTTP/gRPC endpo
|
|||
POST/GET/PUT/DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/method/<method>
|
||||
```
|
||||
|
||||
You can provide any data for the actor method in the request body and the response for the request is in response body which is data from actor method call.
|
||||
Provide data for the actor method in the request body. The response for the request, which is data from the actor method call, is in the response body.
|
||||
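For example, the following invokes the `shoot` method on an actor of type `stormtrooper` with ID `50` (the actor type and method name are illustrative):

```bash
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/method/shoot \
  -H "Content-Type: application/json" \
  -d '{}'
```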
|
||||
Refer [api spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
|
||||
Refer [to the Actors API spec]({{< ref "actors_api.md#invoke-actor-method" >}}) for more details.
|
||||
|
||||
Alternatively, you can use the Dapr SDK in [.NET]({{< ref "dotnet-actors" >}}), [Java]({{< ref "java#actors" >}}), or [Python]({{< ref "python-actor" >}}).
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Alternatively, you can use [Dapr SDKs to use actors]({{< ref "developing-applications/sdks/#sdk-languages" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Actor state management
|
||||
## Save state with actors
|
||||
|
||||
Actors can save state reliably using state management capability.
|
||||
You can interact with Dapr through HTTP/gRPC endpoints for state management.
|
||||
You can interact with Dapr via HTTP/gRPC endpoints to save state reliably using the Dapr actor state management capability.
|
||||
|
||||
To use actors, your state store must support multi-item transactions. This means your state store [component](https://github.com/dapr/components-contrib/tree/master/state) must implement the [TransactionalStore](https://github.com/dapr/components-contrib/blob/master/state/transactional_store.go) interface. The list of components that support transactions/actors can be found here: [supported state stores]({{< ref supported-state-stores.md >}}). Only a single state store component can be used as the statestore for all actors.
|
||||
To use actors, your state store must support multi-item transactions. This means your state store component must implement the `TransactionalStore` interface.
|
||||
|
||||
## Actor timers and reminders
|
||||
[See the list of components that support transactions/actors]({{< ref supported-state-stores.md >}}). Only a single state store component can be used as the state store for all actors.
|
||||
|
||||
Actors can schedule periodic work on themselves by registering either timers or reminders.
|
||||
## Next steps
|
||||
|
||||
The functionality of timers and reminders is very similar. The main difference is that Dapr actor runtime is not retaining any information about timers after deactivation, while persisting the information about reminders using Dapr actor state provider.
|
||||
{{< button text="Actor reentrancy >>" page="actor-reentrancy.md" >}}
|
||||
|
||||
This distinction allows users to trade off between light-weight but stateless timers vs. more resource-demanding but stateful reminders.
|
||||
## Related links
|
||||
|
||||
The scheduling configuration of timers and reminders is identical, as summarized below:
|
||||
|
||||
---
|
||||
`dueTime` is an optional parameter that sets the time at which, or the time interval after which, the callback is invoked for the first time. If `dueTime` is omitted, the callback is invoked immediately after timer/reminder registration.
|
||||
|
||||
Supported formats:
|
||||
- RFC3339 date format, e.g. `2020-10-02T15:00:00Z`
|
||||
- time.Duration format, e.g. `2h30m`
|
||||
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`
|
||||
|
||||
---
|
||||
`period` is an optional parameter that sets the time interval between two consecutive callback invocations. When specified in the `ISO 8601-1 duration` format, you can also configure the number of repetitions in order to limit the total number of callback invocations.
|
||||
If `period` is omitted, the callback will be invoked only once.
|
||||
|
||||
Supported formats:
|
||||
- time.Duration format, e.g. `2h30m`
|
||||
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`, `R5/PT1M30S`
|
||||
|
||||
---
|
||||
`ttl` is an optional parameter that sets the time at which, or the time interval after which, the timer/reminder expires and is deleted. If `ttl` is omitted, no restrictions are applied.
|
||||
|
||||
Supported formats:
|
||||
- RFC3339 date format, e.g. `2020-10-02T15:00:00Z`
|
||||
- time.Duration format, e.g. `2h30m`
|
||||
- [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) format, e.g. `PT2H30M`
|
||||
|
||||
---
|
||||
The actor runtime validates the correctness of the scheduling configuration and returns an error on invalid input.
|
||||
|
||||
When you specify both the number of repetitions in `period` as well as `ttl`, the timer/reminder will be stopped when either condition is met.
|
||||
|
||||
### Actor timers
|
||||
|
||||
You can register a callback on an actor to be executed based on a timer.
|
||||
|
||||
The Dapr actor runtime ensures that the callback methods respect the turn-based concurrency guarantees. This means that no other actor methods or timer/reminder callbacks will be in progress until this callback completes execution.
|
||||
|
||||
The Dapr actor runtime saves changes made to the actor's state when the callback finishes. If an error occurs in saving the state, that actor object is deactivated and a new instance will be activated.
|
||||
|
||||
All timers are stopped when the actor is deactivated as part of garbage collection. No timer callbacks are invoked after that. Also, the Dapr actor runtime does not retain any information about the timers that were running before deactivation. It is up to the actor to register any timers that it needs when it is reactivated in the future.
|
||||
|
||||
You can create a timer for an actor by making an HTTP/gRPC request to Dapr, as shown below, or via the Dapr SDK.
|
||||
|
||||
```md
|
||||
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
|
||||
```
|
||||
|
||||
**Examples**
|
||||
|
||||
The timer parameters are specified in the request body.
|
||||
|
||||
The following request body configures a timer with a `dueTime` of 9 seconds and a `period` of 3 seconds. This means it will first fire after 9 seconds, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m9s0ms",
|
||||
"period":"0h0m3s0ms"
|
||||
}
|
||||
```
|
||||
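For example, you could register that timer with curl (the actor type, ID, and timer name are illustrative):

```bash
curl -X POST http://localhost:3500/v1.0/actors/stormtrooper/50/timers/checkRebellion \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m9s0ms", "period": "0h0m3s0ms" }'
```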
|
||||
The following request body configures a timer with a `period` of 3 seconds (in ISO 8601 duration format). It also limits the number of invocations to 10. This means it will fire 10 times: first, immediately after registration, then every 3 seconds after that.
|
||||
```json
|
||||
{
|
||||
"period":"R10/PT3S",
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a timer with a `period` of 3 seconds (in ISO 8601 duration format) and a `ttl` of 20 seconds. This means it fires immediately after registration, then every 3 seconds after that for the duration of 20 seconds.
|
||||
```json
|
||||
{
|
||||
"period":"PT3S",
|
||||
"ttl":"20s"
|
||||
}
|
||||
```
|
||||
|
||||
The following request body configures a timer with a `dueTime` of 10 seconds, a `period` of 3 seconds, and a `ttl` of 10 seconds. It also limits the number of invocations to 4. This means it will first fire after 10 seconds, then every 3 seconds after that for the duration of 10 seconds, but no more than 4 times in total.
|
||||
```json
|
||||
{
|
||||
"dueTime":"10s",
|
||||
"period":"R4/PT3S",
|
||||
"ttl":"10s"
|
||||
}
|
||||
```
|
||||
|
||||
You can remove the actor timer by calling
|
||||
|
||||
```md
|
||||
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/timers/<name>
|
||||
```
|
||||
|
||||
Refer to the [Actors API spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
|
||||
|
||||
### Actor reminders
|
||||
|
||||
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
|
||||
|
||||
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr, as shown below, or via the Dapr SDK.
|
||||
|
||||
```md
|
||||
POST/PUT http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
The request structure for reminders is identical to that of timers. Please refer to the [actor timers examples]({{< ref "#actor-timers" >}}).
|
||||
|
||||
#### Retrieve actor reminder
|
||||
|
||||
You can retrieve the actor reminder by calling
|
||||
|
||||
```md
|
||||
GET http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
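The response body contains the reminder's configuration, for example (values are illustrative):

```json
{
  "dueTime": "1m",
  "period": "20s",
  "data": "0"
}
```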
|
||||
#### Remove the actor reminder
|
||||
|
||||
You can remove the actor reminder by calling
|
||||
|
||||
```md
|
||||
DELETE http://localhost:3500/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
Refer to the [Actors API spec]({{< ref "actors_api.md#invoke-reminder" >}}) for more details.
|
||||
|
||||
## Actor runtime configuration
|
||||
|
||||
You can configure the Dapr actor runtime to modify the default runtime behavior.
|
||||
|
||||
### Configuration parameters
|
||||
- `entities` - The actor types supported by this host.
|
||||
- `actorIdleTimeout` - The timeout before deactivating an idle actor. Checks for timeouts occur every `actorScanInterval` interval. **Default: 60 minutes**
|
||||
- `actorScanInterval` - The duration which specifies how often to scan for actors to deactivate idle actors. Actors that have been idle longer than `actorIdleTimeout` will be deactivated. **Default: 30 seconds**
|
||||
- `drainOngoingCallTimeout` - The duration to wait when draining rebalanced actors. This specifies the timeout for the current active actor method to finish. If there is no current actor method call, this is ignored. **Default: 60 seconds**
|
||||
- `drainRebalancedActors` - If true, Dapr will wait for `drainOngoingCallTimeout` duration to allow a current actor call to complete before trying to deactivate an actor. **Default: true**
|
||||
- `reentrancy` (ActorReentrancyConfig) - Configure the reentrancy behavior for an actor. If not provided, reentrancy is disabled. **Default: disabled**
|
||||
- `remindersStoragePartitions` - Configure the number of partitions for actor's reminders. If not provided, all reminders are saved as a single record in actor's state store. **Default: 0**
|
||||
- `entitiesConfig` - Configure each actor type individually with an array of configurations. Any entity specified in the individual entity configurations must also be specified in the top level `entities` field. **Default: None**
|
||||
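Putting these parameters together, a `GET /dapr/config` response might look like the following (values are illustrative):

```json
{
  "entities": ["MyActor", "ReentrantActor"],
  "actorIdleTimeout": "1h",
  "actorScanInterval": "30s",
  "drainOngoingCallTimeout": "60s",
  "drainRebalancedActors": true,
  "reentrancy": { "enabled": false },
  "remindersStoragePartitions": 7,
  "entitiesConfig": [
    {
      "entities": ["ReentrantActor"],
      "reentrancy": { "enabled": true, "maxStackDepth": 32 }
    }
  ]
}
```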
|
||||
{{< tabs Java Dotnet Python Go >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```java
|
||||
// import io.dapr.actors.runtime.ActorRuntime;
|
||||
// import java.time.Duration;
|
||||
|
||||
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
|
||||
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
|
||||
ActorRuntime.getInstance().getConfig().setDrainOngoingCallTimeout(Duration.ofSeconds(60));
|
||||
ActorRuntime.getInstance().getConfig().setDrainBalancedActors(true);
|
||||
ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(false, null);
|
||||
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
|
||||
```
|
||||
|
||||
See [this example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
// In Startup.cs
|
||||
public void ConfigureServices(IServiceCollection services)
|
||||
{
|
||||
// Register actor runtime with DI
|
||||
services.AddActors(options =>
|
||||
{
|
||||
// Register actor types and configure actor settings
|
||||
options.Actors.RegisterActor<MyActor>();
|
||||
|
||||
// Configure default settings
|
||||
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
|
||||
options.ActorScanInterval = TimeSpan.FromSeconds(30);
|
||||
options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(60);
|
||||
options.DrainRebalancedActors = true;
|
||||
options.RemindersStoragePartitions = 7;
|
||||
options.ReentrancyConfig = new() { Enabled = false };
|
||||
|
||||
// Add a configuration for a specific actor type.
|
||||
// This actor type must have a matching value in the base level 'entities' field. If it does not, the configuration will be ignored.
|
||||
// If there is a matching entity, the values here will be used to overwrite any values specified in the root configuration.
|
||||
// In this example, `ReentrantActor` has reentrancy enabled; however, 'MyActor' will not have reentrancy enabled.
|
||||
options.Actors.RegisterActor<ReentrantActor>(typeOptions: new()
|
||||
{
|
||||
ReentrancyConfig = new()
|
||||
{
|
||||
Enabled = true,
|
||||
}
|
||||
});
|
||||
});
|
||||
|
||||
// Register additional services for use with actors
|
||||
services.AddSingleton<BankService>();
|
||||
}
|
||||
```
|
||||
See the .NET SDK [documentation](https://github.com/dapr/dotnet-sdk/blob/master/daprdocs/content/en/dotnet-sdk-docs/dotnet-actors/dotnet-actors-usage.md#registering-actors).
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
from datetime import timedelta
|
||||
from dapr.actor.runtime.config import ActorRuntimeConfig, ActorReentrancyConfig
from dapr.actor.runtime.runtime import ActorRuntime
|
||||
|
||||
ActorRuntime.set_actor_config(
|
||||
ActorRuntimeConfig(
|
||||
actor_idle_timeout=timedelta(hours=1),
|
||||
actor_scan_interval=timedelta(seconds=30),
|
||||
drain_ongoing_call_timeout=timedelta(minutes=1),
|
||||
drain_rebalanced_actors=True,
|
||||
reentrancy=ActorReentrancyConfig(enabled=False),
|
||||
remindersStoragePartitions=7
|
||||
)
|
||||
)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```go
|
||||
const (
|
||||
defaultActorType = "basicType"
|
||||
reentrantActorType = "reentrantType"
|
||||
)
|
||||
|
||||
type daprConfig struct {
|
||||
Entities []string `json:"entities,omitempty"`
|
||||
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
|
||||
ActorScanInterval string `json:"actorScanInterval,omitempty"`
|
||||
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
|
||||
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
|
||||
Reentrancy config.ReentrancyConfig `json:"reentrancy,omitempty"`
|
||||
EntitiesConfig []config.EntityConfig `json:"entitiesConfig,omitempty"`
|
||||
}
|
||||
|
||||
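// actorIdleTimeout, actorScanInterval, drainOngoingCallTimeout,
// drainRebalancedActors, and maxStackDepth are assumed to be variables
// defined elsewhere in the app (e.g. "1h", "30s", "60s", true, and 32).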
var daprConfigResponse = daprConfig{
|
||||
Entities: []string{defaultActorType, reentrantActorType},
|
||||
ActorIdleTimeout: actorIdleTimeout,
|
||||
ActorScanInterval: actorScanInterval,
|
||||
DrainOngoingCallTimeout: drainOngoingCallTimeout,
|
||||
DrainRebalancedActors: drainRebalancedActors,
|
||||
Reentrancy: config.ReentrancyConfig{Enabled: false},
|
||||
EntitiesConfig: []config.EntityConfig{
|
||||
{
|
||||
// Add a configuration for a specific actor type.
|
||||
// This actor type must have a matching value in the base level 'entities' field. If it does not, the configuration will be ignored.
|
||||
// If there is a matching entity, the values here will be used to overwrite any values specified in the root configuration.
|
||||
// In this example, `reentrantActorType` has reentrancy enabled; however, 'defaultActorType' will not have reentrancy enabled.
|
||||
Entities: []string{reentrantActorType},
|
||||
Reentrancy: config.ReentrancyConfig{
|
||||
Enabled: true,
|
||||
MaxStackDepth: &maxStackDepth,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
func configHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
json.NewEncoder(w).Encode(daprConfigResponse)
|
||||
}
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
Refer to the documentation and examples of the [Dapr SDKs]({{< ref "developing-applications/sdks/#sdk-languages" >}}) for more details.
|
||||
|
||||
## Partitioning reminders
|
||||
|
||||
Actor reminders are persisted and continue to be triggered after sidecar restarts. Applications with multiple reminders registered can experience the following issues:
|
||||
|
||||
- Low throughput on reminders registration and de-registration
|
||||
- A limit on the total number of reminders that can be registered, due to the single record size limit on the state store
|
||||
|
||||
Applications can enable partitioning of actor reminders, distributing the data across multiple keys in the state store.
|
||||
|
||||
1. A metadata record in `actors\|\|<actor type>\|\|metadata` is used to store persisted configuration for a given actor type.
|
||||
1. Multiple records store subsets of the reminders for the same actor type.
|
||||
|
||||
| Key | Value |
|
||||
| ----------- | ----------- |
|
||||
| `actors\|\|<actor type>\|\|metadata` | `{ "id": <actor metadata identifier>, "actorRemindersMetadata": { "partitionCount": <number of partitions for reminders> } }` |
|
||||
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|1` | `[ <reminder 1-1>, <reminder 1-2>, ... , <reminder 1-n> ]` |
|
||||
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|2` | `[ <reminder 1-1>, <reminder 1-2>, ... , <reminder 1-m> ]` |
|
||||
| ... | ... |
|
||||
|
||||
If you need to change the number of partitions, Dapr's sidecar will automatically redistribute the reminders' set.
|
||||
|
||||
### Enabling actor reminders partitioning
|
||||
|
||||
#### Actor runtime configuration for actor reminders partitioning
|
||||
|
||||
Similar to other actor configuration elements, the actor runtime provides the appropriate configuration to partition actor reminders via the actor's endpoint for `GET /dapr/config`.
|
||||
|
||||
{{< tabs Java Dotnet Python Go >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```java
|
||||
// import io.dapr.actors.runtime.ActorRuntime;
|
||||
// import java.time.Duration;
|
||||
|
||||
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
|
||||
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
|
||||
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
|
||||
```
|
||||
|
||||
For more information, see [the Java actors example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
// In Startup.cs
|
||||
public void ConfigureServices(IServiceCollection services)
|
||||
{
|
||||
// Register actor runtime with DI
|
||||
services.AddActors(options =>
|
||||
{
|
||||
// Register actor types and configure actor settings
|
||||
options.Actors.RegisterActor<MyActor>();
|
||||
|
||||
// Configure default settings
|
||||
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
|
||||
options.ActorScanInterval = TimeSpan.FromSeconds(30);
|
||||
options.RemindersStoragePartitions = 7;
|
||||
});
|
||||
|
||||
// Register additional services for use with actors
|
||||
services.AddSingleton<BankService>();
|
||||
}
|
||||
```
|
||||
|
||||
See the .NET SDK [documentation for registering actors](https://github.com/dapr/dotnet-sdk/blob/master/daprdocs/content/en/dotnet-sdk-docs/dotnet-actors/dotnet-actors-usage.md#registering-actors).
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
from datetime import timedelta

from dapr.actor.runtime.config import ActorRuntimeConfig
from dapr.actor.runtime.runtime import ActorRuntime
|
||||
|
||||
ActorRuntime.set_actor_config(
|
||||
ActorRuntimeConfig(
|
||||
actor_idle_timeout=timedelta(hours=1),
|
||||
actor_scan_interval=timedelta(seconds=30),
|
||||
remindersStoragePartitions=7
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
type daprConfig struct {
|
||||
Entities []string `json:"entities,omitempty"`
|
||||
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
|
||||
ActorScanInterval string `json:"actorScanInterval,omitempty"`
|
||||
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
|
||||
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
|
||||
RemindersStoragePartitions int `json:"remindersStoragePartitions,omitempty"`
|
||||
}
|
||||
|
||||
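// actorIdleTimeout, actorScanInterval, drainOngoingCallTimeout, and
// drainRebalancedActors are assumed to be variables defined elsewhere in the
// app; the final value (7) sets remindersStoragePartitions.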
var daprConfigResponse = daprConfig{
|
||||
[]string{defaultActorType},
|
||||
actorIdleTimeout,
|
||||
actorScanInterval,
|
||||
drainOngoingCallTimeout,
|
||||
drainRebalancedActors,
|
||||
7,
|
||||
}
|
||||
|
||||
func configHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
json.NewEncoder(w).Encode(daprConfigResponse)
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
The following is an example of a valid configuration for reminder partitioning:
|
||||
|
||||
```json
|
||||
{
|
||||
"entities": [ "MyActorType", "AnotherActorType" ],
|
||||
"remindersStoragePartitions": 7
|
||||
}
|
||||
```
|
||||
|
||||
#### Handling configuration changes
|
||||
|
||||
To configure actor reminders partitioning, Dapr persists the actor type metadata in the actor's state store. This allows the configuration changes to be applied globally, not just in a single sidecar instance.
|
||||
|
||||
Also, **you can only increase the number of partitions**, not decrease. This allows Dapr to automatically redistribute the data on a rolling restart where one or more partition configurations might be active.
|
||||
|
||||
#### Demo
|
||||
|
||||
Watch [this video for a demo of actor reminder partitioning](https://youtu.be/ZwFOEUYe1WA?t=1493):
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/ZwFOEUYe1WA?start=1495" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
- Refer to the [Dapr SDK documentation and examples]({{< ref "developing-applications/sdks/#sdk-languages" >}}).
|
||||
- [Actors API reference]({{< ref actors_api.md >}})
|
||||
- [Actors overview]({{< ref actors-overview.md >}})
|
|
@ -270,7 +270,11 @@ const daprHost = "127.0.0.1";
|
|||
async function sendOrder(orderId) {
|
||||
const BINDING_NAME = "checkout";
|
||||
const BINDING_OPERATION = "create";
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
});
|
||||
//Using Dapr SDK to invoke output binding
|
||||
const result = await client.binding.send(BINDING_NAME, BINDING_OPERATION, orderId);
|
||||
console.log("Sending message: " + orderId);
|
||||
|
|
|
@ -237,7 +237,15 @@ start().catch((e) => {
|
|||
});
|
||||
|
||||
async function start() {
|
||||
const server = new DaprServer(serverHost, serverPort, daprHost, daprPort, CommunicationProtocolEnum.HTTP);
|
||||
const server = new DaprServer({
|
||||
serverHost,
|
||||
serverPort,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
clientOptions: {
|
||||
daprHost,
|
||||
daprPort,
|
||||
}
|
||||
});
|
||||
await server.binding.receive('checkout', async (orderId) => console.log(`Received Message: ${JSON.stringify(orderId)}`));
|
||||
await server.startServer();
|
||||
}
|
||||
|
|
|
@ -28,7 +28,7 @@ App health checks are disabled by default.
|
|||
|
||||
App health checks in Dapr are meant to be complementary to, and not replace, any platform-level health checks, like [liveness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) when running on Kubernetes.
|
||||
|
||||
Platform-level health checks (or liveness probes) generally ensure that the application is running, and cause the platform to restart the application (or, in case of Kubernetes, pod) in case of failures.
|
||||
Platform-level health checks (or liveness probes) generally ensure that the application is running, and cause the platform to restart the application in case of failures. For Kubernetes, a failing App Health check won't remove a pod from service discovery. This remains the responsibility of the Kubernetes liveness probe, _not_ Dapr.
|
||||
Unlike platform-level health checks, Dapr's app health checks focus on pausing work to an application that is currently unable to accept it, but is expected to be able to resume accepting work *eventually*. Goals include:
|
||||
|
||||
- Not bringing more load to an application that is already overloaded.
|
||||
|
|
|
@ -12,7 +12,7 @@ Now that you've learned what the Dapr pub/sub building block provides, learn how
|
|||
- An order processing service using Dapr to publish a message to RabbitMQ.
|
||||
|
||||
|
||||
<img src="/images/building-block-pub-sub-example.png" width=1000 alt="Diagram showing state management of example service">
|
||||
<img src="/images/pubsub-howto-overview.png" width=1000 alt="Diagram showing state management of example service">
|
||||
|
||||
Dapr automatically wraps the user payload in a CloudEvents v1.0 compliant envelope, using the `Content-Type` header value for the `datacontenttype` attribute. [Learn more about messages with CloudEvents.]({{< ref pubsub-cloudevents.md >}})
|
||||
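For example, a JSON payload published to a topic is delivered to subscribers wrapped in an envelope similar to the following (IDs and names are illustrative):

```json
{
  "specversion": "1.0",
  "type": "com.dapr.event.sent",
  "source": "orderprocessing",
  "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
  "datacontenttype": "application/json",
  "pubsubname": "order-pub-sub",
  "topic": "orders",
  "data": { "orderId": 100 }
}
```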
|
||||
|
@ -122,8 +122,16 @@ spec:
|
|||
type: pubsub.rabbitmq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: host
|
||||
- name: connectionString
|
||||
value: "amqp://localhost:5672"
|
||||
- name: protocol
|
||||
value: amqp
|
||||
- name: hostname
|
||||
value: localhost
|
||||
- name: username
|
||||
value: username
|
||||
- name: password
|
||||
value: password
|
||||
- name: durable
|
||||
value: "false"
|
||||
- name: deletedWhenUnused
|
||||
|
@ -347,13 +355,15 @@ start().catch((e) => {
|
|||
});
|
||||
|
||||
async function start(orderId) {
|
||||
const server = new DaprServer(
|
||||
serverHost,
|
||||
serverPort,
|
||||
daprHost,
|
||||
process.env.DAPR_HTTP_PORT,
|
||||
CommunicationProtocolEnum.HTTP
|
||||
);
|
||||
const server = new DaprServer({
|
||||
serverHost,
|
||||
serverPort,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
clientOptions: {
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
},
|
||||
});
|
||||
//Subscribe to a topic
|
||||
await server.pubsub.subscribe("order-pub-sub", "orders", async (orderId) => {
|
||||
console.log(`Subscriber received: ${JSON.stringify(orderId)}`)
|
||||
|
@ -617,7 +627,11 @@ var main = function() {
|
|||
async function start(orderId) {
|
||||
const PUBSUB_NAME = "order-pub-sub"
|
||||
const TOPIC_NAME = "orders"
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP
|
||||
});
|
||||
console.log("Published data:" + orderId)
|
||||
//Using Dapr SDK to publish a topic
|
||||
await client.pubsub.publish(PUBSUB_NAME, TOPIC_NAME, orderId);
|
||||
|
|
|
@ -313,10 +313,17 @@ A JSON-encoded payload body with the processing status against each entry needs
|
|||
|
||||
```json
|
||||
{
|
||||
"statuses": {
|
||||
"entryId": "<entryId>",
|
||||
"statuses":
|
||||
[
|
||||
{
|
||||
"entryId": "<entryId1>",
|
||||
"status": "<status>"
|
||||
}
|
||||
},
|
||||
{
|
||||
"entryId": "<entryId2>",
|
||||
"status": "<status>"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -334,7 +341,7 @@ Please refer [Expected HTTP Response for Bulk Subscribe]({{< ref pubsub_api.md >
|
|||
|
||||
Refer to the following code samples for how to use Bulk Subscribe:
|
||||
|
||||
{{< tabs Java Javascript "HTTP API (Bash)" "HTTP API (PowerShell)" >}}
|
||||
{{< tabs "Java" "JavaScript" ".NET" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
|
@ -387,13 +394,20 @@ import { DaprServer } from "@dapr/dapr";
|
|||
const pubSubName = "orderPubSub";
|
||||
const topic = "topicbulk";
|
||||
|
||||
const DAPR_HOST = process.env.DAPR_HOST || "127.0.0.1";
|
||||
const DAPR_HTTP_PORT = process.env.DAPR_HTTP_PORT || "3502";
|
||||
const SERVER_HOST = process.env.SERVER_HOST || "127.0.0.1";
|
||||
const SERVER_PORT = process.env.APP_PORT || 5001;
|
||||
const daprHost = process.env.DAPR_HOST || "127.0.0.1";
|
||||
const daprPort = process.env.DAPR_HTTP_PORT || "3502";
|
||||
const serverHost = process.env.SERVER_HOST || "127.0.0.1";
|
||||
const serverPort = process.env.APP_PORT || 5001;
|
||||
|
||||
async function start() {
|
||||
const server = new DaprServer(SERVER_HOST, SERVER_PORT, DAPR_HOST, DAPR_HTTP_PORT);
|
||||
const server = new DaprServer({
|
||||
serverHost,
|
||||
serverPort,
|
||||
clientOptions: {
|
||||
daprHost,
|
||||
daprPort,
|
||||
},
|
||||
});
|
||||
|
||||
// Subscribe to messages on a topic with the default bulk subscribe configuration.
|
||||
await server.pubsub.bulkSubscribeWithDefaultConfig(pubSubName, topic, (data) => console.log("Subscriber received: " + JSON.stringify(data)));
|
||||
|
@ -406,6 +420,56 @@ async function start() {
|
|||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
using Microsoft.AspNetCore.Mvc;
|
||||
using Dapr.AspNetCore;
|
||||
using Dapr;
|
||||
|
||||
namespace DemoApp.Controllers;
|
||||
|
||||
[ApiController]
|
||||
[Route("[controller]")]
|
||||
public class BulkMessageController : ControllerBase
|
||||
{
|
||||
private readonly ILogger<BulkMessageController> logger;
|
||||
|
||||
public BulkMessageController(ILogger<BulkMessageController> logger)
|
||||
{
|
||||
this.logger = logger;
|
||||
}
|
||||
|
||||
[BulkSubscribe("messages", 10, 10)]
|
||||
[Topic("pubsub", "messages")]
|
||||
public ActionResult<BulkSubscribeAppResponse> HandleBulkMessages([FromBody] BulkSubscribeMessage<BulkMessageModel<BulkMessageModel>> bulkMessages)
|
||||
{
|
||||
List<BulkSubscribeAppResponseEntry> responseEntries = new List<BulkSubscribeAppResponseEntry>();
|
||||
logger.LogInformation($"Received {bulkMessages.Entries.Count()} messages");
|
||||
foreach (var message in bulkMessages.Entries)
|
||||
{
|
||||
try
|
||||
{
|
||||
logger.LogInformation($"Received a message with data '{message.Event.Data.MessageData}'");
|
||||
responseEntries.Add(new BulkSubscribeAppResponseEntry(message.EntryId, BulkSubscribeAppResponseStatus.SUCCESS));
|
||||
}
|
||||
catch (Exception e)
|
||||
{
|
||||
logger.LogError(e.Message);
|
||||
responseEntries.Add(new BulkSubscribeAppResponseEntry(message.EntryId, BulkSubscribeAppResponseStatus.RETRY));
|
||||
}
|
||||
}
|
||||
return new BulkSubscribeAppResponse(responseEntries);
|
||||
}
|
||||
public class BulkMessageModel
|
||||
{
|
||||
public string MessageData { get; set; }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
## How components handle publishing and subscribing to bulk messages
|
||||
|
||||
|
|
|
@ -212,6 +212,15 @@ app.MapPost("/checkout", [Topic("pubsub", "orders")] (Order order) => {
|
|||
});
|
||||
```
|
||||
|
||||
Both of the handlers defined above also need to be mapped to configure the `dapr/subscribe` endpoint. This is done in the application startup code while defining endpoints.
|
||||
|
||||
```csharp
|
||||
app.UseEndpoints(endpoints =>
|
||||
{
|
||||
endpoints.MapSubscribeHandler();
|
||||
});
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
|
|
@ -218,7 +218,11 @@ import { DaprClient, HttpMethod, CommunicationProtocolEnum } from '@dapr/dapr';
|
|||
const daprHost = "127.0.0.1";
|
||||
|
||||
async function main() {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
});
|
||||
const SECRET_STORE_NAME = "localsecretstore";
|
||||
//Using Dapr SDK to get a secret
|
||||
var secret = await client.secret.get(SECRET_STORE_NAME, "secret");
|
||||
|
|
|
@ -347,7 +347,12 @@ var main = function() {
|
|||
}
|
||||
|
||||
async function start(orderId) {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost: daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP
|
||||
});
|
||||
|
||||
//Using Dapr SDK to invoke a method
|
||||
const result = await client.invoker.invoke('checkoutservice' , "checkout/" + orderId , HttpMethod.GET);
|
||||
console.log("Order requested: " + orderId);
|
||||
|
|
|
@ -266,7 +266,11 @@ var main = function() {
|
|||
}
|
||||
|
||||
async function start(orderId) {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
});
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
//Using Dapr SDK to save and get state
|
||||
await client.state.save(STATE_STORE_NAME, [
|
||||
|
@ -483,7 +487,12 @@ const daprHost = "127.0.0.1";
|
|||
var main = function() {
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
//Using Dapr SDK to save and get state
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
});
|
||||
|
||||
await client.state.delete(STATE_STORE_NAME, "order_1");
|
||||
}
|
||||
|
||||
|
@ -630,7 +639,12 @@ var main = function() {
|
|||
const STATE_STORE_NAME = "statestore";
|
||||
var orderId = 100;
|
||||
//Using Dapr SDK to save and retrieve multiple states
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
});
|
||||
|
||||
await client.state.save(STATE_STORE_NAME, [
|
||||
{
|
||||
key: "order_1",
|
||||
|
@ -870,7 +884,12 @@ var main = function() {
|
|||
}
|
||||
|
||||
async function start(orderId) {
|
||||
const client = new DaprClient(daprHost, process.env.DAPR_HTTP_PORT, CommunicationProtocolEnum.HTTP);
|
||||
const client = new DaprClient({
|
||||
daprHost,
|
||||
daprPort: process.env.DAPR_HTTP_PORT,
|
||||
communicationProtocol: CommunicationProtocolEnum.HTTP,
|
||||
});
|
||||
|
||||
const STATE_STORE_NAME = "statestore";
|
||||
//Using Dapr SDK to save and retrieve multiple states
|
||||
await client.state.transaction(STATE_STORE_NAME, [
|
||||
|
@ -926,7 +945,7 @@ curl -X POST -H "Content-Type: application/json" -d '{"keys":["order_1", "order_
|
|||
With the same Dapr instance running from above, save two key/value pairs into your statestore:
|
||||
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"operations": [{"operation":"upsert", "request": {"key": "order_1", "value": "250"}}, {"operation":"delete", "request": {"key": "order_2"}}]}' -Uri 'http://localhost:3601/v1.0/state/statestore'
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '{"operations": [{"operation":"upsert", "request": {"key": "order_1", "value": "250"}}, {"operation":"delete", "request": {"key": "order_2"}}]}' -Uri 'http://localhost:3601/v1.0/state/statestore/transaction'
|
||||
```
|
||||
|
||||
Now see the results of your state transactions:
|
||||
|
|
|
@ -28,10 +28,12 @@ Refer to the TTL column in the [state store components guide]({{< ref supported-
|
|||
|
||||
You can set state TTL in the metadata as part of the state store set request:
|
||||
|
||||
{{< tabs Python "HTTP API (Bash)" "HTTP API (PowerShell)">}}
|
||||
{{< tabs ".NET" Python Go "HTTP API (Bash)" "HTTP API (PowerShell)">}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
<!--python-->
|
||||
|
||||
```python
|
||||
#dependencies
|
||||
|
||||
|
@ -58,6 +60,59 @@ dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-g
|
|||
|
||||
{{% codetab %}}
|
||||
|
||||
<!--dotnet-->
|
||||
|
||||
```csharp
|
||||
// dependencies
|
||||
|
||||
using Dapr.Client;
|
||||
|
||||
// code

// Assumes storeName, stateKeyName, and state are defined by your application.
var client = new DaprClientBuilder().Build();
|
||||
|
||||
await client.SaveStateAsync(storeName, stateKeyName, state, metadata: new Dictionary<string, string>() {
|
||||
{
|
||||
"metadata.ttlInSeconds", "120"
|
||||
}
|
||||
});
|
||||
```
|
||||
|
||||
To launch a Dapr sidecar and run the above example application, you'd then run a command similar to the following:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
<!--go-->
|
||||
|
||||
```go
|
||||
// dependencies
|
||||
|
||||
import (
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
)
|
||||
|
||||
// code

// Assumed setup: create a client; the store name is illustrative.
ctx := context.Background()
store := "statestore"
client, err := dapr.NewClient()
if err != nil {
	panic(err)
}
|
||||
|
||||
md := map[string]string{"ttlInSeconds": "120"}
|
||||
if err := client.SaveState(ctx, store, "key1", []byte("hello world"), md); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
```
|
||||
|
||||
To launch a Dapr sidecar and run the above example application, you'd then run a command similar to the following:
|
||||
|
||||
```bash
|
||||
dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 go run .
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "order_1", "value": "250", "metadata": { "ttlInSeconds": "120" } }]' http://localhost:3601/v1.0/state/statestore
|
||||
```
|
||||
|
|
|
@ -25,7 +25,6 @@ Dapr Workflow logic is implemented using general purpose programming languages,
|
|||
- Write unit tests for your workflows, just like any other part of your application logic.
|
||||
|
||||
The Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar simply drives the execution of the workflows, leaving all the workflow activities to be part of the application.
|
||||
The Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar simply drives the execution of the workflows, leaving all the workflow activities to be part of the application.
|
||||
|
||||
## Write the workflow activities
|
||||
|
||||
|
|
|
@ -45,9 +45,11 @@ Manage your workflow using HTTP calls. The example below plugs in the properties
|
|||
To start your workflow with an ID `12345678`, run:
|
||||
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678/start
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
|
||||
```
|
||||
|
||||
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
|
||||
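For example, with curl (the optional request body becomes the workflow input; the values shown are illustrative):

```bash
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678" \
  -H "Content-Type: application/json" \
  -d '{"Name": "Paperclips", "Quantity": 1}'
```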
|
||||
### Terminate workflow
|
||||
|
||||
To terminate your workflow with an ID `12345678`, run:
|
||||
|
@ -61,7 +63,7 @@ POST http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate
|
|||
To fetch workflow information (outputs and inputs) with an ID `12345678`, run:
|
||||
|
||||
```bash
|
||||
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678
|
||||
GET http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678
|
||||
```
|
||||
|
||||
Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).
|
||||
|
|
|
@ -52,8 +52,10 @@ Each workflow instance managed by the engine is represented as one or more spans
|
|||
|
||||
There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine:
|
||||
|
||||
- `dapr.internal.wfengine.workflow`
|
||||
- `dapr.internal.wfengine.activity`
|
||||
- `dapr.internal.{namespace}.{appID}.workflow`
|
||||
- `dapr.internal.{namespace}.{appID}.activity`
|
||||
|
||||
The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured. The `{appID}` value is the app's ID. For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`.
|
||||
|
||||
The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario:
|
||||
|
||||
|
@ -61,11 +63,13 @@ The following diagram demonstrates how internal workflow actors operate in a Kub
|
|||
|
||||
Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.
|
||||
|
||||
There are two types of actors registered by the Dapr sidecar for workflow: the _workflow_ actor and the _activity_ actor. The next sections will go into more details on each.
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
The internal workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK. If an app never registers a workflow, then the internal workflow actors are never registered.
|
||||
{{% /alert %}}
|
||||
|
||||
### Workflow actors
|
||||
|
||||
A new instance of the `dapr.internal.wfengine.workflow` actor is activated for every workflow instance that gets created. The ID of the _workflow_ actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
|
||||
Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
|
||||
|
||||
Each workflow actor saves its state using the following keys in the configured state store:
|
||||
|
||||
|
@ -94,17 +98,13 @@ To summarize:
|
|||
|
||||
### Activity actors
|
||||
|
||||
A new instance of the `dapr.internal.wfengine.activity` actor is activated for every activity task that gets scheduled by a workflow. The ID of the _activity_ actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371` and is the third activity to be scheduled by the workflow, it's ID will be `876bf371#2` where `2` is the sequence number.
|
||||
Activity actors are responsible for managing the state and placement of all workflow activity invocations. A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow. The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371`, the third activity it schedules will have an ID of `876bf371::2`, where `2` is the sequence number.
|
||||
|
||||
Each activity actor stores a single key into the state store:
|
||||
|
||||
| Key | Description |
|
||||
| --- | ----------- |
|
||||
| `activityreq-N` | The key contains the activity invocation payload, which includes the serialized activity input data. The `N` value is a 64-bit unsigned integer that represents the _generation_ of the workflow, a concept which is outside the scope of this documentation. |
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), activity actor state will remain in the state store even after the activity task has completed. Scheduling a large number of workflow activities could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of completed activity state.
|
||||
{{% /alert %}}
|
||||
| `activityState` | The key contains the activity invocation payload, which includes the serialized activity input data. This key is deleted automatically after the activity invocation has completed. |
|
||||
|
||||
The following diagram illustrates the typical lifecycle of an activity actor.
|
||||
|
||||
|
@ -133,7 +133,7 @@ Dapr Workflows use actors internally to drive the execution of workflows. Like a
|
|||
|
||||
As discussed in the [workflow actors]({{< ref "workflow-architecture.md#workflow-actors" >}}) section, workflows save their state incrementally by appending to a history log. The history log for a workflow is distributed across multiple state store keys so that each "checkpoint" only needs to append the newest entries.
|
||||
|
||||
The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state. [Sequential workflows]({{< ref "workflow-overview.md#task-chaining" >}}) will therefore make smaller batch updates to the state store, while [fan-out/fan-in workflows]({{< ref "workflow-overview.md#fan-outfan-in" >}}) will require larger batches. The size of the batch is also impacted by the size of inputs and outputs when workflows [invoke activities]({<< ref "workflow-features-concepts.md#workflow-activities" >>}) or [child workflows]({{< ref "workflow-features-concepts.md#child-workflows" >}}).
|
||||
The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state. [Sequential workflows]({{< ref "workflow-overview.md#task-chaining" >}}) will therefore make smaller batch updates to the state store, while [fan-out/fan-in workflows]({{< ref "workflow-overview.md#fan-outfan-in" >}}) will require larger batches. The size of the batch is also impacted by the size of inputs and outputs when workflows [invoke activities]({{< ref "workflow-features-concepts.md#workflow-activities" >}}) or [child workflows]({{< ref "workflow-features-concepts.md#child-workflows" >}}).
|
||||
|
||||
<img src="/images/workflow-overview/workflow-state-store-interactions.png" width=600 alt="Diagram of workflow actor state store interactions"/>
|
||||
|
||||
|
|
|
@ -8,6 +8,10 @@ description: "Learn more about the Dapr Workflow features and concepts"
|
|||
|
||||
Now that you've learned about the [workflow building block]({{< ref workflow-overview.md >}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
For more information on how workflow state is managed, see the [workflow architecture guide]({{< ref workflow-architecture.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Workflows
|
||||
|
||||
Dapr Workflows are functions you write that define a series of steps or tasks to be executed in a particular order. The Dapr Workflow engine takes care of coordinating and managing the execution of the steps, including managing failures and retries. If the app hosting your workflows is scaled out across multiple machines, the workflow engine may also load balance the execution of workflows and their tasks across multiple machines.
|
||||
|
@ -26,55 +30,48 @@ Only one workflow instance with a given ID can exist at any given time. However,
|
|||
|
||||
### Workflow replay
|
||||
|
||||
Dapr Workflows maintain their execution state by using a technique known as [event sourcing](https://learn.microsoft.com/azure/architecture/patterns/event-sourcing). Instead of directly storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow authoring SDK, the storing of these history events happens automatically whenever the workflow "awaits" for the result of a scheduled task.
|
||||
Dapr Workflows maintain their execution state by using a technique known as [event sourcing](https://learn.microsoft.com/azure/architecture/patterns/event-sourcing). Instead of storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow SDK, these history events are stored automatically whenever the workflow "awaits" for the result of a scheduled task.
|
||||
|
||||
When a workflow "awaits" a scheduled task, it unloads itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second workflow function execution is known as a _replay_.
|
||||
|
||||
When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that already completed, instead of scheduling that task again, the workflow engine:
|
||||
|
||||
1. Returns the result of the completed task to the workflow.
|
||||
1. Continues execution until the next "await" point.
|
||||
|
||||
This "replay" behavior continues until the workflow function completes or fails with an error.
|
||||
|
||||
Using this replay technique, a workflow is able to resume execution from any "await" point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state makes Dapr Workflows _durable_ and _fault tolerant_.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
For more information on how workflow state is managed, see the [workflow architecture guide]({{< ref workflow-architecture.md >}}).
|
||||
The workflow replay behavior described here requires that workflow function code be _deterministic_. Deterministic workflow functions take the exact same actions when provided the exact same inputs. [Learn more about the limitations around deterministic workflow code.]({{< ref "workflow-features-concepts.md#workflow-determinism-and-code-constraints" >}})
|
||||
{{% /alert %}}
|
||||
|
||||
When a workflow "awaits" a scheduled task, it may unload itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second execution of the workflow function is known as a _replay_. When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that it already scheduled, instead of scheduling that task again, the workflow engine returns the result of the scheduled task to the workflow and continues execution until the next "await" point. This "replay" behavior continues until the workflow function completes or fails with an error.
|
||||
|
||||
Using this replay technique, a workflow is able to resume execution from any "await" point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state is what makes Dapr Workflows _durable_ and fault tolerant.
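To make replay concrete, here is a minimal sketch of a workflow with two "await" points, assuming the .NET `Dapr.Workflow` SDK (the activity names and the `Order` type are illustrative):

```csharp
using System.Threading.Tasks;
using Dapr.Workflow; // assumes the Dapr.Workflow package

public record Order(string OrderId, double Amount); // illustrative input type

public class ProcessOrderWorkflow : Workflow<Order, bool>
{
    public override async Task<bool> RunAsync(WorkflowContext context, Order order)
    {
        // First "await" point: the engine checkpoints the scheduled task and
        // may unload the workflow from memory until the activity completes.
        bool reserved = await context.CallActivityAsync<bool>("ReserveInventory", order);
        if (!reserved)
        {
            return false;
        }

        // On replay, this function re-runs from the top, but the completed
        // "ReserveInventory" result is returned from the history log instead
        // of the activity running a second time.
        await context.CallActivityAsync("ChargePayment", order);
        return true;
    }
}
```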
|
||||
|
||||
### Workflow determinism and code constraints
|
||||
|
||||
The workflow replay behavior described previously requires that workflow function code be _deterministic_. A deterministic workflow function is one that takes the exact same actions when provided the exact same inputs.
|
||||
|
||||
Follow these rules to ensure that your workflow code is deterministic.
|
||||
|
||||
1. **Workflow functions must not call non-deterministic APIs.**
|
||||
For example, APIs that generate random numbers, random UUIDs, or the current date are non-deterministic. To work around this limitation, use these APIs in activity functions or (preferred) use built-in equivalent APIs offered by the authoring SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner.
|
||||
|
||||
1. **Workflow functions must not interact _directly_ with external state.**
|
||||
External data includes any data that isn't stored in the workflow state. For example, workflows must not interact with global variables, environment variables, the file system, or make network calls. Instead, workflows should interact with external state _indirectly_ using workflow inputs, activity tasks, and through external event handling.
|
||||
|
||||
1. **Workflow functions must execute only on the workflow dispatch thread.**
|
||||
The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never schedule background threads or use APIs that schedule a callback function to run on another thread. Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.
|
||||
|
||||
While it's critically important to follow these determinism code constraints, you'll quickly become familiar with them and learn how to work with them effectively when writing workflow code.
|
||||
|
||||
### Infinite loops and eternal workflows
|
||||
|
||||
As discussed in the [workflow replay]({{< ref "#workflow-replay" >}}) section, workflows maintain a write-only event-sourced history log of all its operations. To avoid runaway resource usage, workflows should limit the number of operations they schedule. For example, a workflow should never use infinite loops in its implementation, nor should it schedule millions of tasks.
|
||||
As discussed in the [workflow replay]({{< ref "#workflow-replay" >}}) section, workflows maintain an append-only event-sourced history log of all their operations. To avoid runaway resource usage, workflows must limit the number of operations they schedule. For example, ensure your workflow doesn't:
|
||||
|
||||
There are two techniques that can be used to write workflows that need to potentially schedule extreme numbers of tasks:
|
||||
- Use infinite loops in its implementation
|
||||
- Schedule thousands of tasks
|
||||
|
||||
You can use the following two techniques to write workflows that may need to schedule extreme numbers of tasks:
|
||||
|
||||
1. **Use the _continue-as-new_ API**:
|
||||
Each workflow authoring SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows" or workflows that have no logical end state, like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small.
|
||||
Each workflow SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is ideal for implementing "eternal workflows", like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small (see the sketches after this list).
|
||||
|
||||
1. **Use child workflows**:
|
||||
Each workflow authoring SDK also exposes an API for creating child workflows. A child workflow is just like any other workflow except that it's scheduled by a parent workflow. Child workflows have their own history and also have the benefit of allowing you to distribute workflow function execution across multiple machines. If a workflow needs to schedule thousands of tasks or more, it's recommended that those tasks be distributed across child workflows so that no single workflow's history size grows too large.
|
||||
Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it's scheduled by a parent workflow. Child workflows have:
|
||||
- Their own history
|
||||
- The benefit of distributing workflow function execution across multiple machines.
|
||||
|
||||
If a workflow needs to schedule thousands of tasks or more, it's recommended that those tasks be distributed across child workflows so that no single workflow's history size grows too large.
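To illustrate the first technique, here is a minimal sketch of an eternal monitoring workflow that restarts itself with _continue-as-new_, assuming the .NET `Dapr.Workflow` SDK (the `CheckStatus` activity and the polling interval are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Dapr.Workflow; // assumes the Dapr.Workflow package

public class MonitorWorkflow : Workflow<string, object?>
{
    public override async Task<object?> RunAsync(WorkflowContext context, string resourceId)
    {
        // Do one unit of monitoring work.
        await context.CallActivityAsync("CheckStatus", resourceId);

        // Sleep deterministically before the next check.
        await context.CreateTimer(TimeSpan.FromMinutes(5));

        // Restart with a fresh input and history instead of looping forever,
        // keeping the event-sourced log from growing without bound.
        context.ContinueAsNew(resourceId);
        return null;
    }
}
```

And a minimal sketch of the second technique, fanning work out to child workflows so that no single history grows too large (again assuming the .NET `Dapr.Workflow` SDK; the `BatchWorkflow` name and batch size are illustrative):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Dapr.Workflow; // assumes the Dapr.Workflow package

public class ParentWorkflow : Workflow<int[], int>
{
    public override async Task<int> RunAsync(WorkflowContext context, int[] workItems)
    {
        // Fan out: each batch of 100 items becomes a child workflow with its
        // own history.
        var childTasks = workItems
            .Chunk(100)
            .Select(batch => context.CallChildWorkflowAsync<int>("BatchWorkflow", batch))
            .ToList();

        // The parent's history records only one event per child, no matter
        // how many tasks each child schedules internally.
        int[] results = await Task.WhenAll(childTasks);
        return results.Sum();
    }
}
```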
|
||||
|
||||
### Updating workflow code
|
||||
|
||||
Because workflows are long-running and durable, updating workflow code must be done with extreme care. As discussed in the [Workflow determinism]({{< ref "#workflow-determinism-and-code-constraints" >}}) section, workflow code must be deterministic so that the workflow engine can rebuild its state to exactly match its previous checkpoint. Updates to workflow code must preserve this determinism if there are any non-completed workflow instances in the system. Otherwise, updates to workflow code can result in runtime failures the next time those workflows execute.
|
||||
Because workflows are long-running and durable, updating workflow code must be done with extreme care. As discussed in the [workflow determinism]({{< ref "#workflow-determinism-and-code-constraints" >}}) limitation section, workflow code must be deterministic. Updates to workflow code must preserve this determinism if there are any non-completed workflow instances in the system. Otherwise, updates to workflow code can result in runtime failures the next time those workflows execute.
|
||||
|
||||
We'll mention a couple of examples of code updates that can break workflow determinism:
|
||||
|
||||
* **Changing workflow function signatures**: Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
|
||||
* **Changing the number or order of workflow tasks**: Changing the number or order of workflow tasks causes a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.
|
||||
|
||||
To work around these constraints, instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates. Upstream code that creates workflows should also be updated to only create instances of the new workflows. Leaving the old code around ensures that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, then the old workflow code can be safely deleted.
|
||||
[See known limitations]({{< ref "workflow-features-concepts.md#workflow-determinism-and-code-constraints" >}})
|
||||
|
||||
## Workflow activities
|
||||
|
||||
|
@ -120,6 +117,113 @@ The ability to raise external events to workflows is not included in the alpha v
|
|||
|
||||
Workflows can also wait for multiple external event signals of the same name, in which case they are dispatched to the corresponding workflow tasks in a first-in, first-out (FIFO) manner. If a workflow receives an external event signal but has not yet created a "wait for external event" task, the event will be saved into the workflow's history and consumed immediately after the workflow requests the event.
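For example, a workflow can pause until an approval signal arrives. A minimal sketch, assuming the .NET `Dapr.Workflow` SDK inside a workflow's `RunAsync` method (the event name and payload type are illustrative):

```csharp
// Pause until an external event named "ApprovalReceived" is raised for this
// instance. If the event was raised before this task was created, it's
// consumed from the saved history instead.
bool approved = await context.WaitForExternalEventAsync<bool>("ApprovalReceived");

// Waiting again on the same event name consumes buffered events in FIFO order.
bool secondApproval = await context.WaitForExternalEventAsync<bool>("ApprovalReceived");
```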
|
||||
|
||||
## Limitations
|
||||
|
||||
### Workflow determinism and code constraints
|
||||
|
||||
To take advantage of the workflow replay technique, your workflow code needs to be deterministic, which may require working around some limitations.
|
||||
|
||||
#### Workflow functions must call deterministic APIs.
|
||||
APIs that generate random numbers, random UUIDs, or the current date are _non-deterministic_. To work around this limitation, you can:
|
||||
- Use these APIs in activity functions, or
|
||||
- (Preferred) Use built-in equivalent APIs offered by the SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner.
|
||||
|
||||
For example, instead of this:
|
||||
|
||||
{{< tabs ".NET" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
// DON'T DO THIS!
|
||||
DateTime currentTime = DateTime.UtcNow;
|
||||
Guid newIdentifier = Guid.NewGuid();
|
||||
string randomString = GetRandomString();
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
Do this:
|
||||
|
||||
{{< tabs ".NET" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
// Do this!!
|
||||
DateTime currentTime = context.CurrentUtcDateTime;
|
||||
Guid newIdentifier = context.NewGuid();
|
||||
string randomString = await context.CallActivityAsync<string>("GetRandomString");
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
#### Workflow functions must only interact _indirectly_ with external state.
|
||||
External data includes any data that isn't stored in the workflow state. Workflows must not interact with global variables, environment variables, or the file system, nor make network calls.
|
||||
|
||||
Instead, workflows should interact with external state _indirectly_ using workflow inputs, activity tasks, and through external event handling.
|
||||
|
||||
For example, instead of this:
|
||||
|
||||
```csharp
|
||||
// DON'T DO THIS!
|
||||
string configuration = Environment.GetEnvironmentVariable("MY_CONFIGURATION")!;
|
||||
string data = await new HttpClient().GetStringAsync("https://example.com/api/data");
|
||||
```
|
||||
|
||||
Do this:
|
||||
|
||||
```csharp
|
||||
// Do this!!
|
||||
string configuration = workflowInput.Configuration; // imaginary workflow input argument
|
||||
string data = await context.CallActivityAsync<string>("MakeHttpCall", "https://example.com/api/data");
|
||||
```
|
||||
|
||||
#### Workflow functions must execute only on the workflow dispatch thread.
|
||||
The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never:
|
||||
- Schedule background threads, or
|
||||
- Use APIs that schedule a callback function to run on another thread.
|
||||
|
||||
Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.
|
||||
|
||||
For example, instead of this:
|
||||
|
||||
```csharp
|
||||
// DON'T DO THIS!
|
||||
Task t = Task.Run(() => context.CallActivityAsync("DoSomething"));
|
||||
await context.CreateTimer(5000).ConfigureAwait(false);
|
||||
```
|
||||
|
||||
Do this:
|
||||
|
||||
```csharp
|
||||
// Do this!!
|
||||
Task t = context.CallActivityAsync("DoSomething");
|
||||
await context.CreateTimer(5000).ConfigureAwait(true);
|
||||
```
|
||||
|
||||
### Updating workflow code
|
||||
|
||||
Make sure updates you make to the workflow code maintain its determinism. A couple of examples of code updates that can break workflow determinism:
|
||||
|
||||
- **Changing workflow function signatures**:
|
||||
Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
|
||||
|
||||
- **Changing the number or order of workflow tasks**:
|
||||
Changing the number or order of workflow tasks causes a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.
|
||||
|
||||
To work around these constraints:
|
||||
|
||||
- Instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates (see the sketch after this list).
|
||||
- Upstream code that creates workflows should only be updated to create instances of the new workflows.
|
||||
- Leave the old code around to ensure that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, then the old workflow code can be safely deleted.
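A minimal sketch of this versioning approach, assuming the .NET `Dapr.Workflow` SDK and a registered `DaprWorkflowClient` named `workflowClient` (the workflow names are illustrative):

```csharp
// The old definition stays in the codebase, untouched, so in-flight instances
// can keep replaying against the code that produced their history:
//   public class OrderWorkflowV1 : Workflow<Order, OrderResult> { /* ... */ }

// Updated logic lives in a new definition with a new name:
//   public class OrderWorkflowV2 : Workflow<Order, OrderResult> { /* ... */ }

// Upstream code now schedules only the new workflow.
await workflowClient.ScheduleNewWorkflowAsync(
    name: nameof(OrderWorkflowV2),
    input: order);
```

Once all `OrderWorkflowV1` instances have completed, its definition can be deleted.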
|
||||
|
||||
|
||||
## Next steps
|
||||
|
||||
{{< button text="Workflow patterns >>" page="workflow-patterns.md" >}}
|
||||
|
|
|
@ -68,7 +68,7 @@ You can call other workflow runtimes (for example, Temporal and Netflix Conducto
|
|||
|
||||
Dapr Workflow simplifies complex, stateful coordination requirements in microservice architectures. The following sections describe several application patterns that can benefit from Dapr Workflow.
|
||||
|
||||
Learn more about [different types of workflow patterns](todo)
|
||||
Learn more about [different types of workflow patterns]({{< ref workflow-patterns.md >}})
|
||||
|
||||
## Workflow SDKs
|
||||
|
||||
|
|
|
@ -78,7 +78,7 @@ In the fan-out/fan-in design pattern, you execute multiple tasks simultaneously
|
|||
|
||||
<img src="/images/workflow-overview/workflows-fanin-fanout.png" width=800 alt="Diagram showing how the fan-out/fan-in workflow pattern works">
|
||||
|
||||
In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-overview.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:
|
||||
In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-patterns.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:
|
||||
|
||||
- How do you control the degree of parallelism?
|
||||
- How do you know when to trigger subsequent aggregation steps?
|
||||
|
|
|
@ -14,4 +14,5 @@ The Dapr SDKs are the easiest way for you to create pluggable components. Choose
|
|||
|
||||
| Language | Status |
|
||||
|----------|:------:|
|
||||
| [Go]({{< ref pluggable-components-go >}}) | In development |
|
||||
| [.NET]({{< ref pluggable-components-dotnet >}}) | In development |
|
||||
|
|
|
@ -44,7 +44,7 @@ When running Dapr (or the Dapr runtime directly) in stand-alone mode, you have t
|
|||
FOO=bar daprd --app-id myapp
|
||||
```
|
||||
|
||||
If you have [configured named AWS profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) locally , you can tell Dapr (or the Dapr runtime) which profile to use by specifying the "AWS_PROFILE" environment variable:
|
||||
If you have [configured named AWS profiles](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) locally, you can tell Dapr (or the Dapr runtime) which profile to use by specifying the "AWS_PROFILE" environment variable:
|
||||
|
||||
```bash
|
||||
AWS_PROFILE=myprofile dapr run...
|
||||
|
|
|
@ -1,35 +1,105 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Developing Dapr applications with remote dev containers"
|
||||
linkTitle: "Remote dev containers"
|
||||
title: "Developing Dapr applications with Dev Containers"
|
||||
linkTitle: "Dev Containers"
|
||||
weight: 30000
|
||||
description: "How to setup a remote dev container environment with Dapr"
|
||||
description: "How to setup a containerized development environment with Dapr"
|
||||
---
|
||||
|
||||
The Visual Studio Code [Remote Containers extension](https://code.visualstudio.com/docs/remote/containers) lets you use a Docker container as a full-featured development environment without installing any additional frameworks or packages to your local filesystem.
|
||||
The Visual Studio Code [Dev Containers extension](https://code.visualstudio.com/docs/remote/containers) lets you use a self-contained Docker container as a complete development environment, without installing any additional packages, libraries, or utilities in your local filesystem.
|
||||
|
||||
Dapr has pre-built Docker remote containers for NodeJS and C#. You can pick the one of your choice for a ready made environment. Note these pre-built containers automatically update to the latest Dapr release.
|
||||
Dapr has pre-built Dev Containers for C# and JavaScript/TypeScript; pick the one that matches your application for a ready-made environment. Note that these pre-built containers automatically update to the latest Dapr release.
|
||||
|
||||
### Setup a remote dev container
|
||||
We also publish a Dev Container feature that installs the Dapr CLI inside any Dev Container.
|
||||
|
||||
#### Prerequisites
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
## Set up the development environment
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- [Docker Desktop](https://docs.docker.com/desktop/)
|
||||
- [Visual Studio Code](https://code.visualstudio.com/)
|
||||
- [VSCode Remote Development extension pack](https://aka.ms/vscode-remote/download/extension)
|
||||
- [VS Code Remote Development extension pack](https://aka.ms/vscode-remote/download/extension)
|
||||
|
||||
### Add the Dapr CLI using a Dev Container feature
|
||||
|
||||
You can install the Dapr CLI inside any Dev Container using [Dev Container features](https://containers.dev/features).
|
||||
|
||||
To do that, edit your `devcontainer.json` and add two objects in the `"features"` section:
|
||||
|
||||
```json
|
||||
"features": {
|
||||
// Install the Dapr CLI
|
||||
"ghcr.io/dapr/cli/dapr-cli:0": {},
|
||||
// Enable Docker (via Docker-in-Docker)
|
||||
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
|
||||
// Alternatively, use Docker-outside-of-Docker (uses Docker in the host)
|
||||
//"ghcr.io/devcontainers/features/docker-outside-of-docker:1": {},
|
||||
}
|
||||
```
|
||||
|
||||
After saving the JSON file and (re-)building the container that hosts your development environment, you will have the Dapr CLI (and Docker) available, and can install Dapr by running this command in the container:
|
||||
|
||||
```sh
|
||||
dapr init
|
||||
```
|
||||
|
||||
#### Example: create a Java Dev Container for Dapr
|
||||
|
||||
This is an example of a Dev Container for developing Java apps that use Dapr, based on the [official Java 17 Dev Container image](https://github.com/devcontainers/images/tree/main/src/java).
|
||||
|
||||
Place this in a file called `.devcontainer/devcontainer.json` in your project:
|
||||
|
||||
```json
|
||||
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
|
||||
// README at: https://github.com/devcontainers/templates/tree/main/src/java
|
||||
{
|
||||
"name": "Java",
|
||||
// Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
|
||||
"image": "mcr.microsoft.com/devcontainers/java:0-17",
|
||||
|
||||
"features": {
|
||||
"ghcr.io/devcontainers/features/java:1": {
|
||||
"version": "none",
|
||||
"installMaven": "false",
|
||||
"installGradle": "false"
|
||||
},
|
||||
// Install the Dapr CLI
|
||||
"ghcr.io/dapr/cli/dapr-cli:0": {},
|
||||
// Enable Docker (via Docker-in-Docker)
|
||||
"ghcr.io/devcontainers/features/docker-in-docker:2": {},
|
||||
// Alternatively, use Docker-outside-of-Docker (uses Docker in the host)
|
||||
//"ghcr.io/devcontainers/features/docker-outside-of-docker:1": {},
|
||||
}
|
||||
|
||||
// Use 'forwardPorts' to make a list of ports inside the container available locally.
|
||||
// "forwardPorts": [],
|
||||
|
||||
// Use 'postCreateCommand' to run commands after the container is created.
|
||||
// "postCreateCommand": "java -version",
|
||||
|
||||
// Configure tool-specific properties.
|
||||
// "customizations": {},
|
||||
|
||||
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
|
||||
// "remoteUser": "root"
|
||||
}
|
||||
```
|
||||
|
||||
Then, using the VS Code command palette (`CTRL + SHIFT + P` or `CMD + SHIFT + P` on Mac), select `Dev Containers: Rebuild and Reopen in Container`.
|
||||
|
||||
### Use a pre-built Dev Container (C# and JavaScript/TypeScript)
|
||||
|
||||
#### Create remote Dapr container
|
||||
1. Open your application workspace in VS Code
|
||||
2. In the command command palette (`CTRL+SHIFT+P`) type and select `Remote-Containers: Add Development Container Configuration Files...`
|
||||
2. In the command palette (`CTRL + SHIFT + P` or `CMD + SHIFT + P` on Mac) type and select `Dev Containers: Add Development Container Configuration Files...`
|
||||
<br /><img src="/images/vscode-remotecontainers-addcontainer.png" alt="Screenshot of adding a remote container" width="700">
|
||||
3. Type `dapr` to filter the list to available Dapr Dev Containers and choose the language container that matches your application. Note that you may need to select `Show All Definitions...`
|
||||
<br /><img src="/images/vscode-remotecontainers-daprcontainers.png" alt="Screenshot of adding a Dapr container" width="700">
|
||||
4. Follow the prompts to rebuild your application in container.
|
||||
4. Follow the prompts to reopen your workspace in the container.
|
||||
<br /><img src="/images/vscode-remotecontainers-reopen.png" alt="Screenshot of reopening an application in the dev container" width="700">
|
||||
|
||||
#### Example
|
||||
Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr VS Code Remote Containers with your application.
|
||||
|
||||
Watch this [video](https://www.youtube.com/watch?v=D2dO4aGpHcg&t=120) on how to use the Dapr Dev Containers with your application.
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/D2dO4aGpHcg?start=120" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
|
|
@ -72,19 +72,19 @@ version: 1
|
|||
common: # optional section for variables shared across apps
|
||||
resourcesPath: ./app/components # any dapr resources to be shared across apps
|
||||
env: # any environment variable shared across apps
|
||||
- DEBUG: true
|
||||
DEBUG: true
|
||||
apps:
|
||||
- appID: webapp # optional
|
||||
appDirPath: .dapr/webapp/ # REQUIRED
|
||||
resourcesPath: .dapr/resources # (optional) can be default by convention
|
||||
configFilePath: .dapr/config.yaml # (optional) can be default by convention too; ignored if the file is not found.
|
||||
appProtocol: HTTP
|
||||
appProtocol: http
|
||||
appPort: 8080
|
||||
appHealthCheckPath: "/healthz"
|
||||
command: ["python3" "app.py"]
|
||||
- appID: backend # optional
|
||||
appDirPath: .dapr/backend/ # REQUIRED
|
||||
appProtocol: GRPC
|
||||
appProtocol: grpc
|
||||
appPort: 3000
|
||||
unixDomainSocket: "/tmp/test-socket"
|
||||
env:
|
||||
|
@ -112,7 +112,7 @@ The properties for the Multi-App Run template align with the `dapr run` CLI flag
|
|||
| `appID` | N | Application's app ID. If not provided, will be derived from `appDirPath` | `webapp`, `backend` |
|
||||
| `resourcesPath` | N | Path to your Dapr resources. Can be default by convention; ignored if the directory isn't found | `./app/components`, `./webapp/components` |
|
||||
| `configFilePath` | N | Path to your application's configuration file | `./webapp/config.yaml` |
|
||||
| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `HTTP`, `GRPC` |
|
||||
| `appProtocol` | N | The protocol Dapr uses to talk to the application. | `http`, `grpc` |
|
||||
| `appPort` | N | The port your application is listening on | `8080`, `3000` |
|
||||
| `daprHTTPPort` | N | Dapr HTTP port | |
|
||||
| `daprGRPCPort` | N | Dapr GRPC port | |
|
||||
|
|
|
@ -202,7 +202,7 @@ Each release of Dapr CLI includes various OSes and architectures. You can manual
|
|||
Verify the CLI is installed by restarting your terminal/command prompt and running the following:
|
||||
|
||||
```bash
|
||||
dapr
|
||||
dapr -h
|
||||
```
|
||||
|
||||
**Output:**
|
||||
|
|
|
@ -90,7 +90,7 @@ dapr run --app-id batch-sdk --app-port 50051 --resources-path ../../../component
|
|||
|
||||
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
|
||||
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). When the binding triggers, the Dapr sidecar calls a matching route in your application via HTTP POST.
|
||||
|
||||
```python
|
||||
# Triggered by Dapr input binding
|
||||
|
@ -295,7 +295,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
|
|||
dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --resources-path ../../../components -- node index.js
|
||||
```
|
||||
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). When the binding triggers, the Dapr sidecar calls a matching route in your application via HTTP POST.
|
||||
|
||||
```javascript
|
||||
async function start() {
|
||||
|
@ -498,7 +498,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
|
|||
dapr run --app-id batch-sdk --app-port 7002 --resources-path ../../../components -- dotnet run
|
||||
```
|
||||
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). When the binding triggers, the Dapr sidecar calls a matching route in your application via HTTP POST.
|
||||
|
||||
```csharp
|
||||
app.MapPost("/" + cronBindingName, async () => {
|
||||
|
@ -704,7 +704,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
|
|||
dapr run --app-id batch-sdk --app-port 8080 --resources-path ../../../components -- java -jar target/BatchProcessingService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). When the binding triggers, the Dapr sidecar calls a matching route in your application via HTTP POST.
|
||||
|
||||
```java
|
||||
@PostMapping(path = cronBindingPath, consumes = MediaType.ALL_VALUE)
|
||||
|
@ -911,7 +911,7 @@ Run the `batch-sdk` service alongside a Dapr sidecar.
|
|||
dapr run --app-id batch-sdk --app-port 6002 --dapr-http-port 3502 --dapr-grpc-port 60002 --resources-path ../../../components -- go run .
|
||||
```
|
||||
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your Flask application by the Dapr sidecar.
|
||||
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). When the binding triggers, the Dapr sidecar calls a matching route in your application via HTTP POST.
|
||||
|
||||
```go
|
||||
// Triggered by Dapr input binding
|
||||
|
|
|
@ -64,7 +64,7 @@ pip3 install -r requirements.txt
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
|
||||
dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6001 -- python3 app.py
|
||||
```
|
||||
|
||||
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
|
||||
|
@ -90,7 +90,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
|
|||
Run the `order-processor` service again:
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
|
||||
dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6001 -- python3 app.py
|
||||
```
|
||||
|
||||
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
|
||||
|
@ -187,7 +187,7 @@ npm install
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
|
||||
dapr run --app-id order-processor --resources-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
|
||||
```
|
||||
|
||||
The expected output:
|
||||
|
@ -209,7 +209,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
|
|||
Run the `order-processor` service again:
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
|
||||
dapr run --app-id order-processor --resources-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
|
||||
```
|
||||
|
||||
The app will return the updated configuration values:
|
||||
|
@ -309,7 +309,7 @@ dotnet build
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
|
||||
dapr run --app-id order-processor-http --resources-path ../../../components/ --app-port 7001 -- dotnet run --project .
|
||||
```
|
||||
|
||||
The expected output:
|
||||
|
@ -331,7 +331,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
|
|||
Run the `order-processor` service again:
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
|
||||
dapr run --app-id order-processor-http --resources-path ../../../components/ --app-port 7001 -- dotnet run --project .
|
||||
```
|
||||
|
||||
The app will return the updated configuration values:
|
||||
|
@ -428,7 +428,7 @@ mvn clean install
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
The expected output:
|
||||
|
@ -450,7 +450,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
|
|||
Run the `order-processor` service again:
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
|
||||
```
|
||||
|
||||
The app will return the updated configuration values:
|
||||
|
@ -537,7 +537,7 @@ cd configuration/go/sdk/order-processor
|
|||
Run the `order-processor` service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
|
||||
dapr run --app-id order-processor --app-port 6001 --resources-path ../../../components -- go run .
|
||||
```
|
||||
|
||||
The expected output:
|
||||
|
@ -560,7 +560,7 @@ docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
|
|||
Run the `order-processor` service again:
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
|
||||
dapr run --app-id order-processor --app-port 6001 --resources-path ../../../components -- go run .
|
||||
```
|
||||
|
||||
The app will return the updated configuration values:
|
||||
|
@ -636,4 +636,4 @@ Join the discussion in our [discord channel](https://discord.com/channels/778680
|
|||
- [Go](https://github.com/dapr/quickstarts/tree/master/configuration/go/http)
|
||||
- Learn more about [Configuration building block]({{< ref configuration-api-overview >}})
|
||||
|
||||
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
|
||||
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}
|
||||
|
|
|
@ -56,7 +56,7 @@ pip3 install -r requirements.txt
|
|||
Run the `order-processor` subscriber service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --resources-path ../../../components/ --app-port 5001 -- python3 app.py
|
||||
dapr run --app-id order-processor --resources-path ../../../components/ --app-port 6002 -- python3 app.py
|
||||
```
|
||||
|
||||
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
|
||||
|
@ -273,7 +273,7 @@ dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --resources
|
|||
In the `checkout` publisher service, we're publishing the orderId message to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
|
||||
|
||||
```js
|
||||
const client = new DaprClient(DAPR_HOST, DAPR_HTTP_PORT);
|
||||
const client = new DaprClient();
|
||||
|
||||
await client.pubsub.publish(PUBSUB_NAME, PUBSUB_TOPIC, order);
|
||||
console.log("Published data: " + JSON.stringify(order));
|
||||
|
@ -389,7 +389,7 @@ dotnet build
|
|||
Run the `order-processor` subscriber service alongside a Dapr sidecar.
|
||||
|
||||
```bash
|
||||
dapr run --app-id order-processor --resources-path ../../../components --app-port 7002 -- dotnet run
|
||||
dapr run --app-id order-processor --resources-path ../../../components --app-port 7005 -- dotnet run
|
||||
```
|
||||
|
||||
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
|
||||
|
|
|
@ -298,7 +298,7 @@ Dapr invokes an application on any Dapr instance. In the code, the sidecar progr
|
|||
For this example, you will need:
|
||||
|
||||
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
|
||||
- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
|
||||
- [.NET SDK or .NET 7 SDK installed](https://dotnet.microsoft.com/download).
|
||||
<!-- IGNORE_LINKS -->
|
||||
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
|
||||
<!-- END_IGNORE -->
|
||||
|
|
|
@ -177,29 +177,19 @@ dapr run --app-id order-processor --resources-path ../../../resources/ -- npm ru
|
|||
The `order-processor` service writes, reads, and deletes an `orderId` key/value pair to the `statestore` instance [defined in the `statestore.yaml` component]({{< ref "#statestoreyaml-component-file" >}}). As soon as the service starts, it performs a loop.
|
||||
|
||||
```js
|
||||
const client = new DaprClient(DAPR_HOST, DAPR_HTTP_PORT);
|
||||
const client = new DaprClient()
|
||||
|
||||
// Save state into the state store
|
||||
client.state.save(STATE_STORE_NAME, [
|
||||
{
|
||||
key: orderId.toString(),
|
||||
value: order
|
||||
}
|
||||
]);
|
||||
console.log("Saving Order: ", order);
|
||||
// Save state into a state store
|
||||
await client.state.save(DAPR_STATE_STORE_NAME, state)
|
||||
console.log("Saving Order: ", order)
|
||||
|
||||
// Get state from the state store
|
||||
var result = client.state.get(STATE_STORE_NAME, orderId.toString());
|
||||
result.then(function(val) {
|
||||
console.log("Getting Order: ", val);
|
||||
});
|
||||
|
||||
// Delete state from the state store
|
||||
client.state.delete(STATE_STORE_NAME, orderId.toString());
|
||||
result.then(function(val) {
|
||||
console.log("Deleting Order: ", val);
|
||||
});
|
||||
// Get state from a state store
|
||||
const savedOrder = await client.state.get(DAPR_STATE_STORE_NAME, order.orderId)
|
||||
console.log("Getting Order: ", savedOrd)
|
||||
|
||||
// Delete state from the state store
|
||||
await client.state.delete(DAPR_STATE_STORE_NAME, order.orderId)
|
||||
console.log("Deleting Order: ", order)
|
||||
```
|
||||
### Step 3: View the order-processor outputs
|
||||
|
||||
|
|
|
@ -97,6 +97,12 @@ Expected output:
|
|||
== APP == Workflow Status: Completed
|
||||
```
|
||||
|
||||
### (Optional) Step 4: View in Zipkin
|
||||
|
||||
If you have Zipkin configured for Dapr locally on your machine, you can view the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).
|
||||
|
||||
<img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;">
|
||||
|
||||
### What happened?
|
||||
|
||||
When you ran `dapr run --app-id order-processor dotnet run`:
|
||||
|
|
|
@ -214,7 +214,7 @@ See the [preview features]({{< ref "preview-features.md" >}}) guide for informat
|
|||
|
||||
### Example sidecar configuration
|
||||
|
||||
The following yaml shows an example configuration file that can be applied to an applications' Dapr sidecar.
|
||||
The following YAML shows an example configuration file that can be applied to an application's Dapr sidecar.
|
||||
|
||||
```yml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -266,15 +266,21 @@ There is a single configuration file called `daprsystem` installed with the Dapr
|
|||
|
||||
### Control-plane configuration settings
|
||||
|
||||
A Dapr control plane configuration can configure the following settings:
|
||||
A Dapr control plane configuration contains the following sections:
|
||||
|
||||
- [`mtls`](#mtls-mutual-tls) for mTLS (Mutual TLS)
|
||||
|
||||
### mTLS (Mutual TLS)
|
||||
|
||||
The `mtls` section contains properties for mTLS.
|
||||
|
||||
| Property | Type | Description |
|
||||
|------------------|--------|-------------|
|
||||
| enabled | bool | Set mtls to be enabled or disabled
|
||||
| allowedClockSkew | string | The extra time to give for certificate expiry based on possible clock skew on a machine. Default is 15 minutes.
|
||||
| workloadCertTTL | string | Time a certificate is valid for. Default is 24 hours
|
||||
| `enabled` | bool | If true, enables mTLS for communication between services and apps in the cluster.
|
||||
| `allowedClockSkew` | string | Allowed tolerance when checking the expiration of TLS certificates, to allow for clock skew. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `15m` (15 minutes).
|
||||
| `workloadCertTTL` | string | How long a TLS certificate issued by Dapr is valid for. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `24h` (24 hours).
|
||||
|
||||
See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< ref "security-concept.md" >}}) for more information.
|
||||
See the [mTLS how-to]({{< ref "mtls.md" >}}) and [security concepts]({{< ref "security-concept.md" >}}) for more information.
|
||||
|
||||
### Example control plane configuration
|
||||
|
||||
|
@ -282,7 +288,7 @@ See the [Mutual TLS]({{< ref "mtls.md" >}}) HowTo and [security concepts]({{< re
|
|||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: default
|
||||
name: daprsystem
|
||||
namespace: default
|
||||
spec:
|
||||
mtls:
|
||||
|
|
|
@ -11,7 +11,7 @@ This article provides guidance on running Dapr with Podman on a Windows/Linux/ma
|
|||
## Prerequisites
|
||||
|
||||
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
|
||||
- [Podman](https://podman.io/getting-started/installation.html)
|
||||
- [Podman](https://podman.io/docs/tutorials/installation)
|
||||
|
||||
## Initialize Dapr environment
|
||||
|
||||
|
|
|
@ -142,6 +142,8 @@ First you need to connect Prometheus as a data source to Grafana.
|
|||
- Name: `Dapr`
|
||||
- HTTP URL: `http://dapr-prom-prometheus-server.dapr-monitoring`
|
||||
- Default: On
|
||||
- Skip TLS Verify: On
|
||||
- Necessary in order to save and test the configuration
|
||||
|
||||
<img src="/images/grafana-prometheus-dapr-server-url.png" alt="Screenshot of the Prometheus Data Source configuration" width=600>
|
||||
|
||||
|
|
|
@ -90,7 +90,7 @@ If you are Minikube user or want to disable persistent volume for development pu
|
|||
|
||||
```bash
|
||||
helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring
|
||||
--set alertmanager.persistentVolume.enable=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
|
||||
--set alertmanager.persistence.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false
|
||||
```
|
||||
|
||||
3. Validation
|
||||
|
@ -119,4 +119,4 @@ dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0
|
|||
## References
|
||||
|
||||
* [Prometheus Installation](https://github.com/prometheus-community/helm-charts)
|
||||
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
|
||||
* [Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)
|
||||
|
|
|
@ -0,0 +1,55 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Set up Datadog for distributed tracing"
|
||||
linkTitle: "Datadog"
|
||||
weight: 5000
|
||||
description: "Set up Datadog for distributed tracing"
|
||||
---
|
||||
|
||||
Dapr captures metrics and traces that can be sent directly to Datadog through the OpenTelemetry Collector Datadog exporter.
|
||||
|
||||
## Configure Dapr tracing with the OpenTelemetry Collector and Datadog
|
||||
|
||||
Using the OpenTelemetry Collector Datadog exporter, you can configure Dapr to create traces for each application in your Kubernetes cluster and collect them in Datadog.
|
||||
|
||||
> Before you begin, [set up the OpenTelemetry Collector]({{< ref "open-telemetry-collector.md#setting-opentelemetry-collector" >}}).
|
||||
|
||||
1. Add your Datadog API key to the `./deploy/opentelemetry-collector-generic-datadog.yaml` file in the `datadog` exporter configuration section:
|
||||
```yaml
|
||||
data:
|
||||
otel-collector-config:
|
||||
...
|
||||
exporters:
|
||||
...
|
||||
datadog:
|
||||
api:
|
||||
key: <YOUR_API_KEY>
|
||||
```
|
||||
|
||||
1. Apply the `opentelemetry-collector` configuration by running the following command.
|
||||
|
||||
```sh
|
||||
kubectl apply -f ./deploy/open-telemetry-collector-generic-datadog.yaml
|
||||
```
|
||||
|
||||
1. Set up a Dapr configuration file that will turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.
|
||||
|
||||
```sh
|
||||
kubectl apply -f ./deploy/collector-config.yaml
```
|
||||
|
||||
1. Apply the `appconfig` configuration by adding a `dapr.io/config` annotation to the container that you want to participate in the distributed tracing.
|
||||
|
||||
```yml
|
||||
annotations:
|
||||
dapr.io/config: "appconfig"
|
||||
|
||||
1. Create and configure the application. Once running, telemetry data is sent to Datadog and visible in Datadog APM.
|
||||
|
||||
<img src="/images/datadog-traces.png" width=1200 alt="Datadog APM showing telemetry data.">
|
||||
|
||||
|
||||
## Related Links/References
|
||||
|
||||
* [Complete example of setting up Dapr on a Kubernetes cluster](https://github.com/ericmustin/quickstarts/tree/master/hello-kubernetes)
|
||||
* [Datadog documentation about OpenTelemetry support](https://docs.datadoghq.com/opentelemetry/)
|
||||
* [Datadog Application Performance Monitoring](https://docs.datadoghq.com/tracing/)
|
|
@ -12,12 +12,12 @@ Define timeouts, retries, and circuit breaker policies under `policies`. Each po
|
|||
|
||||
## Timeouts
|
||||
|
||||
Timeouts can be used to early-terminate long-running operations. If you've exceeded a timeout duration:
|
||||
Timeouts are optional policies that can be used to terminate long-running operations early. If you've exceeded a timeout duration:
|
||||
|
||||
- The operation in progress is terminated (if possible).
|
||||
- An error is returned.
|
||||
|
||||
Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`.
|
||||
Valid values are of the form accepted by Go's [time.ParseDuration](https://pkg.go.dev/time#ParseDuration), for example: `15s`, `2m`, `1h30m`. Timeouts have no set maximum value.
|
||||
|
||||
Example:
|
||||
|
||||
|
@ -31,6 +31,8 @@ spec:
|
|||
largeResponse: 10s
|
||||
```
|
||||
|
||||
If you don't specify a timeout value, the policy does not enforce a time and defaults to whatever you set up per the request client.
|
||||
|
||||
## Retries
|
||||
|
||||
With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:
|
||||
|
@ -69,6 +71,8 @@ spec:
|
|||
maxRetries: -1 # Retry indefinitely
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Circuit Breakers
|
||||
|
||||
Circuit Breaker (CB) policies are used when other applications/services/components are experiencing elevated failure rates. CBs monitor the requests and shut off all traffic to the impacted service when certain criteria are met (the "open" state). By doing this, CBs give the service time to recover from its outage instead of flooding it with events. The CB can also allow partial traffic through to see if the system has healed (the "half-open" state). Once requests resume succeeding, the CB returns to the "closed" state and allows traffic to resume completely.
|
||||
|
@ -95,7 +99,7 @@ spec:
|
|||
|
||||
## Overriding default retries
|
||||
|
||||
Dapr provides default retries for certain request failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
|
||||
Dapr provides default retries for any unsuccessful request, such as failures and transient errors. Within a resiliency spec, you have the option to override Dapr's default retry logic by defining policies with reserved, named keywords. For example, defining a policy with the name `DaprBuiltInServiceRetries`, overrides the default retries for failures between sidecars via service-to-service requests. Policy overrides are not applied to specific targets.
|
||||
|
||||
> Note: Although you can override default values with more robust retries, you cannot override with lesser values than the provided default value, or completely remove default retries. This prevents unexpected downtime.
|
||||
|
||||
|
|
|
@ -163,14 +163,14 @@ spec:
|
|||
Watch this video for how to use [resiliency](https://www.youtube.com/watch?t=184&v=7D6HOU3Ms6g&feature=youtu.be):
|
||||
|
||||
<div class="embed-responsive embed-responsive-16by9">
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/sembed/7D6HOU3Ms6g?start=184" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/7D6HOU3Ms6g?start=184" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
|
||||
</div>
|
||||
|
||||
- [Policies]({{< ref "policies.md" >}})
|
||||
- [Targets]({{< ref "targets.md" >}})
|
||||
|
||||
## Next steps
|
||||
|
||||
Learn more about resiliency policies and targets:
|
||||
- [Policies]({{< ref "policies.md" >}})
|
||||
- [Targets]({{< ref "targets.md" >}})
|
||||
Try out one of the Resiliency quickstarts:
|
||||
- [Resiliency: Service-to-service]({{< ref resiliency-serviceinvo-quickstart.md >}})
|
||||
- [Resiliency: State Management]({{< ref resiliency-state-quickstart.md >}})
|
|
@ -34,7 +34,12 @@ The table below shows the versions of Dapr releases that have been tested togeth
|
|||
|
||||
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|
||||
|--------------------|:--------:|:--------|---------|---------|---------|
|
||||
| February 14 2023 | 1.10.0</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported (current) |
|
||||
| April 13 2023 | 1.10.5</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported (current) |
|
||||
| March 16 2023 | 1.10.4</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
| March 14 2023 | 1.10.3</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
| February 24 2023 | 1.10.2</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
| February 20 2023 | 1.10.1</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported |
|
||||
| February 14 2023 | 1.10.0</br> | 1.10.0 | Java 1.8.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.9.0 </br>.NET 1.10.0 </br>JS 2.5.0 | 0.11.0 | Supported|
|
||||
| December 2nd 2022 | 1.9.5</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
|
||||
| November 17th 2022 | 1.9.4</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
|
||||
| November 4th 2022 | 1.9.3</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
|
||||
|
@ -86,15 +91,18 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
|
|||
| | 1.6.0 | 1.6.2 |
|
||||
| | 1.6.2 | 1.7.5 |
|
||||
| | 1.7.5 | 1.8.6 |
|
||||
| | 1.8.6 | 1.9.5 |
|
||||
| | 1.8.6 | 1.9.6 |
|
||||
| | 1.9.6 | 1.10.5 |
|
||||
| 1.6.0 to 1.6.2 | N/A | 1.7.5 |
|
||||
| | 1.7.5 | 1.8.6 |
|
||||
| | 1.8.6 | 1.9.5 |
|
||||
| | 1.8.6 | 1.9.6 |
|
||||
| | 1.9.6 | 1.10.5 |
|
||||
| 1.7.0 to 1.7.5 | N/A | 1.8.6 |
|
||||
| | 1.8.6 | 1.9.5 |
|
||||
| 1.8.0 to 1.8.6 | N/A | 1.9.5 |
|
||||
| 1.9.0 | N/A | 1.9.5 |
|
||||
| 1.10.0 | N/A | 1.10.0 |
|
||||
| | 1.8.6 | 1.9.6 |
|
||||
| | 1.9.6 | 1.10.5 |
|
||||
| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
|
||||
| 1.9.0 | N/A | 1.9.6 |
|
||||
| 1.10.0 | N/A | 1.10.5 |
|
||||
|
||||
## Breaking changes and deprecations
|
||||
|
||||
|
@ -147,6 +155,7 @@ After announcing a future breaking change, the change will happen in 2 releases
|
|||
| GET /v1.0/shutdown API (Users should use [POST API]({{< ref kubernetes-job.md >}}) instead) | 1.2.0 | 1.4.0 |
|
||||
| Java domain builder classes deprecated (Users should use [setters](https://github.com/dapr/java-sdk/issues/587) instead) | Java SDK 1.3.0 | Java SDK 1.5.0 |
|
||||
| Service invocation will no longer provide a default content type header of `application/json` when no content-type is specified. You must explicitly [set a content-type header]({{< ref "service_invocation_api.md#request-contents" >}}) for service invocation if your invoked apps rely on this header. | 1.7.0 | 1.9.0 |
|
||||
| gRPC service invocation using `invoke` method is deprecated. Use proxy mode service invocation instead. See [How-To: Invoke services using gRPC ]({{< ref howto-invoke-services-grpc.md >}}) to use the proxy mode.| 1.9.0 | 1.10.0 |
|
||||
|
||||
## Upgrade on Hosting platforms
|
||||
|
||||
|
|
|
@ -93,7 +93,8 @@ curl http://localhost:3500/v1.0/metadata
|
|||
],
|
||||
"extended": {
|
||||
"cliPID":"1031040",
|
||||
"appCommand":"uvicorn --port 3000 demo_actor_service:app"
|
||||
"appCommand":"uvicorn --port 3000 demo_actor_service:app",
|
||||
"daprRuntimeVersion": "1.10.0"
|
||||
},
|
||||
"components":[
|
||||
{
|
||||
|
|
|
@ -262,10 +262,17 @@ A JSON-encoded payload body with the processing status against each entry needs
|
|||
|
||||
```json
|
||||
{
|
||||
"statuses": {
|
||||
"entryId": "<entryId>",
|
||||
"statuses":
|
||||
[
|
||||
{
|
||||
"entryId": "<entryId1>",
|
||||
"status": "<status>"
|
||||
}
|
||||
},
|
||||
{
|
||||
"entryId": "<entryId2>",
|
||||
"status": "<status>"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
|
|
|
@ -5,9 +5,120 @@ linkTitle: "Workflow API"
|
|||
description: "Detailed documentation on the workflow API"
|
||||
weight: 900
|
||||
---
|
||||
|
||||
Dapr provides the ability to interact with workflows through a built-in `dapr` workflow component.
|
||||
|
||||
## Start workflow request
|
||||
|
||||
Start a workflow instance with the given name and, optionally, an instance ID.
|
||||
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/start[?instanceId=<instanceId>]
|
||||
```
|
||||
|
||||
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
|
||||
|
||||
### URL parameters
|
||||
|
||||
Parameter | Description
|
||||
--------- | -----------
|
||||
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
|
||||
`workflowName` | Identifies the workflow type
|
||||
`instanceId` | (Optional) Unique value created for each run of a specific workflow
|
||||
|
||||
### Request content
|
||||
|
||||
Any request content will be passed to the workflow as input. The Dapr API passes the content as-is without attempting to interpret it.
|
||||
|
||||
### HTTP response codes
|
||||
|
||||
Code | Description
|
||||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in Dapr code or underlying component
|
||||
|
||||
### Response content
|
||||
|
||||
The API call will provide a response similar to this:
|
||||
|
||||
```json
|
||||
{
|
||||
"instanceID": "12345678"
|
||||
}
|
||||
```
|
||||
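As an illustration, here's one way to call this endpoint with curl. The workflow type `OrderProcessingWorkflow` and the input payload are placeholders; substitute a workflow type registered with your application:

```bash
curl -i -X POST \
  "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceId=12345678" \
  -H "Content-Type: application/json" \
  -d '{"input": "example-input"}'
```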
|
||||
## Terminate workflow request
|
||||
|
||||
Terminate a running workflow instance with the given name and instance ID.
|
||||
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
|
||||
```
|
||||
|
||||
### URL parameters
|
||||
|
||||
Parameter | Description
|
||||
--------- | -----------
|
||||
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
|
||||
`instanceId` | Unique value created for each run of a specific workflow
|
||||
|
||||
### HTTP response codes
|
||||
|
||||
Code | Description
|
||||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
|
||||
### Response content
|
||||
|
||||
This API does not return any content.
|
||||
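As an illustration, a curl sketch that terminates the instance started above (the instance ID is a placeholder):

```bash
curl -i -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate"
```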
|
||||
## Get workflow request
|
||||
|
||||
Get information about a given workflow instance.
|
||||
|
||||
```bash
|
||||
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>
|
||||
```
|
||||
|
||||
### URL parameters
|
||||
|
||||
Parameter | Description
|
||||
--------- | -----------
|
||||
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
|
||||
`instanceId` | Unique value created for each run of a specific workflow
|
||||
|
||||
### HTTP response codes
|
||||
|
||||
Code | Description
|
||||
---- | -----------
|
||||
`200` | OK
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
|
||||
### Response content
|
||||
|
||||
The API call will provide a JSON response similar to this:
|
||||
|
||||
```json
|
||||
{
|
||||
"createdAt": "2023-01-12T21:31:13Z",
|
||||
"instanceID": "12345678",
|
||||
"lastUpdatedAt": "2023-01-12T21:31:13Z",
|
||||
"properties": {
|
||||
"property1": "value1",
|
||||
"property2": "value2",
|
||||
},
|
||||
"runtimeStatus": "RUNNING",
|
||||
}
|
||||
```
|
||||
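As an illustration, a curl sketch that fetches the state of the instance used in the earlier examples:

```bash
curl -i "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678"
```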
|
||||
## Component format
|
||||
|
||||
A Dapr `workflow.yaml` component file has the following structure:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
|
@ -20,101 +131,15 @@ spec:
|
|||
- name: <NAME>
|
||||
value: <VALUE>
|
||||
```
|
||||
|
||||
| Setting | Description |
|
||||
| ------- | ----------- |
|
||||
| `metadata.name` | The name of the workflow component. |
|
||||
| `spec/metadata` | Additional metadata parameters specified by workflow component |
|
||||
|
||||
|
||||
|
||||
## Supported workflow methods
|
||||
|
||||
### POST start workflow request
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
|
||||
```
|
||||
### POST terminate workflow request
|
||||
```bash
|
||||
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
|
||||
```
|
||||
### GET workflow request
|
||||
```bash
|
||||
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
|
||||
```
|
||||
|
||||
### URL parameters
|
||||
|
||||
Parameter | Description
|
||||
--------- | -----------
|
||||
`workflowComponentName` | Current default is `dapr` for Dapr Workflows
|
||||
`workflowName` | Identify the workflow type
|
||||
`instanceId` | Unique value created for each run of a specific workflow
|
||||
|
||||
|
||||
### Headers
|
||||
|
||||
As part of the start HTTP request, the caller can optionally include one or more `dapr-workflow-metadata` HTTP request headers. The format of the header value is a list of `{key}={value}` values, similar to the format for HTTP cookie request headers. These key/value pairs are saved in the workflow instance’s metadata and can be made available for search (in cases where the workflow implementation supports this kind of search).
|
||||
|
||||
|
||||
## HTTP responses
|
||||
|
||||
### Response codes
|
||||
|
||||
Code | Description
|
||||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in dapr code or underlying component
|
||||
|
||||
### Examples of response body for each method
|
||||
|
||||
#### POST start workflow response body
|
||||
|
||||
```bash
|
||||
"WFInfo": {
|
||||
"instance_id": "SampleWorkflow"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
#### POST terminate workflow response body
|
||||
|
||||
```bash
|
||||
HTTP/1.1 202 Accepted
|
||||
Server: fasthttp
|
||||
Date: Thu, 12 Jan 2023 21:31:16 GMT
|
||||
Content-Type: application/json
|
||||
Content-Length: 139
|
||||
Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
|
||||
Connection: close
|
||||
```
|
||||
|
||||
|
||||
### GET workflow response body
|
||||
|
||||
```bash
|
||||
HTTP/1.1 202 Accepted
|
||||
Server: fasthttp
|
||||
Date: Thu, 12 Jan 2023 21:31:16 GMT
|
||||
Content-Type: application/json
|
||||
Content-Length: 139
|
||||
Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
|
||||
Connection: close
|
||||
|
||||
{
|
||||
"WFInfo": {
|
||||
"instance_id": "SampleWorkflow"
|
||||
},
|
||||
"start_time": "2023-01-12T21:31:13Z",
|
||||
"metadata": {
|
||||
"status": "Running",
|
||||
"task_queue": "WorkflowSampleQueue"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
However, Dapr comes with a built-in `dapr` workflow component that is built on Dapr Actors. No component file is required to use the built-in Dapr workflow component.
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [Workflow API overview]({{< ref workflow-overview.md >}})
|
||||
- [Route user to workflow patterns ](todo)
|
||||
- [Workflow patterns]({{< ref workflow-patterns.md >}})
|
||||
|
|
|
@ -69,7 +69,7 @@ app.post('/scheduled', async function(req, res){
|
|||
});
|
||||
```
|
||||
|
||||
When running this code, note that the `/scheduled` endpoint is called every five minutes by the Dapr sidecar.
|
||||
When running this code, note that the `/scheduled` endpoint is called every fifteen minutes by the Dapr sidecar.
|
||||
|
||||
|
||||
## Binding support
|
||||
|
|
|
@ -39,6 +39,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|--------------------|:--------:|------------|-----|---------|
|
||||
| endpoint | Y | Output | GraphQL endpoint string. See [here](#url-format) for more details | `"http://localhost:4000/graphql/graphql"` |
|
||||
| header:[HEADERKEY] | N | Output | GraphQL header. Specify the header key in the `name`, and the header value in the `value`. | `"no-cache"` (see above) |
|
||||
| variable:[VARIABLEKEY] | N | Output | GraphQL query variable. Specify the variable name in the `name`, and the variable value in the `value`. | `"123"` (see below) |
|
||||
|
||||
### Endpoint and Header format
|
||||
|
||||
|
@ -65,6 +66,18 @@ Metadata: map[string]string{ "query": `query { users { name } }`},
|
|||
}
|
||||
```
|
||||
|
||||
To use a `query` that requires [query variables](https://graphql.org/learn/queries/#variables), add a key-value pair to the `metadata` map, where each key corresponding to a query variable is the variable name prefixed with `variable:`.
|
||||
|
||||
```golang
|
||||
in := &dapr.InvokeBindingRequest{
	Name:      "example.bindings.graphql",
	Operation: "query",
	Metadata: map[string]string{
		"query":            `query HeroNameAndFriends($episode: string!) { hero(episode: $episode) { name } }`,
		"variable:episode": "JEDI",
	},
}
|
||||
```
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
|
|
|
@ -70,6 +70,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
### S3 Bucket Creation
|
||||
{{< tabs "Minio" "LocalStack" "AWS" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
### Using with Minio
|
||||
|
||||
[Minio](https://min.io/) is a service that exposes local storage as S3-compatible block storage, and it's a popular alternative to S3 especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:
|
||||
|
@ -78,6 +83,70 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet
|
|||
3. The value for `region` is not important; you can set it to `us-east-1`.
|
||||
4. Depending on your environment, you may need to set `disableSSL` to `true` if you're connecting to Minio using a non-secure connection (using the `http://` protocol). If you are using a secure connection (`https://` protocol) but with a self-signed certificate, you may need to set `insecureSSL` to `true`.
|
||||
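Putting these tweaks together, the following is a sketch of an S3 binding component pointed at a local Minio server; the endpoint, bucket name, and access keys are placeholders for your own deployment:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: minio-s3
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: my-bucket
    - name: region
      value: us-east-1          # value is not important for Minio
    - name: endpoint
      value: "http://localhost:9000"
    - name: accessKey
      value: "minioadmin"
    - name: secretKey
      value: "minioadmin"
    - name: disableSSL          # required for non-secure (http://) connections
      value: "true"
```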
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
For local development, you can use the [LocalStack project](https://github.com/localstack/localstack) to emulate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run LocalStack.
|
||||
|
||||
To run LocalStack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following:
|
||||
|
||||
```yaml
|
||||
version: "3.8"
|
||||
|
||||
services:
|
||||
localstack:
|
||||
container_name: "cont-aws-s3"
|
||||
image: localstack/localstack:1.4.0
|
||||
ports:
|
||||
- "127.0.0.1:4566:4566"
|
||||
environment:
|
||||
- DEBUG=1
|
||||
- DOCKER_HOST=unix:///var/run/docker.sock
|
||||
volumes:
|
||||
- "<PATH>/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh" # init hook
|
||||
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
|
||||
- "/var/run/docker.sock:/var/run/docker.sock"
|
||||
```
|
||||
|
||||
To use the S3 component, you need to use an existing bucket. The example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) to set up the bucket.
|
||||
|
||||
To use LocalStack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: aws-s3
  namespace: default
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
    - name: bucket
      value: conformance-test-docker
    - name: endpoint
      value: "http://localhost:4566"
    - name: accessKey
      value: "my-access"
    - name: secretKey
      value: "my-secret"
    - name: region
      value: "us-east-1"
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
To use the S3 component, you need to use an existing bucket. Follow the [AWS documentation for creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
|
|
@ -11,10 +11,11 @@ WebAssembly is a way to safely run code compiled in other languages. Runtimes
|
|||
execute WebAssembly Modules (Wasm), which are most often binaries with a `.wasm`
|
||||
extension.
|
||||
|
||||
The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to rewrite a
|
||||
request URI with custom logic compiled to a Wasm binary. In other words, you
|
||||
can extend Dapr using external files that are not pre-compiled into the `daprd`
|
||||
binary. Dapr embeds [wazero](https://wazero.io) to accomplish this without CGO.
|
||||
The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to manipulate
|
||||
an incoming request or serve a response with custom logic compiled to a Wasm
|
||||
binary. In other words, you can extend Dapr using external files that are not
|
||||
pre-compiled into the `daprd` binary. Dapr embeds [wazero](https://wazero.io)
|
||||
to accomplish this without CGO.
|
||||
|
||||
Wasm modules are loaded from a filesystem path. On Kubernetes, see [mounting
|
||||
volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}}) to configure
|
||||
|
@ -28,27 +29,21 @@ kind: Component
|
|||
metadata:
|
||||
name: wasm
|
||||
spec:
|
||||
type: middleware.http.wasm.basic
|
||||
type: middleware.http.wasm
|
||||
version: v1
|
||||
metadata:
|
||||
- name: path
|
||||
value: "./hello.wasm"
|
||||
- name: poolSize
|
||||
value: 1
|
||||
value: "./router.wasm"
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
Minimally, a user must specify a Wasm binary that contains the custom logic
|
||||
used to rewrite requests. An instance of the Wasm binary is not safe to use
|
||||
concurrently. The below configuration fields control both the binary to
|
||||
instantiate and how large an instance pool to use. A larger pool allows higher
|
||||
concurrency while consuming more memory.
|
||||
Minimally, a user must specify a Wasm binary that implements the [http-handler](https://http-wasm.io/http-handler/) ABI. How to compile this is described later.
|
||||
|
||||
| Field | Details | Required | Example |
|
||||
|----------|----------------------------------------------------------------|----------|----------------|
|
||||
| path | A relative or absolute path to the Wasm binary to instantiate. | true | "./hello.wasm" |
|
||||
| poolSize | Number of concurrent instances of the Wasm binary. Default: 10 | false | 1 |
|
||||
|
||||
## Dapr configuration
|
||||
|
||||
|
@ -64,7 +59,60 @@ spec:
|
|||
httpPipeline:
|
||||
handlers:
|
||||
- name: wasm
|
||||
type: middleware.http.wasm.basic
|
||||
type: middleware.http.wasm
|
||||
```
|
||||
|
||||
*Note*: WebAssembly middleware uses more resources than native middleware. This can cause you to hit resource constraints faster than the same logic would in native code. Production usage should [control max concurrency]({{< ref control-concurrency.md >}}).
|
||||
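One way to do that, sketched from the linked guide, is the max-concurrency annotation on the Dapr sidecar; the value `1` is only an example and should be tuned for your workload:

```yaml
annotations:
  dapr.io/app-max-concurrency: "1"
```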
|
||||
### Generating Wasm
|
||||
|
||||
This component lets you manipulate an incoming request or serve a response with
|
||||
custom logic compiled using the [http-handler](https://http-wasm.io/http-handler/)
|
||||
Application Binary Interface (ABI). The `handle_request` function receives an
|
||||
incoming request and can manipulate it or serve a response as necessary.
|
||||
|
||||
To compile your Wasm, you must compile source using an http-handler-compliant guest SDK such as [TinyGo](https://github.com/http-wasm/http-wasm-guest-tinygo).
|
||||
|
||||
Here's an example in TinyGo:
|
||||
|
||||
```go
|
||||
package main

import (
	"strings"

	"github.com/http-wasm/http-wasm-guest-tinygo/handler"
	"github.com/http-wasm/http-wasm-guest-tinygo/handler/api"
)

func main() {
	handler.HandleRequestFn = handleRequest
}

// handleRequest implements a simple HTTP router.
func handleRequest(req api.Request, resp api.Response) (next bool, reqCtx uint32) {
	// If the URI starts with /host, trim it and dispatch to the next handler.
	if uri := req.GetURI(); strings.HasPrefix(uri, "/host") {
		req.SetURI(uri[5:])
		next = true // proceed to the next handler on the host.
		return
	}

	// Serve a static response
	resp.Headers().Set("Content-Type", "text/plain")
	resp.Body().WriteString("hello")
	return // skip the next handler, as we wrote a response.
}
|
||||
```
|
||||
|
||||
If using TinyGo, compile as shown below and set the spec metadata field named "path" to the location of the output (for example, "router.wasm"):
|
||||
|
||||
```bash
|
||||
tinygo build -o router.wasm -scheduler=none --no-debug -target=wasi router.go
|
||||
```
|
||||
|
||||
### Generating Wasm
|
||||
|
@ -108,4 +156,4 @@ tinygo build -o example.wasm -scheduler=none --no-debug -target=wasi example.go
|
|||
- [Middleware]({{< ref middleware.md >}})
|
||||
- [Configuration concept]({{< ref configuration-concept.md >}})
|
||||
- [Configuration overview]({{< ref configuration-overview.md >}})
|
||||
- [waPC protocol](https://wapc.io/docs/spec/)
|
||||
- [Control max concurrency]({{< ref control-concurrency.md >}})
|
||||
|
|
|
@ -82,7 +82,7 @@ The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref ku
|
|||
|
||||
Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the `authRequired` field has
|
||||
been deprecated from the v1.6 release and instead the `authType` field should be used. If `authRequired` is set to `true`, Dapr will attempt to configure `authType` correctly
|
||||
based on the value of `saslPassword`. There are four valid values for `authType`: `none`, `password`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
|
||||
based on the value of `saslPassword`. There are five valid values for `authType`: `none`, `password`, `certificate`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka.
|
||||
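For instance, here is a sketch of the `spec.metadata` entries for SASL password authentication; the values are placeholders, and the field names match the example later on this page:

```yaml
  - name: authType      # one of: none, password, certificate, mtls, oidc
    value: "password"
  - name: saslUsername  # required when authType is "password"
    value: "adminuser"
  - name: saslPassword  # prefer a secretKeyRef over a plain value
    secretKeyRef:
      name: kafka-secrets
      key: saslPasswordSecret
```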
|
||||
#### None
|
||||
|
||||
|
@ -275,17 +275,11 @@ spec:
|
|||
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
|
||||
value: "my-dapr-app-id"
|
||||
- name: authType # Required.
|
||||
value: "password"
|
||||
- name: saslUsername # Required if authType is `password`.
|
||||
value: "adminuser"
|
||||
value: "certificate"
|
||||
- name: consumeRetryInterval # Optional.
|
||||
value: 200ms
|
||||
- name: version # Optional.
|
||||
value: 0.10.2.0
|
||||
- name: saslPassword # Required if authRequired is `true`.
|
||||
secretKeyRef:
|
||||
name: kafka-secrets
|
||||
key: saslPasswordSecret
|
||||
- name: maxMessageBytes # Optional.
|
||||
value: 1024
|
||||
- name: caCert # Certificate authority certificate.
|
||||
|
|
|
@ -25,6 +25,10 @@ spec:
|
|||
value: service_account
|
||||
- name: projectId
|
||||
value: <PROJECT_ID> # replace
|
||||
- name: endpoint # Optional.
|
||||
value: "http://localhost:8085"
|
||||
- name: consumerID # Optional - defaults to the app's own ID
|
||||
value: <CONSUMER_ID>
|
||||
- name: identityProjectId
|
||||
value: <IDENTITY_PROJECT_ID> # replace
|
||||
- name: privateKeyId
|
||||
|
@ -46,11 +50,17 @@ spec:
|
|||
- name: disableEntityManagement
|
||||
value: "false"
|
||||
- name: enableMessageOrdering
|
||||
value: "false"
|
||||
value: "false"
|
||||
- name: orderingKey # Optional
|
||||
value: <ORDERING_KEY>
|
||||
- name: maxReconnectionAttempts # Optional
|
||||
value: 30
|
||||
- name: connectionRecoveryInSec # Optional
|
||||
value: 2
|
||||
- name: deadLetterTopic # Optional
|
||||
value: <EXISTING_PUBSUB_TOPIC>
|
||||
- name: maxDeliveryAttempts # Optional
|
||||
value: 5
|
||||
```
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
|
@ -60,8 +70,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| type | N | GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`
|
||||
| projectId | Y | GCP project id| `myproject-123`
|
||||
| endpoint | N | GCP endpoint for the component to use. Only used for local development (for example) with [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"`
|
||||
| `consumerID` | N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID |
|
||||
| identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"`
|
||||
| privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"`
|
||||
| privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`
|
||||
|
@ -73,18 +84,78 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com`
|
||||
| disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| enableMessageOrdering | N | When set to `"true"`, subscribed messages will be received in order, depending on publishing and permissions configuration. | `"true"`, `"false"`
|
||||
| orderingKey | N | The key provided in the request. It's used when `enableMessageOrdering` is set to `true` to order messages based on that key. | "my-orderingkey"
|
||||
| maxReconnectionAttempts | N |Defines the maximum number of reconnect attempts. Default: `30` | `30`
|
||||
| connectionRecoveryInSec | N |Time in seconds to wait between connection recovery attempts. Default: `2` | `2`
|
||||
| deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic **must** exist before using this component. | `"myapp-dlq"`
|
||||
| maxDeliveryAttempts | N | Maximum number of attempts to deliver the message. If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximum number of attempts for failed processing of messages. Once that number is reached, the message will be moved to the dead-letter topic. Default: `5` | `5`
|
||||
| type | N | **DEPRECATED** GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`
|
||||
|
||||
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly.
|
||||
{{% /alert %}}
|
||||
|
||||
## GCP Credentials
|
||||
|
||||
Since the GCP Pub/Sub component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
|
||||
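For local development, one common way to provide Application Default Credentials is through the gcloud CLI (a sketch; assumes the Google Cloud SDK is installed):

```bash
gcloud auth application-default login
```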
|
||||
## Setup GCP Pub/Sub
|
||||
|
||||
{{< tabs "Self-Hosted" "GCP" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
For local development, the [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator) is used to test the GCP Pub/Sub Component. Follow [these instructions](https://cloud.google.com/pubsub/docs/emulator#start) to run the GCP Pub/Sub Emulator.
|
||||
|
||||
To run the GCP Pub/Sub Emulator locally using Docker, use the following `docker-compose.yaml`:
|
||||
|
||||
```yaml
|
||||
version: '3'
services:
  pubsub:
    image: gcr.io/google.com/cloudsdktool/cloud-sdk:422.0.0-emulators
    ports:
      - "8085:8085"
    container_name: gcp-pubsub
    entrypoint: gcloud beta emulators pubsub start --project local-test-prj --host-port 0.0.0.0:8085
|
||||
|
||||
```
|
||||
|
||||
In order to use the GCP Pub/Sub Emulator with your pub/sub binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against the GCP Production API.
|
||||
|
||||
The **projectId** attribute must match the `--project` used in either the `docker-compose.yaml` or Docker command.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcp-pubsub
spec:
  type: pubsub.gcp.pubsub
  version: v1
  metadata:
    - name: projectId
      value: "local-test-prj"
    - name: consumerID
      value: "testConsumer"
    - name: endpoint
      value: "localhost:8085"
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
You can use either "explicit" or "implicit" credentials to configure access to your GCP pubsub instance. If using explicit credentials, most fields are required. Implicit credentials rely on Dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) which has the necessary permissions to access pubsub. In implicit mode, only the `projectId` attribute is needed; all others are optional.
|
||||
|
||||
Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up a Google Cloud Pub/Sub system.
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
|
|
|
@ -77,7 +77,7 @@ spec:
|
|||
|
||||
### Enabling message delivery retries
|
||||
|
||||
The Pulsar pub/sub component has no built-in support for retry strategies. This means that sidecar sends a message to the service only once and is not retried in case of failures. To make Dapr use more spohisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the MQTT pub/sub component. Note that it will be the same Dapr sidecar retrying the redelivery the message to the same app instance and not other instances.
|
||||
The Pulsar pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once; it is not retried in case of failures. To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the Pulsar pub/sub component. Note that it will be the same Dapr sidecar retrying delivery of the message to the same app instance, not other instances.
|
||||
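For example, here is a sketch of such a resiliency policy; the policy and component names are placeholders, and the retry parameters should be tuned for your workload (see the linked resiliency documentation for the authoritative schema):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: pulsar-retries
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10
  targets:
    components:
      pulsar-pubsub:   # the name of your Pulsar pub/sub component
        inbound:
          retry: pubsubRetry
```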
|
||||
### Delay queue
|
||||
|
||||
|
|
|
@ -18,8 +18,16 @@ spec:
|
|||
type: pubsub.rabbitmq
|
||||
version: v1
|
||||
metadata:
|
||||
- name: host
|
||||
- name: connectionString
|
||||
value: "amqp://localhost:5672"
|
||||
- name: protocol
|
||||
value: amqp
|
||||
- name: hostname
|
||||
value: localhost
|
||||
- name: username
|
||||
value: username
|
||||
- name: password
|
||||
value: password
|
||||
- name: consumerID
|
||||
value: myapp
|
||||
- name: durable
|
||||
|
@ -48,6 +56,8 @@ spec:
|
|||
value: 10485760
|
||||
- name: exchangeKind
|
||||
value: fanout
|
||||
- name: saslExternal
|
||||
value: false
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
@ -58,7 +68,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| host | Y | Connection-string for the rabbitmq host | `amqp://user:pass@localhost:5672`
|
||||
| connectionString | Y* | The RabbitMQ connection string. *Mutually exclusive with the protocol, hostname, username, and password fields | `amqp://user:pass@localhost:5672` |
|
||||
| protocol | N* | The RabbitMQ protocol. *Mutually exclusive with the connectionString field | `amqp` |
|
||||
| hostname | N* | The RabbitMQ hostname. *Mutually exclusive with the connectionString field | `localhost` |
|
||||
| username | N* | The RabbitMQ username. *Mutually exclusive with the connectionString field | `username` |
|
||||
| password | N* | The RabbitMQ password. *Mutually exclusive with the connectionString field | `password` |
|
||||
| consumerID | N | Consumer ID (a.k.a. consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; i.e., a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. |
|
||||
| durable | N | Whether or not to use [durable](https://www.rabbitmq.com/queues.html#durability) queues. Defaults to `"false"` | `"true"`, `"false"`
|
||||
| deletedWhenUnused | N | Whether or not the queue should be configured to [auto-delete](https://www.rabbitmq.com/queues.html). Defaults to `"true"` | `"true"`, `"false"`
|
||||
|
@ -73,6 +87,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| maxLen | N | The maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1000"` |
|
||||
| maxLenBytes | N | Maximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1048576"` |
|
||||
| exchangeKind | N | Exchange kind of the rabbitmq exchange. Defaults to `"fanout"`. | `"fanout"`,`"topic"` |
|
||||
| saslExternal | N | With TLS, should the username be taken from an additional field (e.g. CN). See [RabbitMQ Authentication Mechanisms](https://www.rabbitmq.com/access-control.html#mechanisms). Defaults to `"false"`. | `"true"`, `"false"` |
|
||||
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
|
||||
|
@ -121,6 +136,8 @@ spec:
|
|||
value: 10485760
|
||||
- name: exchangeKind
|
||||
value: fanout
|
||||
- name: saslExternal
|
||||
value: false
|
||||
- name: caCert
|
||||
value: ${{ myLoadedCACert }}
|
||||
- name: clientCert
|
||||
|
|
|
@ -9,9 +9,10 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To setup Azure Key Vault secret store create a component of type `secretstores.azure.keyvault`. See [this guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secretstore configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
|
||||
|
||||
See also [configure the component](#configure-the-component) guide in this page.
|
||||
To set up the Azure Key Vault secret store, create a component of type `secretstores.azure.keyvault`.
|
||||
- See [the secret store components guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secret store configuration.
|
||||
- See [the guide on referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
|
||||
- See [the Configure the component section](#configure-the-component) below.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -37,7 +38,10 @@ spec:
|
|||
|
||||
## Authenticating with Azure AD
|
||||
|
||||
The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component, make sure you've read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document and created an Azure AD application (also called Service Principal). Alternatively, make sure you have created a managed identity for your application platform.
|
||||
The Azure Key Vault secret store component supports authentication with Azure AD only. Before you enable this component:
|
||||
1. Read the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
|
||||
1. Create an Azure AD application (also called Service Principal).
|
||||
1. Alternatively, create a managed identity for your application platform.
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
|
@ -49,20 +53,21 @@ The Azure Key Vault secret store component supports authentication with Azure AD
|
|||
|
||||
Additionally, you must provide the authentication fields as explained in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
|
||||
|
||||
## Example: Create an Azure Key Vault and authorize a Service Principal
|
||||
## Example
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Azure Subscription
|
||||
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
|
||||
- [jq](https://stedolan.github.io/jq/download/)
|
||||
- The scripts below are optimized for a bash or zsh shell
|
||||
- You are using a bash or zsh shell
|
||||
- You've created an Azure AD application (Service Principal) per the instructions in [Authenticating to Azure]({{< ref authenticating-azure.md >}}). You will need the following values:
|
||||
|
||||
Make sure you have followed the steps in the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document to create an Azure AD application (also called Service Principal). You will need the following values:
|
||||
| Value | Description |
|
||||
| ----- | ----------- |
|
||||
| `SERVICE_PRINCIPAL_ID` | The ID of the Service Principal that you created for a given application |
|
||||
|
||||
- `SERVICE_PRINCIPAL_ID`: the ID of the Service Principal that you created for a given application
|
||||
|
||||
### Steps
|
||||
### Create an Azure Key Vault and authorize a Service Principal
|
||||
|
||||
1. Set a variable with the Service Principal that you created:
|
||||
|
||||
|
@ -70,7 +75,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
|
|||
SERVICE_PRINCIPAL_ID="[your_service_principal_object_id]"
|
||||
```
|
||||
|
||||
2. Set a variable with the location where to create all resources:
|
||||
1. Set a variable with the location in which to create all resources:
|
||||
|
||||
```sh
|
||||
LOCATION="[your_location]"
|
||||
|
@ -78,7 +83,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
|
|||
|
||||
(You can get the full list of options with: `az account list-locations --output tsv`)
|
||||
|
||||
3. Create a Resource Group, giving it any name you'd like:
|
||||
1. Create a Resource Group, giving it any name you'd like:
|
||||
|
||||
```sh
|
||||
RG_NAME="[resource_group_name]"
|
||||
|
@ -88,7 +93,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
|
|||
| jq -r .id)
|
||||
```
|
||||
|
||||
4. Create an Azure Key Vault (that uses Azure RBAC for authorization):
|
||||
1. Create an Azure Key Vault that uses Azure RBAC for authorization:
|
||||
|
||||
```sh
|
||||
KEYVAULT_NAME="[key_vault_name]"
|
||||
|
@ -99,7 +104,7 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
|
|||
--location "${LOCATION}"
|
||||
```
|
||||
|
||||
5. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
|
||||
1. Using RBAC, assign a role to the Azure AD application so it can access the Key Vault.
|
||||
In this case, assign the "Key Vault Secrets User" role, which has the "Get secrets" permission over Azure Key Vault.
|
||||
|
||||
```sh
|
||||
|
@ -109,15 +114,17 @@ Make sure you have followed the steps in the [Authenticating to Azure]({{< ref a
|
|||
--scope "${RG_ID}/providers/Microsoft.KeyVault/vaults/${KEYVAULT_NAME}"
|
||||
```
|
||||
|
||||
Other less restrictive roles like "Key Vault Secrets Officer" and "Key Vault Administrator" can be used as well, depending on your application. For more information about Azure built-in roles for Key Vault see the [Microsoft docs](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
|
||||
Other less restrictive roles, like "Key Vault Secrets Officer" and "Key Vault Administrator", can be used, depending on your application. [See Microsoft Docs for more information about Azure built-in roles for Key Vault](https://docs.microsoft.com/azure/key-vault/general/rbac-guide?tabs=azure-cli#azure-built-in-roles-for-key-vault-data-plane-operations).
|
||||
|
||||
## Configure the component
|
||||
### Configure the component
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory, filling in with the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
|
||||
#### Using a client secret
|
||||
|
||||
To use a **client secret**, create a file called `azurekeyvault.yaml` in the components directory. Use the following template, filling in [the Azure AD application you created]({{< ref authenticating-azure.md >}}):
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -138,7 +145,9 @@ spec:
|
|||
value : "[your_client_secret]"
|
||||
```
|
||||
|
||||
If you want to use a **certificate** saved on the local disk, instead, use this template, filling in with details of the Azure AD application that you created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document:
|
||||
#### Using a certificate
|
||||
|
||||
If you want to use a **certificate** saved on the local disk instead, use the following template. Fill in the details of [the Azure AD application you created]({{< ref authenticating-azure.md >}}):
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -161,9 +170,9 @@ spec:
|
|||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. You will need the details of the Azure AD application that was created following the [Authenticating to Azure]({{< ref authenticating-azure.md >}}) document.
|
||||
In Kubernetes, you store the client secret or the certificate into the Kubernetes Secret Store and then refer to those in the YAML file. Before you start, you need the details of [the Azure AD application you created]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
To use a **client secret**:
|
||||
#### Using a client secret
|
||||
|
||||
1. Create a Kubernetes secret using the following command:
|
||||
|
||||
|
@ -176,7 +185,7 @@ To use a **client secret**:
|
|||
- `[your_k8s_secret_key]` is secret key in the Kubernetes secret store
|
||||
|
||||
|
||||
2. Create an `azurekeyvault.yaml` component file.
|
||||
1. Create an `azurekeyvault.yaml` component file.
|
||||
|
||||
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the client secret stored in the Kubernetes secret store.
|
||||
|
||||
|
@ -203,13 +212,13 @@ To use a **client secret**:
|
|||
secretStore: kubernetes
|
||||
```
|
||||
|
||||
3. Apply the `azurekeyvault.yaml` component:
|
||||
1. Apply the `azurekeyvault.yaml` component:
|
||||
|
||||
```bash
|
||||
kubectl apply -f azurekeyvault.yaml
|
||||
```
|
||||
|
||||
To use a **certificate**:
|
||||
#### Using a certificate
|
||||
|
||||
1. Create a Kubernetes secret using the following command:
|
||||
|
||||
|
@ -221,7 +230,7 @@ To use a **certificate**:
|
|||
- `[your_k8s_secret_name]` is secret name in the Kubernetes secret store
|
||||
- `[your_k8s_secret_key]` is secret key in the Kubernetes secret store
|
||||
|
||||
2. Create an `azurekeyvault.yaml` component file.
|
||||
1. Create an `azurekeyvault.yaml` component file.
|
||||
|
||||
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the certificate stored in the Kubernetes secret store.
|
||||
|
||||
|
@ -248,16 +257,16 @@ To use a **certificate**:
|
|||
secretStore: kubernetes
|
||||
```
|
||||
|
||||
3. Apply the `azurekeyvault.yaml` component:
|
||||
1. Apply the `azurekeyvault.yaml` component:
|
||||
|
||||
```bash
|
||||
kubectl apply -f azurekeyvault.yaml
|
||||
```
|
||||
|
||||
To use **Azure managed identity**:
|
||||
#### Using Azure managed identity
|
||||
|
||||
1. Ensure your AKS cluster has managed identity enabled and follow the [guide for using managed identities](https://docs.microsoft.com/azure/aks/use-managed-identity).
|
||||
2. Create an `azurekeyvault.yaml` component file.
|
||||
1. Create an `azurekeyvault.yaml` component file.
|
||||
|
||||
The component yaml refers to a particular KeyVault name. The managed identity you will use in a later step must be given read access to this particular KeyVault instance.
|
||||
|
||||
|
@ -274,12 +283,23 @@ To use **Azure managed identity**:
|
|||
value: "[your_keyvault_name]"
|
||||
```
|
||||
|
||||
3. Apply the `azurekeyvault.yaml` component:
|
||||
1. Apply the `azurekeyvault.yaml` component:
|
||||
|
||||
```bash
|
||||
kubectl apply -f azurekeyvault.yaml
|
||||
```
|
||||
4. Create and use a managed identity / pod identity by following [this guide](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity). After creating an AKS pod identity, [give this identity read permissions on your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-cli#assign-the-access-policy), and finally in your application deployment inject the pod identity via a label annotation:
|
||||
1. Create and assign a managed identity at the pod-level via either:
|
||||
- [Azure AD workload identity](https://learn.microsoft.com/azure/aks/workload-identity-overview) (preferred method)
|
||||
- [Azure AD pod identity](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#create-a-pod-identity)
|
||||
|
||||
|
||||
**Important**: While both Azure AD pod identity and workload identity are in preview, Azure AD workload identity is the one planned for general availability (stable state).
|
||||
|
||||
1. After creating a workload identity, give it `read` permissions:
|
||||
- [On your desired KeyVault instance](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-cli#assign-the-access-policy)
|
||||
- In your application deployment, injecting the pod identity both:
|
||||
- Via a label annotation
|
||||
- By specifying the Kubernetes service account associated with the desired workload identity
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
|
@ -290,6 +310,12 @@ To use **Azure managed identity**:
|
|||
aadpodidbinding: $POD_IDENTITY_NAME
|
||||
```
|
||||
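With workload identity, a sketch of the equivalent injection looks like the following; the service account name is a placeholder and must already be federated with your Azure identity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mydaprdemoapp
  labels:
    azure.workload.identity/use: "true"        # opts the pod into workload identity
spec:
  serviceAccountName: my-workload-identity-sa  # KSA federated with the Azure identity
  containers:
    - name: app
      image: myregistry/mydaprdemoapp:latest
```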
|
||||
#### Using Azure managed identity directly vs. via Azure AD workload identity
|
||||
|
||||
When using **managed identity directly**, you can have multiple identities associated with an app, requiring `azureClientId` to specify which identity should be used.
|
||||
|
||||
However, when using **managed identity via Azure AD workload identity**, `azureClientId` is not necessary and has no effect. The Azure identity to be used is inferred from the service account tied to an Azure identity via the Azure federated identity.
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
|
|
@ -11,6 +11,7 @@ aliases:
|
|||
|
||||
Create a file called `cockroachdb.yaml`, paste the following, and replace the `<CONNECTION STRING>` value with your connection string. The connection string for CockroachDB follows the same standard as PostgreSQL connection strings. For example, `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`. See the CockroachDB [documentation on database connections](https://www.cockroachlabs.com/docs/stable/connect-to-the-database.html) for information on how to define a connection string.
|
||||
|
||||
If you want to also configure CockroachDB to store actors, add the `actorStateStore` option as in the example below.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -21,16 +22,44 @@ spec:
|
|||
  type: state.cockroachdb
  version: v1
  metadata:
    # Connection string
    - name: connectionString
      value: "<CONNECTION STRING>"
    # Timeout for database operations, in seconds (optional)
    #- name: timeoutInSeconds
    #  value: 20
    # Name of the table where to store the state (optional)
    #- name: tableName
    #  value: "state"
    # Name of the table where to store metadata used by Dapr (optional)
    #- name: metadataTableName
    #  value: "dapr_metadata"
    # Cleanup interval in seconds, to remove expired rows (optional)
    #- name: cleanupIntervalInSeconds
    #  value: 3600
    # Max idle time for connections before they're closed (optional)
    #- name: connectionMaxIdleTime
    #  value: 0
    # Uncomment this if you wish to use CockroachDB as a state store for actors (optional)
    #- name: actorStateStore
    #  value: "true"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| connectionString | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
|
||||
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
|
||||
| `connectionString` | Y | The connection string for CockroachDB | `"host=localhost user=root port=26257 connect_timeout=10 database=dapr_test"`
|
||||
| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20` | `30`
|
||||
| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. Can optionally have the schema name as prefix, such as `public.state` | `"state"`, `"public.state"`
|
||||
| `metadataTableName` | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. Can optionally have the schema name as prefix, such as `public.dapr_metadata` | `"dapr_metadata"`, `"public.dapr_metadata"`
|
||||
| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
|
||||
| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
|
||||
| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
|
||||
|
||||
|
||||
## Setup CockroachDB
|
||||
|
@ -62,6 +91,19 @@ The easiest way to install CockroachDB on Kubernetes is by using the [CockroachD
|
|||
|
||||
{{% /tabs %}}
|
||||
|
||||
## Advanced
|
||||
|
||||
### TTLs and cleanups
|
||||
|
||||
This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
|
||||
|
||||
Because CockroachDB doesn't have built-in support for TTLs, you implement this in Dapr by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
|
||||
|
||||
You can set the interval for the deletion of expired records with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
|
||||
|
||||
- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value - for example, `300` (300 seconds, or 5 minutes).
|
||||
- If you do not plan to use TTLs with Dapr and the CockroachDB state store, you should consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database.
|
||||
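As an illustration, here is a sketch of saving a record with a two-minute TTL through the state API; the component name `cockroachdb-store`, the key, and the value are placeholders:

```bash
curl -X POST http://localhost:3500/v1.0/state/cockroachdb-store \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "order-1",
          "value": { "status": "pending" },
          "metadata": { "ttlInSeconds": "120" }
        }
      ]'
```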
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
|
||||
|
|
|
@ -21,30 +21,32 @@ spec:
|
|||
type: state.gcp.firestore
|
||||
version: v1
|
||||
metadata:
|
||||
- name: type
|
||||
value: <REPLACE-WITH-CREDENTIALS-TYPE> # Required. Example: "serviceaccount"
|
||||
- name: project_id
|
||||
value: <REPLACE-WITH-PROJECT-ID> # Required.
|
||||
- name: endpoint # Optional.
|
||||
value: "http://localhost:8432"
|
||||
- name: private_key_id
|
||||
value: <REPLACE-WITH-PRIVATE-KEY-ID> # Required.
|
||||
value: <REPLACE-WITH-PRIVATE-KEY-ID> # Optional.
|
||||
- name: private_key
|
||||
value: <REPLACE-WITH-PRIVATE-KEY> # Required.
|
||||
value: <REPLACE-WITH-PRIVATE-KEY> # Optional, but Required if `private_key_id` is specified.
|
||||
- name: client_email
|
||||
value: <REPLACE-WITH-CLIENT-EMAIL> # Required.
|
||||
value: <REPLACE-WITH-CLIENT-EMAIL> # Optional, but Required if `private_key_id` is specified.
|
||||
- name: client_id
|
||||
value: <REPLACE-WITH-CLIENT-ID> # Required.
|
||||
value: <REPLACE-WITH-CLIENT-ID> # Optional, but Required if `private_key_id` is specified.
|
||||
- name: auth_uri
|
||||
value: <REPLACE-WITH-AUTH-URI> # Required.
|
||||
value: <REPLACE-WITH-AUTH-URI> # Optional.
|
||||
- name: token_uri
|
||||
value: <REPLACE-WITH-TOKEN-URI> # Required.
|
||||
value: <REPLACE-WITH-TOKEN-URI> # Optional.
|
||||
- name: auth_provider_x509_cert_url
|
||||
value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Required.
|
||||
value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Optional.
|
||||
- name: client_x509_cert_url
|
||||
value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Required.
|
||||
value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Optional.
|
||||
- name: entity_kind
|
||||
value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
|
||||
- name: noindex
|
||||
value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"
|
||||
- name: type
|
||||
value: <REPLACE-WITH-CREDENTIALS-TYPE> # Deprecated.
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
@ -55,17 +57,23 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| type | Y | The credentials type | `"serviceaccount"`
|
||||
| project_id | Y | The ID of the GCP project to use | `"project-id"`
|
||||
| private_key_id | Y | The ID of the private key to use | `"private-key-id"`
|
||||
| client_email | Y | The email address for the client | `"example@example.com"`
|
||||
| client_id | Y | The client id value to use for authentication | `"client-id"`
|
||||
| auth_uri | Y | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"`
|
||||
| token_uri | Y | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"`
|
||||
| auth_provider_x509_cert_url | Y | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
|
||||
| client_x509_cert_url | Y | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
|
||||
| endpoint | N | GCP endpoint for the component to use. Only used for local development with (for example) [GCP Datastore Emulator](https://cloud.google.com/datastore/docs/tools/datastore-emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"localhost:8432"`
|
||||
| private_key_id | N | The ID of the private key to use | `"private-key-id"`
|
||||
| private_key | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`
|
||||
| client_email | N | The email address for the client | `"example@example.com"`
|
||||
| client_id | N | The client id value to use for authentication | `"client-id"`
|
||||
| auth_uri | N | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"`
|
||||
| token_uri | N | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"`
|
||||
| auth_provider_x509_cert_url | N | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
|
||||
| client_x509_cert_url | N | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
|
||||
| entity_kind | N | The entity name in Firestore. Defaults to `"DaprState"` | `"DaprState"`
|
||||
| noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to `"false"` | `"true"`
|
||||
| type | N | **DEPRECATED** The credentials type | `"serviceaccount"`
|
||||
|
||||
|
||||
## GCP Credentials
|
||||
Since the GCP Firestore component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
|
||||
|
||||
## Setup GCP Firestore
|
||||
|
||||
|
@ -74,7 +82,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
{{% codetab %}}
|
||||
You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator).
|
||||
|
||||
You can then interact with the server using `localhost:8081`.
|
||||
You can then interact with the server using `http://localhost:8432`.
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
|
|
@ -34,7 +34,7 @@ spec:
|
|||
value: # Optional
|
||||
- name: maxRetryBackoff
|
||||
value: # Optional
|
||||
- name: failover
|
||||
- name: failover
|
||||
value: # Optional
|
||||
- name: sentinelMasterName
|
||||
value: # Optional
|
||||
|
|
|
@ -33,6 +33,10 @@ spec:
|
|||
value: <SCHEMA> # Optional. defaults to "dbo"
|
||||
- name: indexedProperties
|
||||
value: <INDEXED-PROPERTIES> # Optional. List of IndexedProperties.
|
||||
- name: metadataTableName # Optional. Name of the table where to store metadata used by Dapr
|
||||
value: "dapr_metadata"
|
||||
- name: cleanupIntervalInSeconds # Optional. Cleanup interval in seconds, to remove expired rows
|
||||
value: 300
|
||||
|
||||
```
|
||||
|
||||
|
@ -58,6 +62,8 @@ If you wish to use SQL server as an [actor state store]({{< ref "state_api.md#co
| schema | N | The schema to use. Defaults to `"dbo"` | `"dapr"`,`"dbo"`
| indexedProperties | N | List of indexed properties, as a JSON string. | `'[{"column": "transactionid", "property": "id", "type": "int"}, {"column": "customerid", "property": "customer", "type": "nvarchar(100)"}]'`
| actorStateStore | N | Indicates that Dapr should configure this component for the actor state store ([more information]({{< ref "state_api.md#configuring-state-store-for-actors" >}})). | `"true"`
| metadataTableName | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. | `"dapr_metadata"`
| cleanupIntervalInSeconds | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
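
As a worked sketch of `indexedProperties`: each entry maps a property of the stored value to a dedicated, typed table column, which SQL Server can then index and query efficiently. The column and property names below are illustrative, and the connection string is elided:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.sqlserver
  version: v1
  metadata:
  - name: connectionString
    value: <CONNECTION-STRING>   # elided; see the full example above
  - name: indexedProperties
    # A JSON array passed as a single string value; here the "id" property
    # of stored values is surfaced as an int column named "transactionid"
    value: '[{"column": "transactionid", "property": "id", "type": "int"}]'
```
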
## Create Azure SQL instance
@ -80,6 +86,23 @@ When connecting with a dedicated user (not `sa`), these authorizations are requi
- `CREATE TABLE`
- `CREATE TYPE`

### TTLs and cleanups

This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
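
For example, a save-state request that should expire after two minutes might carry a body like the following (shown as YAML for consistency with the rest of this page; over the HTTP API this is typically a JSON array posted to the state endpoint). The key and value are illustrative:

```yaml
# Illustrative body of a Dapr save-state request
- key: order-1
  value: "processing"
  metadata:
    ttlInSeconds: "120"   # the record is considered expired two minutes after the write
```
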
Because SQL Server doesn't have built-in support for TTLs, Dapr implements this by adding a column in the state table indicating when the data should be considered "expired". "Expired" records are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.

You can set the interval for the deletion of expired records with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).

- Longer intervals mean less frequent scans for expired rows, but expired records are retained for longer, potentially using more storage space. If you plan to store many records in your state table with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value - for example, `300` (300 seconds, or 5 minutes).
- If you do not plan to use TTLs with Dapr and the SQL Server state store, you should consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database, as shown in the sketch below.
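
A minimal sketch of that metadata entry, assuming the component layout shown in the example earlier on this page:

```yaml
  # Slots into the component manifest's spec.metadata list shown above
  - name: cleanupIntervalInSeconds
    value: "-1"   # any value <= 0 disables the periodic cleanup
```
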
The state store does not have an index on the `ExpireDate` column, which means that each cleanup operation must perform a full table scan. If you intend to write to the table with a large number of records that use TTLs, you should consider creating an index on the `ExpireDate` column. An index makes queries faster, but uses more storage space and slightly slows down writes.

```sql
-- Speeds up the periodic cleanup scan on the default "state" table,
-- at the cost of extra storage and slightly slower writes
CREATE CLUSTERED INDEX expiredate_idx ON state(ExpireDate ASC)
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components

@ -8,7 +8,7 @@
    output: true
- component: AWS S3
  link: s3
  state: Alpha
  state: Stable
  version: v1
  since: "1.0"
  features:

@ -1,6 +1,6 @@
- component: GCP Pub/Sub
  link: setup-gcp-pubsub
  state: Alpha
  state: Stable
  version: v1
  since: "1.0"
  features:

@ -5,7 +5,7 @@
  since: "1.10"
  features:
    crud: true
    transactions: false
    transactions: true
    etag: true
    ttl: true
    query: false

@ -1,6 +1,6 @@
- component: GCP Firestore
  link: setup-firestore
  state: Alpha
  state: Stable
  version: v1
  since: "1.0"
  features:

@ -29,7 +29,7 @@
    crud: true
    transactions: true
    etag: true
    ttl: false
    ttl: true
    query: true
- component: Couchbase
  link: setup-couchbase

@ -1 +1 @@
{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.0{{ else if .Get "cli" }}1.10.0{{ else }}1.9.5{{ end -}}
{{- if .Get "short" }}1.10{{ else if .Get "long" }}1.10.5{{ else if .Get "cli" }}1.10.0{{ else }}1.10.5{{ end -}}

@ -1 +1 @@
Subproject commit 9dcae7b0e771d7328559bef1dd65df4c1a54b793
Subproject commit f42b690f4c67e6bb4209932f660c46a96d0b0457

@ -0,0 +1 @@
Subproject commit dbb1a9526875e8df6af1823e09dae11216221444