Merge branch 'v1.3' into release-v1.3

This commit is contained in:
Ori Zohar 2021-07-21 10:02:08 -07:00
commit 26b9b6c3e8
14 changed files with 577 additions and 47 deletions


@ -77,7 +77,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
### Actor reminders
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using Dapr actor state provider.
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr.
@ -111,6 +111,34 @@ The following request body configures a reminder with a `dueTime` 15 seconds and
}
```
An [ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) can also be used to specify `period`. The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 15 seconds.
```json
{
"dueTime":"0h0m0s0ms",
"period":"P0Y0M0W0DT0H0M15S"
}
```
The designators for zero are optional and the above `period` can be simplified to `PT15S`.
ISO 8601 specifies multiple recurrence formats but only the duration format is currently supported.
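For example, the request body above can be written equivalently with the simplified duration:
```json
{
  "dueTime":"0h0m0s0ms",
  "period":"PT15S"
}
```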
#### Reminders with repetitions
When configured with an ISO 8601 duration, the `period` field also allows you to specify the number of times a reminder can run. The following request body creates a reminder that executes 5 times with a period of 15 seconds.
```json
{
"dueTime":"0h0m0s0ms",
"period":"R5/PT15S"
}
```
The number of repetitions, i.e. the number of times the reminder runs, must be a positive number.
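For illustration, a body like the one above can be registered via the actors reminders endpoint; this is a sketch in which the actor type `MyActorType`, actor id `abc123`, reminder name `checkReminder`, and the default Dapr HTTP port `3500` are assumptions:
```bash
# Register a reminder that fires 5 times, 15 seconds apart
curl -X POST http://localhost:3500/v1.0/actors/MyActorType/abc123/reminders/checkReminder \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m0s0ms", "period": "R5/PT15S" }'
```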
**Example**
Watch this [video](https://www.youtube.com/watch?v=B_vkXqptpXY&t=1002s) for more information on using ISO 8601 durations for reminders.
<iframe width="560" height="315" src="https://www.youtube.com/embed/B_vkXqptpXY?start=1003" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
#### Retrieve actor reminder
You can retrieve the actor reminder by calling


@ -83,8 +83,7 @@ Dapr apps are also able to subscribe to raw events coming from existing pub/sub
<img src="/images/pubsub_subscribe_raw.png" alt="Diagram showing how to subscribe with Dapr when publisher does not use Dapr or CloudEvent" width=1000>
### Programmatically subscribe to raw events
### Programmatically subscribe to raw events
When subscribing programmatically, add the additional metadata entry for `rawPayload` so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.
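As an illustration (not from the original tabs), a minimal Go sketch of a programmatic subscription that sets the `rawPayload` metadata entry; the app port `6002` and the handler bodies are assumptions, while the pubsub name, topic, and route match the declarative example below:
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Dapr calls GET /dapr/subscribe on the app and expects a JSON list of subscriptions.
	// The rawPayload metadata entry lets the sidecar wrap raw (non-CloudEvent) messages
	// into a CloudEvent before delivering them, as described above.
	http.HandleFunc("/dapr/subscribe", func(w http.ResponseWriter, r *http.Request) {
		subscriptions := []map[string]interface{}{
			{
				"pubsubname": "pubsub",
				"topic":      "deathStarStatus",
				"route":      "/dsstatus",
				"metadata":   map[string]string{"rawPayload": "true"},
			},
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(subscriptions)
	})

	// Handler for events delivered on the configured route.
	http.HandleFunc("/dsstatus", func(w http.ResponseWriter, r *http.Request) {
		log.Println("raw event received")
		w.WriteHeader(http.StatusOK)
	})

	// Port 6002 is an arbitrary choice for this sketch; pass it to Dapr via --app-port.
	log.Fatal(http.ListenAndServe(":6002", nil))
}
```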
@ -148,10 +147,25 @@ $app->start();
{{< /tabs >}}
## Declaratively subscribe to raw events
Subscription Custom Resources Definitions (CRDs) do not currently contain metadata attributes ([issue #3225](https://github.com/dapr/dapr/issues/3225)). At this time subscribing to raw events can only be done through programmatic subscriptions.
Similarly, you can subscribe to raw events declaratively by adding the `rawPayload` metadata entry to your Subscription Custom Resource Definition (CRD):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: myevent-subscription
spec:
  topic: deathStarStatus
  route: /dsstatus
  pubsubname: pubsub
  metadata:
    rawPayload: "true"
scopes:
- app1
- app2
```
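On Kubernetes, the subscription can then be applied like any other resource (a sketch, assuming the YAML above is saved as `subscription.yaml`); in self-hosted mode, place the file in your components directory instead:
```bash
kubectl apply -f subscription.yaml
```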
## Related links


@ -1,7 +1,7 @@
---
type: docs
title: "How-To: Invoke services using HTTP"
linkTitle: "How-To: Invoke services"
linkTitle: "How-To: Invoke with HTTP"
description: "Call between services using service invocation"
weight: 2000
---


@ -0,0 +1,307 @@
---
type: docs
title: "How-To: Invoke services using gRPC"
linkTitle: "How-To: Invoke with gRPC"
description: "Call between services using service invocation"
weight: 3000
---
This article describes how to use Dapr to connect services using gRPC.
By using Dapr's gRPC proxying capability, you can use your existing proto-based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr Service Invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
1. Mutual authentication
2. Tracing
3. Metrics
4. Access lists
5. Network level resiliency
6. API token based authentication
## Step 1: Run a gRPC server
The following example is taken from the [hello world grpc-go example](https://github.com/grpc/grpc-go/tree/master/examples/helloworld).
Note this example is in Go, but applies to all programming languages supported by gRPC.
```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
)

const (
	port = ":50051"
)

// server is used to implement helloworld.GreeterServer.
type server struct {
	pb.UnimplementedGreeterServer
}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	log.Printf("Received: %v", in.GetName())
	return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}

func main() {
	lis, err := net.Listen("tcp", port)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &server{})
	log.Printf("server listening at %v", lis.Addr())
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```
This Go app implements the Greeter proto service and exposes a `SayHello` method.
### Run the gRPC server using the Dapr CLI
Since gRPC proxying is currently a preview feature, you need to opt in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: serverconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
  features:
    - name: proxy.grpc
      enabled: true
```
Run the sidecar and the Go server:
```bash
dapr run --app-id server --app-protocol grpc --app-port 50051 --config config.yaml -- go run main.go
```
Using the Dapr CLI, we're assigning a unique id to the app, `server`, using the `--app-id` flag.
## Step 2: Invoke the service
The following example shows you how to discover the Greeter service using Dapr from a gRPC client.
Notice that instead of invoking the target service directly at port `50051`, the client is invoking its local Dapr sidecar over port `50007`, which then provides all the capabilities of service invocation including service discovery, tracing, mTLS and retries.
```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	pb "google.golang.org/grpc/examples/helloworld/helloworld"
	"google.golang.org/grpc/metadata"
)

const (
	address = "localhost:50007"
)

func main() {
	// Set up a connection to the server.
	conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()
	c := pb.NewGreeterClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second*2)
	defer cancel()

	ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
	r, err := c.SayHello(ctx, &pb.HelloRequest{Name: "Darth Tyrannus"})
	if err != nil {
		log.Fatalf("could not greet: %v", err)
	}
	log.Printf("Greeting: %s", r.GetMessage())
}
```
The following line tells Dapr to discover and invoke an app named `server`:
```go
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
```
All languages supported by gRPC allow for adding metadata. Here are a few examples:
{{< tabs Java Dotnet Python JavaScript Ruby "C++">}}
{{% codetab %}}
```java
Metadata headers = new Metadata();
Metadata.Key<String> appIdKey = Metadata.Key.of("dapr-app-id", Metadata.ASCII_STRING_MARSHALLER);
headers.put(appIdKey, "server");

GreeterService.ServiceBlockingStub stub = GreeterService.newBlockingStub(channel);
stub = MetadataUtils.attachHeaders(stub, headers);
stub.sayHello(HelloRequest.newBuilder().setName("Darth Malak").build());
```
{{% /codetab %}}
{{% codetab %}}
```csharp
var metadata = new Metadata
{
    { "dapr-app-id", "server" }
};
var call = client.SayHello(new HelloRequest { Name = "Darth Nihilus" }, metadata);
```
{{% /codetab %}}
{{% codetab %}}
```python
# HelloRequest is the generated protobuf message from the helloworld example
metadata = (('dapr-app-id', 'server'),)
response = stub.SayHello(HelloRequest(name='Darth Revan'), metadata=metadata)
```
{{% /codetab %}}
{{% codetab %}}
```javascript
const metadata = new grpc.Metadata();
metadata.add('dapr-app-id', 'server');
client.sayHello({ name: "Darth Malgus" }, metadata, (err, response) => {
  if (err) throw err;
  console.log(response.message);
});
```
{{% /codetab %}}
{{% codetab %}}
```ruby
metadata = { 'dapr-app-id' => 'server' }
response = service.say_hello(HelloRequest.new(name: 'Darth Bane'), metadata: metadata)
```
{{% /codetab %}}
{{% codetab %}}
```c++
grpc::ClientContext context;
context.AddMetadata("dapr-app-id", "server");
```
{{% /codetab %}}
{{< /tabs >}}
### Run the client using the Dapr CLI
Since gRPC proxying is currently a preview feature, you need to opt in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: serverconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
  features:
    - name: proxy.grpc
      enabled: true
```
```bash
dapr run --app-id client --dapr-grpc-port 50007 --config config.yaml -- go run main.go
```
### View telemetry
If you're running Dapr locally with Zipkin installed, open the browser at `http://localhost:9411` and view the traces between the client and server.
## Deploying to Kubernetes
### Step 1: Apply the following configuration YAML using `kubectl`
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: serverconfig
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: http://localhost:9411/api/v2/spans
  features:
    - name: proxy.grpc
      enabled: true
```
```bash
kubectl apply -f config.yaml
```
### Step 2: Set the following Dapr annotations on your pod
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-app
  namespace: default
  labels:
    app: grpc-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-app
  template:
    metadata:
      labels:
        app: grpc-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "server"
        dapr.io/app-protocol: "grpc"
        dapr.io/app-port: "50051"
        dapr.io/config: "serverconfig"
...
```
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `dapr.io/app-ssl: "true"` annotation (full list [here]({{< ref kubernetes-annotations.md >}}))*
The `dapr.io/app-protocol: "grpc"` annotation tells Dapr to invoke the app using gRPC.
The `dapr.io/config: "serverconfig"` annotation tells Dapr to use the configuration applied above that enables gRPC proxying.
### Namespaces
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID: `myApp.production`
For example, invoking the gRPC server on a different namespace:
```go
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server.production")
```
See the [Cross namespace API spec]({{< ref "service_invocation_api.md#cross-namespace-invocation" >}}) for more information on namespaces.
## Step 3: View traces and logs
The example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr outputs metrics, tracing and logging information allowing you to visualize a call graph between services, log errors and optionally log the payload body.
For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article.
## Related Links
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
* [gRPC proxying community call video](https://youtu.be/B_vkXqptpXY?t=70)


@ -103,6 +103,10 @@ By default, all calls between applications are traced and metrics are gathered t
The API for service invocation can be found in the [service invocation API reference]({{< ref service_invocation_api.md >}}) which describes how to invoke a method on another service.
### gRPC proxying
Dapr allows users to keep their own proto services and work natively with gRPC. This means that you can use service invocation to call your existing gRPC apps without having to include any Dapr SDKs or custom gRPC services. For more information, see the [how-to tutorial for Dapr and gRPC]({{< ref howto-invoke-services-grpc.md >}}).
## Example
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a Python app invokes a Node.js app. In such a scenario, the Python app would be "Service A", and the Node.js app would be "Service B".
@ -124,6 +128,7 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
- Follow these guides on:
- [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
- [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
- [How-to: Invoke services using gRPC]({{< ref howto-invoke-services-grpc.md >}})
- Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
- Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
- Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers


@ -0,0 +1,101 @@
---
type: docs
title: "State Time-to-Live (TTL)"
linkTitle: "State TTL"
weight: 500
description: "Manage state with time-to-live."
---
## Introduction
Dapr enables per-request time-to-live (TTL) when setting state. This means that applications can set a time-to-live per state stored, and these states cannot be retrieved after expiration.
Only a subset of Dapr [state store components]({{< ref supported-state-stores >}}) are compatible with state TTL. For supported state stores, simply set the `ttlInSeconds` metadata when saving state. Other state stores will ignore this value.
Some state stores can specify a default expiration on a per-table/container basis. Please refer to the official documentation of these state stores to utilize this feature if desired. Dapr supports per-state TTLs for supported state stores.
## Native state TTL support
When state time-to-live has native support in the state store component, Dapr simply forwards the time-to-live configuration without adding any extra logic, keeping predictable behavior. This is helpful when the expired state is handled differently by the component.
When a TTL is not specified the default behavior of the state store is retained.
## Persisting state (ignoring an existing TTL)
To explicitly persist a state (ignoring any TTLs set for the key), specify a `ttlInSeconds` value of `-1`.
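For example, the following request stores `key1` with no expiration, overriding any TTL previously set for that key (a sketch assuming a state store component named `statestore` and the default Dapr HTTP port `3500`):
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '[{ "key": "key1", "value": "value1", "metadata": { "ttlInSeconds": "-1" } }]' \
  http://localhost:3500/v1.0/state/statestore
```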
## Supported components
Please refer to the TTL column in the tables at [state store components]({{< ref supported-state-stores >}}).
## Example
State TTL can be set in the metadata as part of the state store set request:
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
{{% codetab %}}
```bash
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1", "metadata": { "ttlInSeconds": "120" } }]' http://localhost:3500/v1.0/state/statestore
```
{{% /codetab %}}
{{% codetab %}}
```powershell
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{"key": "key1", "value": "value1", "metadata": {"ttlInSeconds": "120"}}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
```
{{% /codetab %}}
{{% codetab %}}
```python
from dapr.clients import DaprClient

with DaprClient() as d:
    d.save_state(
        store_name="statestore",
        key="myFirstKey",
        value="myFirstValue",
        metadata=(
            ('ttlInSeconds', '120'),
        )
    )
    print("State has been stored")
```
{{% /codetab %}}
{{% codetab %}}
Save the following in `state-example.php`:
```php
<?php

require_once __DIR__.'/vendor/autoload.php';

$app = \Dapr\App::create();
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
    $stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
        key: 'myFirstKey',
        value: 'myFirstValue',
        metadata: ['ttlInSeconds' => '120']
    ));
    $logger->alert('State has been stored');
});
```
{{% /codetab %}}
{{< /tabs >}}
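Once the 120-second TTL above has elapsed, the key behaves as if it was never stored. As a quick check (assuming the same `statestore` component and default Dapr HTTP port), retrieving the key returns an empty response:
```bash
curl http://localhost:3500/v1.0/state/statestore/key1
```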
See [this guide]({{< ref state_api.md >}}) for a reference on the state API.
## Related links
- Learn [how to use key value pairs to persist a state]({{< ref howto-get-save-state.md >}})
- List of [state store components]({{< ref supported-state-stores >}})
- Read the [API reference]({{< ref state_api.md >}})


@ -15,7 +15,7 @@ You can find a list of auto-generated clients [here](https://github.com/dapr/doc
The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC.
In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implements the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto)
In addition to calling Dapr via gRPC, Dapr supports service-to-service calls with gRPC by acting as a proxy. See more information [here]({{< ref howto-invoke-services-grpc.md >}}).
## Configuring Dapr to communicate with an app via gRPC


@ -12,40 +12,62 @@ Begin by downloading and installing the Dapr CLI:
{{< tabs Linux Windows MacOS Binaries>}}
{{% codetab %}}
### Install from Terminal
This command installs the latest Linux Dapr CLI to `/usr/local/bin`:
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
This Command Prompt command installs the latest windows Dapr cli to `C:\dapr` and adds this directory to User PATH environment variable.
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file, you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```bash
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
### Install from Command Prompt
This Command Prompt command installs the latest Windows Dapr CLI to `C:\dapr` and adds this directory to the User PATH environment variable.
```powershell
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
```
### Install without administrative rights
If you do not have admin rights, you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```powershell
$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList "", "$HOME/dapr"
```
{{% /codetab %}}
{{% codetab %}}
### Install from Terminal
This command installs the latest darwin Dapr CLI to `/usr/local/bin`:
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
```
Or you can install via [Homebrew](https://brew.sh):
### Install from Homebrew
You can install via [Homebrew](https://brew.sh):
```bash
brew install dapr/tap/dapr-cli
```
{{% alert title="Note for M1 Macs" color="primary" %}}
#### Note for M1 Macs
For M1 Macs, Homebrew is not supported. You will need to use the Dapr install script and have the Rosetta amd64 compatibility layer installed. If you do not have it installed already, you can run the following:
```bash
softwareupdate --install-rosetta
```
{{% /alert %}}
### Install without `sudo`
If you do not have access to the `sudo` command or your username is not in the `sudoers` file, you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
```bash
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
```
{{% /codetab %}}
{{% codetab %}}
@ -54,7 +76,7 @@ Each release of Dapr CLI includes various OSes and architectures. These binary v
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
3. Move it to your desired location.
- For Linux/MacOS - `/usr/local/bin`
- For Linux/MacOS, `/usr/local/bin` is recommended.
- For Windows, create a directory and add it to your System PATH. For example, create a directory called `C:\dapr` and add it to your User PATH by editing your system environment variables.
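
You can then verify the CLI is available on your PATH by checking its version:
```bash
# Prints the installed Dapr CLI version
dapr --version
```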
{{% /codetab %}}
{{< /tabs >}}


@ -61,7 +61,9 @@ The CPU and memory limits above account for the fact that Dapr is intended to a
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows for the Dapr control plane to survive node failures and other outages.
HA mode can be enabled with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
For a new Dapr deployment, the HA mode can be set with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
For an existing Dapr deployment, enabling the HA mode requires additional steps. Please refer to [this paragraph]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
## Deploying Dapr with Helm
@ -139,6 +141,23 @@ APP ID APP PORT AGE CREATED
nodeapp 3000 16h 2020-07-29 17:16.22
```
### Enabling high-availability in an existing Dapr deployment
Enabling HA mode for an existing Dapr deployment requires two steps.
First, delete the existing placement stateful set:
```bash
kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
```
Second, issue the upgrade command:
```bash
helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
```
The placement stateful set must be deleted because, in HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows limited fields in stateful sets to be patched, which would cause the upgrade of the placement service to fail.
Deletion of the existing placement stateful set is safe. The agents will reconnect and re-register with the newly created placement service, which will persist its table in Raft.
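Afterwards, you can verify that the control plane is running in HA mode; as noted above, HA creates 3 replicas of each control plane pod in the `dapr-system` namespace:
```bash
# Each control plane service should report 3 replicas
kubectl get pods -n dapr-system
```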
## Recommended security configuration
When properly configured, Dapr ensures secure communication. It can also make your application more secure with a number of built-in features.


@ -81,6 +81,11 @@ From version 1.0.0 onwards, upgrading Dapr using Helm is no longer a disruptive
4. All done!
#### Upgrading existing Dapr to enable high availability mode
Enabling HA mode in an existing Dapr deployment requires additional steps. Please refer to [this paragraph]({{< ref "kubernetes-production.md#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
## Next steps
- [Dapr on Kubernetes]({{< ref kubernetes-overview.md >}})


@ -21,7 +21,7 @@ To enable profiling in Standalone mode, pass the `--enable-profiling` and the `-
Note that `profile-port` is not required, and if not provided Dapr will pick an available port.
```bash
dapr run --enable-profiling true --profile-port 7777 python myapp.py
dapr run --enable-profiling --profile-port 7777 python myapp.py
```
### Kubernetes


@ -178,7 +178,16 @@ Creates a persistent reminder for an actor.
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
```
Body:
#### Request Body
A JSON object with the following fields:
| Field | Description |
|-------|--------------|
| dueTime | Specifies the time after which the reminder is invoked. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format. |
| period | Specifies the period between different invocations. Its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format or ISO 8601 duration format with optional recurrence. |
The `period` field supports both the `time.Duration` format and the ISO 8601 format (with some limitations). Only the ISO 8601 duration format `Rn/PnYnMnWnDTnHnMnS` is supported for `period`. Here `Rn/` specifies that the reminder will be invoked `n` times; `n` should be a positive integer greater than zero. If certain values are zero, the `period` can be shortened; for example, 10 seconds can be specified in ISO 8601 duration as `PT10S`. If `Rn/` is not specified, the reminder will run an infinite number of times until deleted.
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
```json


@ -37,6 +37,12 @@ spec:
value: "0"
- name: concurrencyMode
value: parallel
- name: backOffPolicy
value: "exponential"
- name: backOffInitialInterval
value: "100"
- name: backOffMaxRetries
value: "16"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -55,6 +61,20 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| prefetchCount | N | Number of messages to [prefetch](https://www.rabbitmq.com/consumer-prefetch.html). Consider changing this to a non-zero value for production environments. Defaults to `"0"`, which means that all available messages will be pre-fetched. | `"2"`
| reconnectWait | N | How long to wait (in seconds) before reconnecting if a connection failure occurs | `"0"`
| concurrencyMode | N | `parallel` is the default, and allows processing multiple messages in parallel (limited by the `app-max-concurrency` annotation, if configured). Set to `single` to disable parallel processing. In most situations there's no reason to change this. | `parallel`, `single`
| backOffPolicy | N | Retry policy. `"constant"` is a backoff policy that always returns the same backoff delay. `"exponential"` is a backoff policy that increases the backoff period for each retry attempt using a randomization function that grows exponentially. Defaults to `"constant"`. | `constant`, `exponential` |
| backOffDuration | N | The fixed backoff interval between retries. Only takes effect when the policy is `constant`. Two formats are valid: a duration with a unit suffix, or a plain number interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"5s"`. | `"5s"`, `"5000"` |
| backOffInitialInterval | N | The initial backoff interval on retry. Only takes effect when the policy is `exponential`. Two formats are valid: a duration with a unit suffix, or a plain number interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"500"`. | `"50"` |
| backOffMaxInterval | N | The maximum backoff interval between retries. Only takes effect when the policy is `exponential`. Two formats are valid: a duration with a unit suffix, or a plain number interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"60s"`. | `"60000"` |
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means the component will not retry processing the message. `"-1"` will retry indefinitely until the message is processed or the application is shut down. Any positive number is treated as the maximum retry count. | `"3"` |
| backOffRandomizationFactor | N | Randomization factor, between 0 and 1 (including 0 but not 1). Randomized interval = RetryInterval * (1 ± backOffRandomizationFactor). Defaults to `"0.5"`. | `"0.5"` |
| backOffMultiplier | N | Backoff multiplier for the policy. Increases the interval by multiplying it by the multiplier. Defaults to `"1.5"`. | `"1.5"` |
| backOffMaxElapsedTime | N | The maximum elapsed time after which the exponential backoff stops retrying. Two formats are valid: a duration with a unit suffix, or a plain number interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"15m"`. | `"15m"` |
### Backoff policy introduction
A backoff retry strategy instructs the Dapr sidecar how to resend a message. By default, the retry strategy is turned off, which means the sidecar delivers a message to the service once. When the service returns a result, the message is marked as consumed regardless of whether processing succeeded. This applies when `autoAck` and `requeueInFailure` are set to false (if `requeueInFailure` is set to true, the message gets a second chance).
In some cases, however, you may want Dapr to retry delivering the message with an (exponential or constant) backoff strategy until the message is processed normally or the number of retries is exhausted. This can be useful when your service breaks down abnormally while its sidecar keeps running. With a backoff policy, message delivery is retried during the service downtime instead of marking those messages as consumed.
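As a rough illustration of the exponential policy using the defaults listed above (initial interval 500 ms, multiplier 1.5, randomization factor 0.5), the interval before each retry grows as follows, with the randomization applied on top:
```
attempt 1: 500 ms            randomized to a value in [250 ms, 750 ms]
attempt 2: 500 * 1.5 = 750 ms    randomized to a value in [375 ms, 1125 ms]
attempt 3: 750 * 1.5 = 1125 ms   randomized to a value in [562.5 ms, 1687.5 ms]
... capped at backOffMaxInterval
```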
## Create a RabbitMQ server


@ -26,38 +26,38 @@ The following stores are supported, at various levels, by the Dapr state managem
### Generic
| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|----------------------------------------------------------------|------|---------------------|------|------|--------| -------|------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [RethinkDB]({{< ref setup-rethinkdb.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|----------------------------------------------------------------|------|---------------------|------|-----|------|--------| -------|------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | GA | v1 | 1.0 |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [RethinkDB]({{< ref setup-rethinkdb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
### Amazon Web Services (AWS)
| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|--------|-----|-----|-------|
| [AWS DynamoDB]({{< ref setup-dynamodb.md>}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|-----|--------|-----|-----|-------|
| [AWS DynamoDB]({{< ref setup-dynamodb.md>}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
### Google Cloud Platform (GCP)
| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|--------|-----|-----|-------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|-----|--------|-----|-----|-------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
### Microsoft Azure
| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|--------|-----|-----|-------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | ❌ | GA | v1 | 1.0 |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------------------------------------------------------------------|------|---------------------|------|-----|--------|-----|-----|-------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | GA | v1 | 1.0 |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |