mirror of https://github.com/dapr/docs.git
Merge branch 'v1.3' into jviviano
Commit 31c96146ac
|
@ -1,13 +1,13 @@
|
|||
name: Azure Static Web App v1.2
|
||||
name: Azure Static Web App v1.3
|
||||
|
||||
on:
|
||||
push:
|
||||
branches:
|
||||
- v1.2
|
||||
- v1.3
|
||||
pull_request:
|
||||
types: [opened, synchronize, reopened, closed]
|
||||
branches:
|
||||
- v1.2
|
||||
- v1.3
|
||||
|
||||
jobs:
|
||||
build_and_deploy_job:
|
||||
|
@ -27,7 +27,7 @@ jobs:
|
|||
HUGO_ENV: production
|
||||
HUGO_VERSION: "0.74.3"
|
||||
with:
|
||||
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_WONDERFUL_ISLAND_07C05FD1E }}
|
||||
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_3 }}
|
||||
repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
|
||||
skip_deploy_on_missing_secrets: true
|
||||
action: "upload"
|
||||
|
@ -48,6 +48,6 @@ jobs:
|
|||
id: closepullrequest
|
||||
uses: Azure/static-web-apps-deploy@v0.0.1-preview
|
||||
with:
|
||||
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_WONDERFUL_ISLAND_07C05FD1E }}
|
||||
azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_3 }}
|
||||
skip_deploy_on_missing_secrets: true
|
||||
action: "close"
|
|
@ -1,5 +1,5 @@
|
|||
# Site Configuration
|
||||
baseURL = "https://docs.dapr.io/"
|
||||
baseURL = "https://v1-3.docs.dapr.io/"
|
||||
title = "Dapr Docs"
|
||||
theme = "docsy"
|
||||
disableFastRender = true
|
||||
|
@ -141,17 +141,20 @@ offlineSearch = false
|
|||
github_repo = "https://github.com/dapr/docs"
|
||||
github_project_repo = "https://github.com/dapr/dapr"
|
||||
github_subdir = "daprdocs"
|
||||
github_branch = "v1.2"
|
||||
github_branch = "v1.3"
|
||||
|
||||
# Versioning
|
||||
version_menu = "v1.2 (latest)"
|
||||
version = "v1.2"
|
||||
version_menu = "v1.3 (preview)"
|
||||
version = "v1.3"
|
||||
archived_version = false
|
||||
url_latest_version = "https://docs.dapr.io"
|
||||
|
||||
[[params.versions]]
|
||||
version = "v1.2 (latest)"
|
||||
version = "v1.3 (preview)"
|
||||
url = "#"
|
||||
[[params.versions]]
|
||||
version = "v1.2 (latest)"
|
||||
url = "https://docs.dapr.io"
|
||||
[[params.versions]]
|
||||
version = "v1.1"
|
||||
url = "https://v1-1.docs.dapr.io"
|
||||
|
|
|
@ -77,7 +77,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
|
|||
|
||||
### Actor reminders
|
||||
|
||||
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using Dapr actor state provider.
|
||||
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actors runtime persists the information about the actors' reminders using the Dapr actor state provider.
|
||||
|
||||
You can create a persistent reminder for an actor by calling the Http/gRPC request to Dapr.
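For example, assuming the default Dapr HTTP port of 3500 and illustrative actor type, actor ID, and reminder names, registering a reminder might look like the following sketch:

```bash
curl -X POST http://localhost:3500/v1.0/actors/myactortype/myactorid/reminders/myreminder \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "0h0m5s0ms", "period": "0h0m15s0ms" }'
```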
|
||||
|
||||
|
@ -111,6 +111,34 @@ The following request body configures a reminder with a `dueTime` 15 seconds and
|
|||
}
|
||||
```
|
||||
|
||||
[ISO 8601 duration](https://en.wikipedia.org/wiki/ISO_8601#Durations) can also be used to specify the `period`. The following request body configures a reminder with a `dueTime` of 0 seconds and a `period` of 15 seconds.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m0s0ms",
|
||||
"period":"P0Y0M0W0DT0H0M15S"
|
||||
}
|
||||
```
|
||||
The designators for zero are optional and the above `period` can be simplified to `PT15S`.
|
||||
ISO 8601 specifies multiple recurrence formats but only the duration format is currently supported.
|
||||
|
||||
#### Reminders with repetitions
|
||||
|
||||
When configured with ISO 8601 durations, the `period` column also allows you to specify the number of times a reminder can run. The following request body creates a reminder that executes 5 times with a period of 15 seconds.
|
||||
```json
|
||||
{
|
||||
"dueTime":"0h0m0s0ms",
|
||||
"period":"R5/PT15S"
|
||||
}
|
||||
```
|
||||
|
||||
The number of repetitions, i.e. the number of times the reminder is run, must be a positive number.
|
||||
|
||||
**Example**
|
||||
|
||||
Watch this [video](https://www.youtube.com/watch?v=B_vkXqptpXY&t=1002s) for more information on using ISO 8601 for reminders.
|
||||
|
||||
<iframe width="560" height="315" src="https://www.youtube.com/embed/B_vkXqptpXY?start=1003" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
|
||||
|
||||
#### Retrieve actor reminder
|
||||
|
||||
You can retrieve the actor reminder by calling
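As a sketch (again assuming the default Dapr port and illustrative actor values), the reminder can be read back with an HTTP `GET`:

```bash
curl http://localhost:3500/v1.0/actors/myactortype/myactorid/reminders/myreminder
```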
|
||||
|
@ -139,6 +167,8 @@ You can configure the Dapr Actors runtime configuration to modify the default ru
|
|||
- `drainOngoingCallTimeout` - The duration when in the process of draining rebalanced actors. This specifies the timeout for the current active actor method to finish. If there is no current actor method call, this is ignored. **Default: 60 seconds**
|
||||
- `drainRebalancedActors` - If true, Dapr will wait for `drainOngoingCallTimeout` duration to allow a current actor call to complete before trying to deactivate an actor. **Default: true**
|
||||
- `reentrancy` (ActorReentrancyConfig) - Configure the reentrancy behavior for an actor. If not provided, reentrancy is disabled. **Default: disabled**
- `remindersStoragePartitions` - Configure the number of partitions for actor's reminders. If not provided, all reminders are saved as a single record in the actor's state store. **Default: 0**
|
||||
|
||||
{{< tabs Java Dotnet Python >}}
|
||||
|
||||
|
@ -152,6 +182,7 @@ ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(3
|
|||
ActorRuntime.getInstance().getConfig().setDrainOngoingCallTimeout(Duration.ofSeconds(60));
|
||||
ActorRuntime.getInstance().getConfig().setDrainBalancedActors(true);
|
||||
ActorRuntime.getInstance().getConfig().setActorReentrancyConfig(false, null);
|
||||
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
|
||||
```
|
||||
|
||||
See [this example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
|
||||
|
@ -173,6 +204,7 @@ public void ConfigureServices(IServiceCollection services)
|
|||
options.ActorScanInterval = TimeSpan.FromSeconds(30);
|
||||
options.DrainOngoingCallTimeout = TimeSpan.FromSeconds(60);
|
||||
options.DrainRebalancedActors = true;
|
||||
options.RemindersStoragePartitions = 7;
|
||||
// reentrancy not implemented in the .NET SDK at this time
|
||||
});
|
||||
|
||||
|
@ -194,7 +226,8 @@ ActorRuntime.set_actor_config(
|
|||
actor_scan_interval=timedelta(seconds=30),
|
||||
drain_ongoing_call_timeout=timedelta(minutes=1),
|
||||
drain_rebalanced_actors=True,
|
||||
reentrancy=ActorReentrancyConfig(enabled=False)
|
||||
reentrancy=ActorReentrancyConfig(enabled=False),
|
||||
remindersStoragePartitions=7
|
||||
)
|
||||
)
|
||||
```
|
||||
|
@ -203,3 +236,153 @@ ActorRuntime.set_actor_config(
|
|||
{{< /tabs >}}
|
||||
|
||||
Refer to the documentation and examples of the [Dapr SDKs]({{< ref "developing-applications/sdks/#sdk-languages" >}}) for more details.
|
||||
|
||||
## Partitioning reminders
|
||||
|
||||
{{% alert title="Preview feature" color="warning" %}}
|
||||
Actor reminders partitioning is currently in [preview]({{< ref preview-features.md >}}). Use this feature if you are running into issues due to a high number of reminders registered.
|
||||
{{% /alert %}}
|
||||
|
||||
Actor reminders are persisted and continue to be triggered after sidecar restarts. Prior to Dapr runtime version 1.3, reminders were persisted on a single record in the actor state store:
|
||||
|
||||
| Key | Value |
|
||||
| ----------- | ----------- |
|
||||
| `actors\|\|<actor type>` | `[ <reminder 1>, <reminder 2>, ... , <reminder n> ]` |
|
||||
|
||||
Applications that register many reminders can experience the following issues:
|
||||
|
||||
* Low throughput on reminders registration and deregistration
|
||||
* Limit on total number of reminders registered based on the single record size limit on the state store
|
||||
|
||||
Since version 1.3, applications can enable partitioning of actor reminders in the state store. Data is then distributed across multiple keys in the state store. First, there is a metadata record in `actors\|\|<actor type>\|\|metadata` that is used to store persisted configuration for a given actor type. Then, there are multiple records that store subsets of the reminders for the same actor type.
|
||||
|
||||
| Key | Value |
|
||||
| ----------- | ----------- |
|
||||
| `actors\|\|<actor type>\|\|metadata` | `{ "id": <actor metadata identifier>, "actorRemindersMetadata": { "partitionCount": <number of partitions for reminders> } }` |
|
||||
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|1` | `[ <reminder 1-1>, <reminder 1-2>, ... , <reminder 1-n> ]` |
|
||||
| `actors\|\|<actor type>\|\|<actor metadata identifier>\|\|reminders\|\|2` | `[ <reminder 2-1>, <reminder 2-2>, ... , <reminder 2-m> ]` |
|
||||
| ... | ... |
|
||||
|
||||
If the number of partitions is not enough, it can be changed and Dapr's sidecar will automatically redistribute the reminders set.
|
||||
|
||||
### Enabling actor reminders partitioning
|
||||
Actor reminders partitioning is currently in preview, so enabling it is a two-step process.
|
||||
|
||||
#### Preview feature configuration
|
||||
Before using reminders partitioning, actor type metadata must be enabled in Dapr. For more information on preview configurations, see [the full guide on opting into preview features in Dapr]({{< ref preview-features.md >}}). Below is an example of the configuration:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: myconfig
|
||||
spec:
|
||||
features:
|
||||
- name: Actor.TypeMetadata
|
||||
enabled: true
|
||||
```
|
||||
|
||||
#### Actor runtime configuration
|
||||
Once actor type metadata is enabled as an opt-in preview feature, the actor runtime must also provide the appropriate configuration to partition actor reminders. This is done via the actor's `GET /dapr/config` endpoint, similar to other actor configuration elements.
|
||||
|
||||
{{< tabs Java Dotnet Python Go >}}
|
||||
|
||||
{{% codetab %}}
|
||||
```java
|
||||
// import io.dapr.actors.runtime.ActorRuntime;
|
||||
// import java.time.Duration;
|
||||
|
||||
ActorRuntime.getInstance().getConfig().setActorIdleTimeout(Duration.ofMinutes(60));
|
||||
ActorRuntime.getInstance().getConfig().setActorScanInterval(Duration.ofSeconds(30));
|
||||
ActorRuntime.getInstance().getConfig().setRemindersStoragePartitions(7);
|
||||
```
|
||||
|
||||
See [this example](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/actors/DemoActorService.java)
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
// In Startup.cs
|
||||
public void ConfigureServices(IServiceCollection services)
|
||||
{
|
||||
// Register actor runtime with DI
|
||||
services.AddActors(options =>
|
||||
{
|
||||
// Register actor types and configure actor settings
|
||||
options.Actors.RegisterActor<MyActor>();
|
||||
|
||||
// Configure default settings
|
||||
options.ActorIdleTimeout = TimeSpan.FromMinutes(60);
|
||||
options.ActorScanInterval = TimeSpan.FromSeconds(30);
|
||||
options.RemindersStoragePartitions = 7;
|
||||
// reentrancy not implemented in the .NET SDK at this time
|
||||
});
|
||||
|
||||
// Register additional services for use with actors
|
||||
services.AddSingleton<BankService>();
|
||||
}
|
||||
```
|
||||
See the .NET SDK [documentation](https://github.com/dapr/dotnet-sdk/blob/master/daprdocs/content/en/dotnet-sdk-docs/dotnet-actors/dotnet-actors-usage.md#registering-actors).
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
from datetime import timedelta
|
||||
|
||||
ActorRuntime.set_actor_config(
|
||||
ActorRuntimeConfig(
|
||||
actor_idle_timeout=timedelta(hours=1),
|
||||
actor_scan_interval=timedelta(seconds=30),
|
||||
remindersStoragePartitions=7
|
||||
)
|
||||
)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```go
|
||||
type daprConfig struct {
|
||||
Entities []string `json:"entities,omitempty"`
|
||||
ActorIdleTimeout string `json:"actorIdleTimeout,omitempty"`
|
||||
ActorScanInterval string `json:"actorScanInterval,omitempty"`
|
||||
DrainOngoingCallTimeout string `json:"drainOngoingCallTimeout,omitempty"`
|
||||
DrainRebalancedActors bool `json:"drainRebalancedActors,omitempty"`
|
||||
RemindersStoragePartitions int `json:"remindersStoragePartitions,omitempty"`
|
||||
}
|
||||
|
||||
var daprConfigResponse = daprConfig{
|
||||
[]string{defaultActorType},
|
||||
actorIdleTimeout,
|
||||
actorScanInterval,
|
||||
drainOngoingCallTimeout,
|
||||
drainRebalancedActors,
|
||||
7,
|
||||
}
|
||||
|
||||
func configHandler(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
w.WriteHeader(http.StatusOK)
|
||||
json.NewEncoder(w).Encode(daprConfigResponse)
|
||||
}
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
The following is an example of a valid configuration for reminder partitioning:
|
||||
|
||||
```json
|
||||
{
|
||||
"entities": [ "MyActorType", "AnotherActorType" ],
|
||||
"remindersStoragePartitions": 7
|
||||
}
|
||||
```
|
||||
|
||||
#### Handling configuration changes
|
||||
For production scenarios, there are some points to be considered before enabling this feature:
|
||||
|
||||
* Enabling actor type metadata can only be reverted if the number of partitions remains zero, otherwise the reminders set is reverted to a previous state.
|
||||
* The number of partitions can only be increased, not decreased. This allows Dapr to automatically redistribute the data on a rolling restart where one or more partition configurations might be active.
|
||||
|
||||
#### Demo
|
||||
* [Actor reminder partitioning community call video](https://youtu.be/ZwFOEUYe1WA?t=1493)
|
||||
|
|
|
@ -83,8 +83,7 @@ Dapr apps are also able to subscribe to raw events coming from existing pub/sub
|
|||
|
||||
<img src="/images/pubsub_subscribe_raw.png" alt="Diagram showing how to subscribe with Dapr when publisher does not use Dapr or CloudEvent" width=1000>
|
||||
|
||||
|
||||
### Programmatically subscribe to raw events
|
||||
|
||||
When subscribing programmatically, add the additional metadata entry for `rawPayload` so the Dapr sidecar automatically wraps the payloads into a CloudEvent that is compatible with current Dapr SDKs.
|
||||
|
||||
|
@ -148,10 +147,25 @@ $app->start();
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Declaratively subscribe to raw events
|
||||
|
||||
Subscription Custom Resources Definitions (CRDs) do not currently contain metadata attributes ([issue #3225](https://github.com/dapr/dapr/issues/3225)). At this time subscribing to raw events can only be done through programmatic subscriptions.
|
||||
Similarly, you can subscribe to raw events declaratively by adding the `rawPayload` metadata entry to your Subscription Custom Resource Definition (CRD):
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Subscription
|
||||
metadata:
|
||||
name: myevent-subscription
|
||||
spec:
|
||||
topic: deathStarStatus
|
||||
route: /dsstatus
|
||||
pubsubname: pubsub
|
||||
metadata:
|
||||
rawPayload: "true"
|
||||
scopes:
|
||||
- app1
|
||||
- app2
|
||||
```
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
@ -9,7 +9,7 @@ type: docs
|
|||
|
||||
You can read [guidance on setting up secret store components]({{< ref setup-secret-store >}}) to configure a secret store for an application. Once configured, by default *any* secret defined within that store is accessible from the Dapr application.
|
||||
|
||||
To limit the secrets to which the Dapr application has access to, you can can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
|
||||
To limit the secrets to which the Dapr application has access to, you can define secret scopes by adding a secret scope policy to the application configuration with restrictive permissions. Follow [these instructions]({{< ref configuration-concept.md >}}) to define an application configuration.
|
||||
|
||||
The secret scoping policy applies to any [secret store]({{< ref supported-secret-stores.md >}}), whether that is a local secret store, a Kubernetes secret store or a public cloud secret store. For details on how to set up [secret stores]({{< ref setup-secret-store.md >}}), read [How To: Retrieve a secret]({{< ref howto-secrets.md >}}).
|
||||
|
||||
|
@ -18,7 +18,9 @@ Watch this [video](https://youtu.be/j99RN_nxExA?start=2272) for a demo on how to
|
|||
|
||||
## Scenario 1 : Deny access to all secrets for a secret store
|
||||
|
||||
This example uses Kubernetes. The native Kubernetes secret store is added to you Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below:
|
||||
In this example all secret access is denied to an application running on a Kubernetes cluster which has a configured [Kubernetes secret store]({{<ref kubernetes-secret-store>}}) named `mycustomsecretstore`. In the case of Kubernetes, aside from the user defined custom store, the default store named `kubernetes` is also addressed to ensure all secrets are denied access (See [here]({{<ref "kubernetes-secret-store.md#default-kubernetes-secret-store-component">}}) to learn more about the Kubernetes default secret store).
|
||||
|
||||
To add this configuration follow the steps below:
|
||||
|
||||
Define the following `appconfig.yaml` configuration and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`.
|
||||
|
||||
|
@ -32,6 +34,8 @@ spec:
|
|||
scopes:
|
||||
- storeName: kubernetes
|
||||
defaultAccess: deny
|
||||
- storeName: mycustomsecretstore
|
||||
defaultAccess: deny
|
||||
```
|
||||
|
||||
For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview.md >}}), and add the following annotation to the application pod.
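A sketch of that annotation (assuming the configuration defined above is named `appconfig`) is:

```yaml
dapr.io/config: appconfig
```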
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Invoke services using HTTP"
|
||||
linkTitle: "How-To: Invoke services"
|
||||
linkTitle: "How-To: Invoke with HTTP"
|
||||
description: "Call between services using service invocation"
|
||||
weight: 2000
|
||||
---
|
||||
|
|
|
@ -0,0 +1,307 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Invoke services using gRPC"
|
||||
linkTitle: "How-To: Invoke with gRPC"
|
||||
description: "Call between services using service invocation"
|
||||
weight: 3000
|
||||
---
|
||||
|
||||
This article describes how to use Dapr to connect services using gRPC.
|
||||
By using Dapr's gRPC proxying capability, you can use your existing proto based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr Service Invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:
|
||||
|
||||
1. Mutual authentication
|
||||
2. Tracing
|
||||
3. Metrics
|
||||
4. Access lists
|
||||
5. Network level resiliency
|
||||
6. API token based authentication
|
||||
|
||||
## Step 1: Run a gRPC server
|
||||
|
||||
The following example is taken from the [hello world grpc-go example](https://github.com/grpc/grpc-go/tree/master/examples/helloworld).
|
||||
|
||||
Note this example is in Go, but applies to all programming languages supported by gRPC.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"net"
|
||||
|
||||
"google.golang.org/grpc"
|
||||
pb "google.golang.org/grpc/examples/helloworld/helloworld"
|
||||
)
|
||||
|
||||
const (
|
||||
port = ":50051"
|
||||
)
|
||||
|
||||
// server is used to implement helloworld.GreeterServer.
|
||||
type server struct {
|
||||
pb.UnimplementedGreeterServer
|
||||
}
|
||||
|
||||
// SayHello implements helloworld.GreeterServer
|
||||
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
|
||||
log.Printf("Received: %v", in.GetName())
|
||||
return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
|
||||
}
|
||||
|
||||
func main() {
|
||||
lis, err := net.Listen("tcp", port)
|
||||
if err != nil {
|
||||
log.Fatalf("failed to listen: %v", err)
|
||||
}
|
||||
s := grpc.NewServer()
|
||||
pb.RegisterGreeterServer(s, &server{})
|
||||
log.Printf("server listening at %v", lis.Addr())
|
||||
if err := s.Serve(lis); err != nil {
|
||||
log.Fatalf("failed to serve: %v", err)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
This Go app implements the Greeter proto service and exposes a `SayHello` method.
|
||||
|
||||
### Run the gRPC server using the Dapr CLI
|
||||
|
||||
Since gRPC proxying is currently a preview feature, you need to opt-in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: serverconfig
|
||||
spec:
|
||||
tracing:
|
||||
samplingRate: "1"
|
||||
zipkin:
|
||||
endpointAddress: http://localhost:9411/api/v2/spans
|
||||
features:
|
||||
- name: proxy.grpc
|
||||
enabled: true
|
||||
```
|
||||
|
||||
Run the sidecar and the Go server:
|
||||
|
||||
```bash
|
||||
dapr run --app-id server --app-protocol grpc --app-port 50051 --config config.yaml -- go run main.go
|
||||
```
|
||||
|
||||
Using the Dapr CLI, we're assigning a unique id to the app, `server`, using the `--app-id` flag.
|
||||
|
||||
## Step 2: Invoke the service
|
||||
|
||||
The following example shows you how to discover the Greeter service using Dapr from a gRPC client.
|
||||
Notice that instead of invoking the target service directly at port `50051`, the client is invoking its local Dapr sidecar over port `50007` which then provides all the capabilities of service invocation including service discovery, tracing, mTLS and retries.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
"google.golang.org/grpc"
|
||||
pb "google.golang.org/grpc/examples/helloworld/helloworld"
|
||||
"google.golang.org/grpc/metadata"
|
||||
)
|
||||
|
||||
const (
|
||||
address = "localhost:50007"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Set up a connection to the server.
|
||||
conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
|
||||
if err != nil {
|
||||
log.Fatalf("did not connect: %v", err)
|
||||
}
|
||||
defer conn.Close()
|
||||
c := pb.NewGreeterClient(conn)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), time.Second*2)
|
||||
defer cancel()
|
||||
|
||||
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
|
||||
r, err := c.SayHello(ctx, &pb.HelloRequest{Name: "Darth Tyrannus"})
|
||||
if err != nil {
|
||||
log.Fatalf("could not greet: %v", err)
|
||||
}
|
||||
|
||||
log.Printf("Greeting: %s", r.GetMessage())
|
||||
}
|
||||
```
|
||||
|
||||
The following line tells Dapr to discover and invoke an app named `server`:
|
||||
|
||||
```go
|
||||
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
|
||||
```
|
||||
|
||||
All languages supported by gRPC allow for adding metadata. Here are a few examples:
|
||||
|
||||
{{< tabs Java Dotnet Python JavaScript Ruby "C++">}}
|
||||
|
||||
{{% codetab %}}
|
||||
```java
|
||||
Metadata headers = new Metadata();
Metadata.Key<String> appIdKey = Metadata.Key.of("dapr-app-id", Metadata.ASCII_STRING_MARSHALLER);
headers.put(appIdKey, "server");

GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
stub = MetadataUtils.attachHeaders(stub, headers);
stub.sayHello(HelloRequest.newBuilder().setName("Darth Malak").build());
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```csharp
|
||||
var metadata = new Metadata
|
||||
{
|
||||
{ "dapr-app-id", "server" }
|
||||
};
|
||||
|
||||
var call = client.SayHello(new HelloRequest { Name = "Darth Nihilus" }, metadata);
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```python
|
||||
metadata = (('dapr-app-id', 'server'),)
response = stub.SayHello(HelloRequest(name='Darth Revan'), metadata=metadata)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```javascript
|
||||
const metadata = new grpc.Metadata();
|
||||
metadata.add('dapr-app-id', 'server');
|
||||
|
||||
client.sayHello({ name: "Darth Malgus" }, metadata)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```ruby
|
||||
metadata = { 'dapr-app-id' => 'server' }
response = service.say_hello(HelloRequest.new(name: 'Darth Bane'), metadata: metadata)
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```c++
|
||||
grpc::ClientContext context;
|
||||
context.AddMetadata("dapr-app-id", "server");
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
### Run the client using the Dapr CLI
|
||||
|
||||
Since gRPC proxying is currently a preview feature, you need to opt-in using a configuration file. See https://docs.dapr.io/operations/configuration/preview-features/ for more information.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: serverconfig
|
||||
spec:
|
||||
tracing:
|
||||
samplingRate: "1"
|
||||
zipkin:
|
||||
endpointAddress: http://localhost:9411/api/v2/spans
|
||||
features:
|
||||
- name: proxy.grpc
|
||||
enabled: true
|
||||
```
|
||||
|
||||
```bash
|
||||
dapr run --app-id client --dapr-grpc-port 50007 --config config.yaml -- go run main.go
|
||||
```
|
||||
|
||||
### View telemetry
|
||||
|
||||
If you're running Dapr locally with Zipkin installed, open the browser at `http://localhost:9411` and view the traces between the client and server.
|
||||
|
||||
## Deploying to Kubernetes
|
||||
|
||||
### Step 1: Apply the following configuration YAML using `kubectl`
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: serverconfig
|
||||
spec:
|
||||
tracing:
|
||||
samplingRate: "1"
|
||||
zipkin:
|
||||
endpointAddress: http://localhost:9411/api/v2/spans
|
||||
features:
|
||||
- name: proxy.grpc
|
||||
enabled: true
|
||||
```
|
||||
|
||||
```bash
|
||||
kubectl apply -f config.yaml
|
||||
```
|
||||
|
||||
### Step 2: set the following Dapr annotations on your pod
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: grpc-app
|
||||
namespace: default
|
||||
labels:
|
||||
app: grpc-app
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: grpc-app
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: grpc-app
|
||||
annotations:
|
||||
dapr.io/enabled: "true"
|
||||
dapr.io/app-id: "server"
|
||||
dapr.io/app-protocol: "grpc"
|
||||
dapr.io/app-port: "50051"
|
||||
dapr.io/config: "serverconfig"
|
||||
...
|
||||
```
|
||||
*If your app uses an SSL connection, you can tell Dapr to invoke your app over an insecure SSL connection with the `app-ssl: "true"` annotation (full list [here]({{< ref arguments-annotations-overview.md >}}))*
|
||||
|
||||
The `dapr.io/app-protocol: "grpc"` annotation tells Dapr to invoke the app using gRPC.
|
||||
The `dapr.io/config: "serverconfig"` annotation tells Dapr to use the configuration applied above that enables gRPC proxying.
|
||||
|
||||
### Namespaces
|
||||
|
||||
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID: `myApp.production`
|
||||
|
||||
For example, invoking the gRPC server on a different namespace:
|
||||
|
||||
```go
|
||||
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server.production")
|
||||
```
|
||||
|
||||
See the [Cross namespace API spec]({{< ref "service_invocation_api.md#cross-namespace-invocation" >}}) for more information on namespaces.
|
||||
|
||||
## Step 3: View traces and logs
|
||||
|
||||
The example above showed you how to directly invoke a different service running locally or in Kubernetes. Dapr outputs metrics, tracing and logging information allowing you to visualize a call graph between services, log errors and optionally log the payload body.
|
||||
|
||||
For more information on tracing and logs see the [observability]({{< ref observability-concept.md >}}) article.
|
||||
|
||||
## Related Links
|
||||
|
||||
* [Service invocation overview]({{< ref service-invocation-overview.md >}})
|
||||
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
|
||||
* [gRPC proxying community call video](https://youtu.be/B_vkXqptpXY?t=70)
|
|
@ -103,6 +103,10 @@ By default, all calls between applications are traced and metrics are gathered t
|
|||
|
||||
The API for service invocation can be found in the [service invocation API reference]({{< ref service_invocation_api.md >}}) which describes how to invoke a method on another service.
|
||||
|
||||
### gRPC proxying
|
||||
|
||||
Dapr allows users to keep their own proto services and work natively with gRPC. This means that you can use service invocation to call your existing gRPC apps without having to include any Dapr SDKs or include custom gRPC services. For more information, see the [how-to tutorial for Dapr and gRPC]({{< ref howto-invoke-services-grpc.md >}}).
|
||||
|
||||
## Example
|
||||
|
||||
Following the above call sequence, suppose you have the applications as described in the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md), where a python app invokes a node.js app. In such a scenario, the python app would be "Service A" , and a Node.js app would be "Service B".
|
||||
|
@ -124,6 +128,7 @@ The diagram below shows sequence 1-7 again on a local machine showing the API ca
|
|||
- Follow these guides on:
|
||||
- [How-to: Invoke services using HTTP]({{< ref howto-invoke-discover-services.md >}})
|
||||
- [How-To: Configure Dapr to use gRPC]({{< ref grpc >}})
|
||||
- [How-to: Invoke services using gRPC]({{< ref howto-invoke-services-grpc.md >}})
|
||||
- Try out the [hello world quickstart](https://github.com/dapr/quickstarts/blob/master/hello-world/README.md) which shows how to use HTTP service invocation or try the samples in the [Dapr SDKs]({{< ref sdks >}})
|
||||
- Read the [service invocation API specification]({{< ref service_invocation_api.md >}})
|
||||
- Understand the [service invocation performance]({{< ref perf-service-invocation.md >}}) numbers
|
||||
|
|
|
@ -0,0 +1,101 @@
|
|||
---
|
||||
type: docs
|
||||
title: "State Time-to-Live (TTL)"
|
||||
linkTitle: "State TTL"
|
||||
weight: 500
|
||||
description: "Manage state with time-to-live."
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Dapr enables setting a time-to-live (TTL) per state set request. This means that applications can set a time-to-live per state stored, and these states cannot be retrieved after expiration.

Only a subset of Dapr [state store components]({{< ref supported-state-stores >}}) are compatible with state TTL. For supported state stores, simply set the `ttlInSeconds` metadata when saving state. Other state stores will ignore this value.
|
||||
|
||||
Some state stores can specify a default expiration on a per-table/container basis. Please refer to the official documentation of these state stores to utilize this feature if desired. Dapr supports per-state TTLs for supported state stores.
|
||||
|
||||
## Native state TTL support
|
||||
|
||||
When state time-to-live has native support in the state store component, Dapr simply forwards the time-to-live configuration without adding any extra logic, keeping predictable behavior. This is helpful when the expired state is handled differently by the component.
|
||||
When a TTL is not specified the default behavior of the state store is retained.
|
||||
|
||||
## Persisting state (ignoring an existing TTL)
|
||||
|
||||
To explicitly persist a state (ignoring any TTLs set for the key), specify a `ttlInSeconds` value of `-1`.
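As a sketch (the store name, key, and port below are the same illustrative values used in the example further down), such a request could look like:

```bash
curl -X POST -H "Content-Type: application/json" \
  -d '[{ "key": "key1", "value": "value1", "metadata": { "ttlInSeconds": "-1" } }]' \
  http://localhost:3500/v1.0/state/statestore
```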
|
||||
|
||||
## Supported components
|
||||
|
||||
Please refer to the TTL column in the tables at [state store components]({{< ref supported-state-stores >}}).
|
||||
|
||||
## Example
|
||||
|
||||
State TTL can be set in the metadata as part of the state store set request:
|
||||
|
||||
{{< tabs "HTTP API (Bash)" "HTTP API (PowerShell)" "Python SDK" "PHP SDK">}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```bash
|
||||
curl -X POST -H "Content-Type: application/json" -d '[{ "key": "key1", "value": "value1", "metadata": { "ttlInSeconds": "120" } }]' http://localhost:3500/v1.0/state/statestore
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```powershell
|
||||
Invoke-RestMethod -Method Post -ContentType 'application/json' -Body '[{"key": "key1", "value": "value1", "metadata": {"ttlInSeconds": "120"}}]' -Uri 'http://localhost:3500/v1.0/state/statestore'
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
```python
|
||||
from dapr.clients import DaprClient
|
||||
|
||||
with DaprClient() as d:
|
||||
d.save_state(
|
||||
store_name="statestore",
|
||||
key="myFirstKey",
|
||||
value="myFirstValue",
|
||||
metadata=(
|
||||
('ttlInSeconds', '120')
|
||||
)
|
||||
)
|
||||
print("State has been stored")
|
||||
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
Save the following in `state-example.php`:
|
||||
|
||||
```php
|
||||
<?php
|
||||
require_once __DIR__.'/vendor/autoload.php';
|
||||
|
||||
$app = \Dapr\App::create();
|
||||
$app->run(function(\Dapr\State\StateManager $stateManager, \Psr\Log\LoggerInterface $logger) {
|
||||
$stateManager->save_state(store_name: 'statestore', item: new \Dapr\State\StateItem(
|
||||
key: 'myFirstKey',
|
||||
value: 'myFirstValue',
|
||||
metadata: ['ttlInSeconds' => '120']
|
||||
));
|
||||
$logger->alert('State has been stored');
|
||||
});
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
See [this guide]({{< ref state_api.md >}}) for a reference on the state API.
|
||||
|
||||
## Related links
|
||||
|
||||
- Learn [how to use key value pairs to persist a state]({{< ref howto-get-save-state.md >}})
|
||||
- List of [state store components]({{< ref supported-state-stores >}})
|
||||
- Read the [API reference]({{< ref state_api.md >}})
|
|
@ -15,7 +15,7 @@ You can find a list of auto-generated clients [here](https://github.com/dapr/doc
|
|||
|
||||
The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC.
|
||||
|
||||
In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implements the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto)
|
||||
In addition to calling Dapr via gRPC, Dapr supports service to service calls with gRPC by acting as a proxy. See more information [here]({{< ref howto-invoke-services-grpc.md >}}).
|
||||
|
||||
## Configuring Dapr to communicate with an app via gRPC
|
||||
|
||||
|
|
|
@ -12,40 +12,62 @@ Begin by downloading and installing the Dapr CLI:
|
|||
{{< tabs Linux Windows MacOS Binaries>}}
|
||||
|
||||
{{% codetab %}}
|
||||
### Install from Terminal
|
||||
|
||||
This command installs the latest linux Dapr CLI to `/usr/local/bin`:
|
||||
```bash
|
||||
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
This Command Prompt command installs the latest windows Dapr cli to `C:\dapr` and adds this directory to User PATH environment variable.
|
||||
```powershell
|
||||
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
|
||||
### Install without `sudo`
|
||||
If you do not have access to the `sudo` command or your username is not in the `sudoers` file you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
|
||||
|
||||
```bash
|
||||
wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
### Install from Command Prompt
|
||||
This Command Prompt command installs the latest Windows Dapr CLI to `C:\dapr` and adds this directory to the User PATH environment variable.
|
||||
```powershell
|
||||
powershell -Command "iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1 | iex"
|
||||
```
|
||||
|
||||
### Install without administrative rights
|
||||
If you do not have admin rights you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
|
||||
|
||||
```powershell
|
||||
$script=iwr -useb https://raw.githubusercontent.com/dapr/cli/master/install/install.ps1; $block=[ScriptBlock]::Create($script); invoke-command -ScriptBlock $block -ArgumentList "", "$HOME/dapr"
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
### Install from Terminal
|
||||
This command installs the latest darwin Dapr CLI to `/usr/local/bin`:
|
||||
```bash
|
||||
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
|
||||
```
|
||||
|
||||
Or you can install via [Homebrew](https://brew.sh):
|
||||
### Install from Homebrew
|
||||
You can install via [Homebrew](https://brew.sh):
|
||||
```bash
|
||||
brew install dapr/tap/dapr-cli
|
||||
```
|
||||
|
||||
{{% alert title="Note for M1 Macs" color="primary" %}}
|
||||
#### Note for M1 Macs
|
||||
For M1 Macs, Homebrew is not supported. You will need to use the Dapr install script and have the Rosetta amd64 compatibility layer installed. If you do not have it installed already, you can run the following:
|
||||
|
||||
```bash
|
||||
softwareupdate --install-rosetta
|
||||
```
|
||||
|
||||
{{% /alert %}}
|
||||
|
||||
### Install without `sudo`
|
||||
If you do not have access to the `sudo` command or your username is not in the `sudoers` file you can install Dapr to an alternate directory via the `DAPR_INSTALL_DIR` environment variable.
|
||||
|
||||
```bash
|
||||
curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | DAPR_INSTALL_DIR="$HOME/dapr" /bin/bash
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
@ -54,7 +76,7 @@ Each release of Dapr CLI includes various OSes and architectures. These binary v
|
|||
1. Download the desired Dapr CLI from the latest [Dapr Release](https://github.com/dapr/cli/releases)
|
||||
2. Unpack it (e.g. dapr_linux_amd64.tar.gz, dapr_windows_amd64.zip)
|
||||
3. Move it to your desired location.
|
||||
- For Linux/MacOS - `/usr/local/bin`
|
||||
- For Linux/MacOS `/usr/local/bin` is recommended.
|
||||
- For Windows, create a directory and add this to your System PATH. For example create a directory called `C:\dapr` and add this directory to your User PATH, by editing your system environment variable.
|
||||
{{% /codetab %}}
|
||||
{{< /tabs >}}
|
||||
|
|
|
@ -7,20 +7,19 @@ description: "How to specify and enable preview features"
|
|||
---
|
||||
|
||||
## Overview
|
||||
Some features in Dapr are considered experimental when they are first released. These features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration.
|
||||
Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration.
|
||||
|
||||
Currently, preview features are enabled on a per application basis when running on Kubernetes. A global scope may be introduced in the future should there be a use case for it.
|
||||
Preview features are enabled on a per application basis by setting configuration when running an application instance.
|
||||
|
||||
### Current preview features
|
||||
Below is a list of existing preview features:
|
||||
- [Actor Reentrancy]({{<ref actor-reentrancy.md>}})
|
||||
### Preview features
|
||||
The current list of preview features can be found [here]({{<ref support-preview-features>}}).
|
||||
|
||||
## Configuration properties
|
||||
The `features` section under the `Configuration` spec contains the following properties:
|
||||
|
||||
| Property | Type | Description |
|
||||
|----------------|--------|-------------|
|
||||
|name|string|The name of the preview feature that will be enabled/disabled
|
||||
|name|string|The name of the preview feature that is enabled/disabled
|
||||
|enabled|bool|Boolean specifying if the feature is enabled or disabled
|
||||
|
||||
## Enabling a preview feature
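For instance, using the `Actor.Reentrancy` feature as an example, a configuration that opts into a preview feature looks like the following sketch (the configuration name is illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig
spec:
  features:
    - name: Actor.Reentrancy
      enabled: true
```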
|
||||
|
|
|
@ -32,17 +32,21 @@ Deploying and running a Dapr enabled application into your Kubernetes cluster is
|
|||
dapr.io/config: "tracing"
|
||||
```
|
||||
|
||||
## Pulling container images from private registries
|
||||
|
||||
Dapr works seamlessly with any user application container image, regardless of its origin. Simply init Dapr and add the [Dapr annotations]({{< ref arguments-annotations-overview.md >}}) to your Kubernetes definition to add the Dapr sidecar.
|
||||
|
||||
The Dapr control-plane and sidecar images come from the [daprio Docker Hub](https://hub.docker.com/u/daprio) container registry, which is a public registry.
|
||||
|
||||
For information about pulling your application images from a private registry, reference the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). If you are using Azure Container Registry with Azure Kubernetes Service, reference the [AKS documentation](https://docs.microsoft.com/en-us/azure/aks/cluster-container-registry-integration).
|
||||
|
||||
|
||||
## Quickstart
|
||||
|
||||
You can see some examples [here](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes) in the Kubernetes getting started quickstart.
|
||||
|
||||
## Supported versions
|
||||
Dapr is tested and supported on the following versions of Kubernetes.
|
||||
|
||||
| Supported Kubernetes versions |
|
||||
|-----------------------|
|
||||
| 1.17.x and above |
|
||||
|
||||
Dapr support for Kubernetes is aligned with [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy).
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
@ -61,7 +61,9 @@ The CPU and memory limits above account for the fact that Dapr is intended to a
|
|||
|
||||
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows for the Dapr control plane to survive node failures and other outages.
|
||||
|
||||
HA mode can be enabled with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
|
||||
For a new Dapr deployment, the HA mode can be set with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
|
||||
|
||||
For an existing Dapr deployment, enabling the HA mode requires additional steps. Please refer to [this paragraph]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
|
||||
|
||||
## Deploying Dapr with Helm
|
||||
|
||||
|
@ -139,6 +141,23 @@ APP ID APP PORT AGE CREATED
|
|||
nodeapp 3000 16h 2020-07-29 17:16.22
|
||||
```
|
||||
|
||||
### Enabling high-availability in an existing Dapr deployment
|
||||
|
||||
Enabling HA mode for an existing Dapr deployment requires two steps.
|
||||
|
||||
First, delete the existing placement stateful set:
|
||||
```bash
|
||||
kubectl delete statefulset.apps/dapr-placement-server -n dapr-system
|
||||
```
|
||||
Second, issue the upgrade command:
|
||||
```bash
|
||||
helm upgrade dapr ./charts/dapr -n dapr-system --set global.ha.enabled=true
|
||||
```
|
||||
|
||||
The placement stateful set must be deleted because, in HA mode, the placement service adds [Raft](https://raft.github.io/) for leader election. However, Kubernetes only allows for limited fields in stateful sets to be patched, subsequently failing the upgrade of the placement service.
|
||||
|
||||
Deletion of the existing placement stateful set is safe. The agents will reconnect and re-register with the newly created placement service, which will persist its table in Raft.
|
||||
|
||||
## Recommended security configuration
|
||||
|
||||
When properly configured, Dapr ensures secure communication. It can also make your application more secure with a number of built-in features.
|
||||
|
|
|
@ -81,6 +81,11 @@ From version 1.0.0 onwards, upgrading Dapr using Helm is no longer a disruptive
|
|||
|
||||
4. All done!
|
||||
|
||||
#### Upgrading existing Dapr to enable high availability mode
|
||||
|
||||
Enabling HA mode in an existing Dapr deployment requires additional steps. Please refer to [this paragraph]({{< ref "kubernetes-production.md#enabling-high-availability-in-an-existing-dapr-deployment" >}}) for more details.
|
||||
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Dapr on Kubernetes]({{< ref kubernetes-overview.md >}})
|
||||
|
|
|
@ -0,0 +1,16 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Preview features"
|
||||
linkTitle: "Preview features"
|
||||
weight: 4000
|
||||
description: "List of current preview features"
|
||||
---
|
||||
Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration. See [How-To: Enable preview features]({{<ref preview-features>}}) for more information.
|
||||
|
||||
|
||||
## Current preview features
|
||||
| Description | Setting | Documentation |
|
||||
|-------------|---------|---------------|
|
||||
| Preview feature that enables Actors to be called multiple times in the same call chain allowing call backs between actors. | Actor.Reentrancy | [Actor reentrancy]({{<ref actor-reentrancy>}}) |
|
||||
| Preview feature that allows Actor reminders to be partitioned across multiple keys in the underlying statestore in order to improve scale and performance. | Actor.TypeMetadata | [How-To: Partition Actor Reminders]({{< ref "howto-actors.md#partitioning-reminders" >}}) |
|
||||
| Preview feature that enables you to call endpoints using service invocation on gRPC services through Dapr via gRPC proxying, without requiring the use of Dapr SDKs. | proxy.grpc | [How-To: Invoke services using gRPC]({{<ref howto-invoke-services-grpc>}}) |
|
|
@ -72,5 +72,9 @@ After announcing a future breaking change, the change will happen in 2 releases
|
|||
## Upgrade on Hosting platforms
|
||||
Dapr can support multiple hosting platforms for production. With the 1.0 release the two supported platforms are Kubernetes and physical machines. For Kubernetes upgrades see [Production guidelines on Kubernetes]({{< ref kubernetes-production.md >}})
|
||||
|
||||
### Supported Kubernetes versions
|
||||
|
||||
Dapr follows [Kubernetes Version Skew Policy](https://kubernetes.io/releases/version-skew-policy).
|
||||
|
||||
## Related links
|
||||
* Read the [Versioning policy]({{< ref support-versioning.md >}})
|
||||
|
|
|
@ -21,7 +21,7 @@ To enable profiling in Standalone mode, pass the `--enable-profiling` and the `-
|
|||
Note that `profile-port` is not required, and if not provided Dapr will pick an available port.
|
||||
|
||||
```bash
|
||||
dapr run --enable-profiling true --profile-port 7777 python myapp.py
|
||||
dapr run --enable-profiling --profile-port 7777 python myapp.py
|
||||
```
|
||||
|
||||
### Kubernetes
|
||||
|
|
|
@ -178,7 +178,16 @@ Creates a persistent reminder for an actor.
|
|||
POST/PUT http://localhost:<daprPort>/v1.0/actors/<actorType>/<actorId>/reminders/<name>
|
||||
```
|
||||
|
||||
Body:
|
||||
#### Request Body
|
||||
|
||||
A JSON object with the following fields:
|
||||
|
||||
| Field | Description |
|
||||
|-------|--------------|
|
||||
| dueTime | Specifies the time after which the reminder is invoked, its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format
|
||||
| period | Specifies the period between different invocations, its format should be [time.ParseDuration](https://pkg.go.dev/time#ParseDuration) format or ISO 8601 duration format with optional recurrence.
|
||||
|
||||
The `period` field supports the `time.Duration` format and the ISO 8601 format (with some limitations). Only the duration format of ISO 8601 (`Rn/PnYnMnWnDTnHnMnS`) is supported for `period`. Here `Rn/` specifies that the reminder will be invoked `n` times; it should be a positive integer greater than zero. If certain values are zero, the `period` can be shortened; for example, 10 seconds can be specified in ISO 8601 duration as `PT10S`. If `Rn/` is not specified, the reminder will run an infinite number of times until deleted.
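For example, a request body like the following sketch registers a reminder that fires 5 times, 10 seconds apart:

```json
{
  "dueTime": "0h0m3s0ms",
  "period": "R5/PT10S"
}
```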
|
||||
|
||||
The following specifies a `dueTime` of 3 seconds and a period of 7 seconds.
|
||||
```json
|
||||
|
|
|
@ -0,0 +1,23 @@
|
|||
---
|
||||
type: docs
|
||||
title: "build-info CLI command reference"
|
||||
linkTitle: "build-info"
|
||||
description: "Detailed build information on dapr-cli and daprd executables"
|
||||
---
|
||||
|
||||
## Description
|
||||
|
||||
Get the version and git commit data for `dapr-cli` and `daprd` executables.
|
||||
|
||||
## Supported platforms
|
||||
|
||||
- [Self-Hosted]({{< ref self-hosted >}})
|
||||
|
||||
## Usage
|
||||
```bash
|
||||
dapr build-info
|
||||
```
|
||||
|
||||
## Related facts
|
||||
|
||||
You can get `daprd` build information directly by invoking the `daprd --build-info` command.
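For example:

```bash
daprd --build-info
```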
|
|
@ -25,6 +25,7 @@ dapr list [flags]
|
|||
| --- | --- | --- | --- |
|
||||
| `--help`, `-h` | | | Print this help message |
|
||||
| `--kubernetes`, `-k` | | `false` | List all Dapr pods in a Kubernetes cluster |
|
||||
| `--output`, `-o` | | `table` | The output format of the list. Valid values are: `json`, `yaml`, or `table`
|
||||
|
||||
## Examples
|
||||
|
||||
|
@ -37,3 +38,8 @@ dapr list
|
|||
```bash
|
||||
dapr list -k
|
||||
```
|
||||
|
||||
### List Dapr instances in JSON format
|
||||
```bash
|
||||
dapr list -o json
|
||||
```
|
||||
|
|
|
@ -36,7 +36,7 @@ dapr run [flags] [command]
|
|||
| `--help`, `-h` | | | Print this help message |
|
||||
| `--image` | | | The image to build the code in. Input is: `repository/image` |
|
||||
| `--log-level` | | `info` | The log verbosity. Valid values are: `debug`, `info`, `warn`, `error`, `fatal`, or `panic` |
|
||||
| `--placement-host-address` | `DAPR_PLACEMENT_HOST` | `localhost` | The host on which the placement service resides |
|
||||
| `--placement-host-address` | `DAPR_PLACEMENT_HOST` | `localhost` | The address of the placement service. Format is either `<hostname>` for default port (`6050` on Windows, `50005` on Linux/MacOS) or `<hostname>:<port>` for custom port |
|
||||
| `--profile-port` | | `7777` | The port for the profile server to listen on |
|
||||
| `--dapr-http-max-request-size` | | `4` | Max size of request body in MB.|
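For instance (the app command and values below are illustrative), these flags can be combined in a sketch like:

```bash
dapr run --app-id nodeapp --app-port 3000 \
  --placement-host-address localhost:50005 \
  --dapr-http-max-request-size 16 \
  -- node app.js
```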
|
||||
|
||||
|
|
|
@ -27,6 +27,7 @@ Table captions:
|
|||
|------|:----------------:|:-----------------:|--------|-------- | ---------|
|
||||
| [Apple Push Notifications (APN)]({{< ref apns.md >}}) | | ✅ | Alpha | v1 | 1.0 |
|
||||
| [Cron (Scheduler)]({{< ref cron.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
|
||||
| [GraphQL]({{< ref graphql.md >}}) | | ✅ | Alpha | v1 | 1.0 |
|
||||
| [HTTP]({{< ref http.md >}}) | | ✅ | GA | v1 | 1.0 |
|
||||
| [InfluxDB]({{< ref influxdb.md >}}) | | ✅ | Alpha | v1 | 1.0 |
|
||||
| [Kafka]({{< ref kafka.md >}}) | ✅ | ✅ | Alpha | v1 | 1.0 |
|
||||
|
@ -79,9 +80,9 @@ Table captions:
|
|||
| [Azure SignalR]({{< ref signalr.md >}}) | | ✅ | Alpha | v1 | 1.0 |
|
||||
| [Azure Storage Queues]({{< ref storagequeues.md >}}) | ✅ | ✅ | GA | v1 | 1.0 |
|
||||
|
||||
### Zeebe (Camunda)
|
||||
### Zeebe (Camunda Cloud)
|
||||
|
||||
| Name | Input<br>Binding | Output<br>Binding | Status | Component version | Since |
|
||||
|------|:----------------:|:-----------------:|--------| --------- | ---------- |
|
||||
| [Zeebe Command]({{< ref zeebe-command.md >}}) | | ✅ | Alpha | v1 | 1.2 |
|
||||
| [Zeebe Job Worker]({{< ref zeebe-jobworker.md >}}) | ✅ | | Alpha | v1 | 1.2 |
|
||||
|
|
|
@ -41,11 +41,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|--------|---------|---------|
|
||||
| storageAccount | Y | Output | The Blob Storage account name | `"myexmapleaccount"` |
|
||||
| storageAccessKey | Y | Output | The Blob Storage access key | `"access-key"` |
|
||||
| container | Y | Output | The name of the Blob Storage container to write to | `"myexamplecontainer"` |
|
||||
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `"true"` is the only allowed positive value. Other positive variations like `"True"` are not acceptable. Defaults to `"false"` | `"true"`, `"false"` |
|
||||
| getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader Defaults to `"10"` | `"1"`, `"2"`
|
||||
| storageAccount | Y | Output | The Blob Storage account name | `myexampleaccount` |
|
||||
| storageAccessKey | Y | Output | The Blob Storage access key | `access-key` |
|
||||
| container | Y | Output | The name of the Blob Storage container to write to | `myexamplecontainer` |
|
||||
| decodeBase64 | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
|
||||
| getBlobRetryCount | N | Output | Specifies the maximum number of HTTP GET requests that will be made while reading from a RetryReader Defaults to `10` | `1`, `2`
|
||||
|
||||
|
||||
## Binding support
|
||||
|
@ -55,6 +55,7 @@ This component supports **output binding** with the following operations:
|
|||
- `create` : [Create blob](#create-blob)
|
||||
- `get` : [Get blob](#get-blob)
|
||||
- `delete` : [Delete blob](#delete-blob)
|
||||
- `list`: [List blobs](#list-blobs)
|
||||
|
||||
### Create blob
|
||||
|
||||
|
@ -133,7 +134,7 @@ spec:
|
|||
- name: container
|
||||
value: container1
|
||||
- name: decodeBase64
|
||||
value: "true"
|
||||
value: true
|
||||
```
|
||||
|
||||
Then you can upload it as you would normally:
|
||||
|
@ -174,11 +175,17 @@ To perform a get blob operation, invoke the Azure Blob Storage binding with a `P
|
|||
{
|
||||
"operation": "get",
|
||||
"metadata": {
|
||||
"blobName": "myblob"
|
||||
"blobName": "myblob",
|
||||
"includeMetadata": "true"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The metadata parameters are:
|
||||
|
||||
- `blobName` - the name of the blob
|
||||
- `includeMetadata` - (optional) defines whether the user-defined metadata should be returned or not. Defaults to: false
|
||||
|
||||
#### Example
|
||||
|
||||
{{< tabs Windows Linux >}}
|
||||
|
@ -200,7 +207,10 @@ To perform a get blob operation, invoke the Azure Blob Storage binding with a `P
|
|||
|
||||
#### Response
|
||||
|
||||
The response body contains the value stored in the blob object.
The response body contains the value stored in the blob object. If enabled, the user-defined metadata will be returned as HTTP headers in the form:

`Metadata.key1: value1`
`Metadata.key2: value2`
|
||||
|
||||
### Delete blob
|
||||
|
||||
|
@ -215,6 +225,13 @@ To perform a delete blob operation, invoke the Azure Blob Storage binding with a
|
|||
}
|
||||
```
|
||||
|
||||
The metadata parameters are:
|
||||
|
||||
- `blobName` - the name of the blob
|
||||
- `deleteSnapshots` - (optional) required if the blob has associated snapshots. Specify one of the following two options:
|
||||
- include: Delete the base blob and all of its snapshots
|
||||
- only: Delete only the blob's snapshots and not the blob itself
|
||||
|
||||
#### Examples
|
||||
|
||||
##### Delete blob
|
||||
|
@ -242,13 +259,13 @@ To perform a delete blob operation, invoke the Azure Blob Storage binding with a
|
|||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"DeleteSnapshotOptions\": \"only\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"only\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "DeleteSnapshotOptions": "only" }}' \
|
||||
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "only" }}' \
|
||||
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
@ -261,13 +278,13 @@ To perform a delete blob operation, invoke the Azure Blob Storage binding with a
|
|||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"DeleteSnapshotOptions\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
```bash
|
||||
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "DeleteSnapshotOptions": "include" }}' \
|
||||
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "include" }}' \
|
||||
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
|
||||
```
|
||||
{{% /codetab %}}
|
||||
|
@ -278,6 +295,104 @@ To perform a delete blob operation, invoke the Azure Blob Storage binding with a
|
|||
|
||||
An HTTP 204 (No Content) and empty body will be returned if successful.
|
||||
|
||||
### List blobs
|
||||
|
||||
To perform a list blobs operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"operation": "list",
|
||||
"data": {
|
||||
"maxResults": 10,
|
||||
"prefix": "file",
|
||||
"marker": "2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC01NS03NzgtMjEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--",
|
||||
"include": {
|
||||
"snapshots": false,
|
||||
"metadata": true,
|
||||
"uncommittedBlobs": false,
|
||||
"copy": false,
|
||||
"deleted": false
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The data parameters are:

- `maxResults` - (optional) specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify `maxResults`, the server will return up to 5,000 items.
- `prefix` - (optional) filters the results to return only blobs whose names begin with the specified prefix.
- `marker` - (optional) a string value that identifies the portion of the list to be returned with the next list operation. The operation returns a marker value within the response body if the list returned was not complete. The marker value may then be used in a subsequent call to request the next set of list items.
- `include` - (optional) Specifies one or more datasets to include in the response:
  - snapshots: Specifies that snapshots should be included in the enumeration. Snapshots are listed from oldest to newest in the response. Defaults to: false
  - metadata: Specifies that blob metadata be returned in the response. Defaults to: false
  - uncommittedBlobs: Specifies that blobs for which blocks have been uploaded, but which have not been committed using Put Block List, be included in the response. Defaults to: false
  - copy: Version 2012-02-12 and newer. Specifies that metadata related to any current or previous Copy Blob operation should be included in the response. Defaults to: false
  - deleted: Version 2017-07-29 and newer. Specifies that soft deleted blobs should be included in the response. Defaults to: false
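As a rough sketch (the binding name and values are placeholders), a list operation can be invoked through the Dapr bindings endpoint like this:

```bash
# List up to 10 blobs whose names start with "file", including their user-defined metadata
curl -d '{ "operation": "list", "data": { "maxResults": 10, "prefix": "file", "include": { "metadata": true } } }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```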
|
||||
|
||||
#### Response
|
||||
|
||||
The response body contains the list of found blobs, as well as the following HTTP headers:
|
||||
|
||||
`Metadata.marker: 2!108!MDAwMDM1IWZpbGUtMDgtMDctMjAyMS0wOS0zOC0zNC04NjctMTEudHh0ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--`
|
||||
`Metadata.number: 10`
|
||||
|
||||
- `marker` - the next marker which can be used in a subsequent call to request the next set of list items. See the marker description on the data property of the binding input.
|
||||
- `number` - the number of found blobs
|
||||
|
||||
The list of blobs will be returned as a JSON array in the following form:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"XMLName": {
|
||||
"Space": "",
|
||||
"Local": "Blob"
|
||||
},
|
||||
"Name": "file-08-07-2021-09-38-13-776-1.txt",
|
||||
"Deleted": false,
|
||||
"Snapshot": "",
|
||||
"Properties": {
|
||||
"XMLName": {
|
||||
"Space": "",
|
||||
"Local": "Properties"
|
||||
},
|
||||
"CreationTime": "2021-07-08T07:38:16Z",
|
||||
"LastModified": "2021-07-08T07:38:16Z",
|
||||
"Etag": "0x8D941E3593C6573",
|
||||
"ContentLength": 1,
|
||||
"ContentType": "application/octet-stream",
|
||||
"ContentEncoding": "",
|
||||
"ContentLanguage": "",
|
||||
"ContentMD5": "xMpCOKC5I4INzFCab3WEmw==",
|
||||
"ContentDisposition": "",
|
||||
"CacheControl": "",
|
||||
"BlobSequenceNumber": null,
|
||||
"BlobType": "BlockBlob",
|
||||
"LeaseStatus": "unlocked",
|
||||
"LeaseState": "available",
|
||||
"LeaseDuration": "",
|
||||
"CopyID": null,
|
||||
"CopyStatus": "",
|
||||
"CopySource": null,
|
||||
"CopyProgress": null,
|
||||
"CopyCompletionTime": null,
|
||||
"CopyStatusDescription": null,
|
||||
"ServerEncrypted": true,
|
||||
"IncrementalCopy": null,
|
||||
"DestinationSnapshot": null,
|
||||
"DeletedTime": null,
|
||||
"RemainingRetentionDays": null,
|
||||
"AccessTier": "Hot",
|
||||
"AccessTierInferred": true,
|
||||
"ArchiveStatus": "",
|
||||
"CustomerProvidedKeySha256": null,
|
||||
"AccessTierChangeTime": null
|
||||
},
|
||||
"Metadata": null
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
## Metadata information
|
||||
|
||||
By default the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. This is configurable in the metadata property of the message (all optional).
|
||||
|
@ -289,13 +404,13 @@ Applications publishing to an Azure Blob Storage output binding should send a me
|
|||
"data": "file content",
|
||||
"metadata": {
|
||||
"blobName" : "filename.txt",
|
||||
"ContentType" : "text/plain",
|
||||
"ContentMD5" : "vZGKbMRDAnMs4BIwlXaRvQ==",
|
||||
"ContentEncoding" : "UTF-8",
|
||||
"ContentLanguage" : "en-us",
|
||||
"ContentDisposition" : "attachment",
|
||||
"CacheControl" : "no-cache",
|
||||
"Custom" : "hello-world",
|
||||
"contentType" : "text/plain",
|
||||
"contentMD5" : "vZGKbMRDAnMs4BIwlXaRvQ==",
|
||||
"contentEncoding" : "UTF-8",
|
||||
"contentLanguage" : "en-us",
|
||||
"contentDisposition" : "attachment",
|
||||
"cacheControl" : "no-cache",
|
||||
"custom" : "hello-world"
|
||||
},
|
||||
"operation": "create"
|
||||
}
|
||||
|
|
|
@ -58,6 +58,40 @@ This component supports **output binding** with the following operations:
|
|||
|
||||
- `create`
|
||||
|
||||
## Input Binding to Azure IoT Hub Events
|
||||
|
||||
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so Dapr apps can create input bindings to read Azure IoT Hub events using the Event Hubs bindings component.
|
||||
|
||||
The device-to-cloud events created by Azure IoT Hub devices will contain additional [IoT Hub System Properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-construct#system-properties-of-d2c-iot-hub-messages), and the Azure Event Hubs binding for Dapr will return the following as part of the response metadata:
|
||||
|
||||
| System Property Name | Description |
|
||||
|--------------------|:--------|
|
||||
| `iothub-connection-auth-generation-id` | The **connectionDeviceGenerationId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
|
||||
| `iothub-connection-auth-method` | The authentication method used to authenticate the device that sent the message. |
|
||||
| `iothub-connection-device-id` | The **deviceId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
|
||||
| `iothub-connection-module-id` | The **moduleId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
|
||||
| `iothub-enqueuedtime` | The date and time in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
|
||||
|
||||
For example, the headers of a HTTP `Read()` response would contain:
|
||||
|
||||
```nodejs
|
||||
{
|
||||
'user-agent': 'fasthttp',
|
||||
'host': '127.0.0.1:3000',
|
||||
'content-type': 'application/json',
|
||||
'content-length': '120',
|
||||
'iothub-connection-device-id': 'my-test-device',
|
||||
'iothub-connection-auth-generation-id': '637618061680407492',
|
||||
'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
|
||||
'iothub-connection-module-id': 'my-test-module-a',
|
||||
'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
|
||||
'x-opt-sequence-number': '35',
|
||||
'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
|
||||
'x-opt-offset': '21560',
|
||||
'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
|
||||
}
|
||||
```
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
|
|
|
@ -0,0 +1,74 @@
|
|||
---
|
||||
type: docs
|
||||
title: "GraphQL binding spec"
|
||||
linkTitle: "GraphQL"
|
||||
description: "Detailed documentation on the GraphQL binding component"
|
||||
aliases:
|
||||
- "/operations/components/setup-bindings/supported-bindings/graphql/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To setup GraphQL binding create a component of type `bindings.graphql`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration. To separate normal config settings (e.g. endpoint) from headers, "header:" is used as a prefix on the header names.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: example.bindings.graphql
|
||||
spec:
|
||||
type: bindings.graphql
|
||||
version: v1
|
||||
metadata:
|
||||
- name: endpoint
|
||||
value: http://localhost:8080/v1/graphql
|
||||
- name: header:x-hasura-access-key
|
||||
value: adminkey
|
||||
- name: header:Cache-Control
|
||||
value: no-cache
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|------------|-----|---------|
|
||||
| endpoint | Y | Output | GraphQL endpoint string. See [here](#url-format) for more details | `"http://localhost:4000/graphql/graphql"` |
|
||||
| header:[HEADERKEY] | N | Output | GraphQL header. Specify the header key in the `name`, and the header value in the `value`. | `"no-cache"` (see above) |
|
||||
|
||||
### Endpoint and Header format
|
||||
|
||||
The GraphQL binding uses [GraphQL client](https://github.com/machinebox/graphql) internally.
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `query`
|
||||
- `mutation`
|
||||
|
||||
### query
|
||||
|
||||
The `query` operation is used for `query` statements, which return the metadata along with the data in the form of an array of row values.
|
||||
|
||||
**Request**
|
||||
|
||||
```golang
|
||||
in := &dapr.InvokeBindingRequest{
|
||||
Name: "example.bindings.graphql",
|
||||
Operation: "query",
|
||||
Metadata: map[string]string{ "query": `query { users { name } }`},
|
||||
}
|
||||
```
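The `mutation` operation can be invoked in the same way. A hedged sketch using the Dapr HTTP bindings endpoint, assuming the component name above and that the statement is passed in a `mutation` metadata key (mirroring the `query` key shown for queries); the GraphQL schema in the statement is illustrative only:

```bash
# Invoke the mutation operation of the GraphQL binding via the Dapr HTTP API
curl -d '{ "operation": "mutation", "metadata": { "mutation": "mutation { addUser(name: \"Alice\") { id } }" } }' \
      http://localhost:<dapr-port>/v1.0/bindings/example.bindings.graphql
```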
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Bindings building block]({{< ref bindings >}})
|
||||
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
|
||||
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
|
||||
- [Bindings API reference]({{< ref bindings_api.md >}})
|
|
@ -9,53 +9,50 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To setup Kafka binding create a component of type `bindings.kafka`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
|
||||
To setup Kafka binding create a component of type `bindings.kafka`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration. For details on using `secretKeyRef`, see the guide on [how to reference secrets in components]({{< ref component-secrets.md >}}).
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
namespace: <NAMESPACE>
|
||||
name: kafka-binding
|
||||
namespace: default
|
||||
spec:
|
||||
type: bindings.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
- name: topics # Optional. in use for input bindings
|
||||
value: topic1,topic2
|
||||
- name: brokers
|
||||
value: localhost:9092,localhost:9093
|
||||
- name: consumerGroup
|
||||
value: group1
|
||||
- name: publishTopic # Optional. in use for output bindings
|
||||
value: topic3
|
||||
- name: authRequired # Required. default: "true"
|
||||
value: "false"
|
||||
- name: saslUsername # Optional.
|
||||
- name: topics # Optional. Used for input bindings.
|
||||
value: "topic1,topic2"
|
||||
- name: brokers # Required.
|
||||
value: "localhost:9092,localhost:9093"
|
||||
- name: consumerGroup # Optional. Used for input bindings.
|
||||
value: "group1"
|
||||
- name: publishTopic # Optional. Used for output bindings.
|
||||
value: "topic3"
|
||||
- name: authRequired # Required.
|
||||
value: "true"
|
||||
- name: saslUsername # Required if authRequired is `true`.
|
||||
value: "user"
|
||||
- name: saslPassword # Optional.
|
||||
value: "password"
|
||||
- name: saslPassword # Required if authRequired is `true`.
|
||||
secretKeyRef:
|
||||
name: kafka-secrets
|
||||
key: saslPasswordSecret
|
||||
- name: maxMessageBytes # Optional.
|
||||
value: 1024
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|------------|-----|---------|
|
||||
| topics | N | Input | A comma separated string of topics | `"mytopic1,topic2"` |
|
||||
| brokers | Y | Input/Output | A comma separated string of kafka brokers | `"localhost:9092,localhost:9093"` |
|
||||
| consumerGroup | N | Input | A kafka consumer group to listen on | `"group1"` |
|
||||
| publishTopic | Y | Output | The topic to publish to | `"mytopic"` |
|
||||
| authRequired | Y | Input/Output | Determines whether to use SASL authentication or not. Defaults to `"true"` | `"true"`, `"false"` |
|
||||
| saslUsername | N | Input/Output | The SASL username for authentication. Only used if `authRequired` is set to - `"true"` | `"user"` |
|
||||
| saslPassword | N | Input/Output | The SASL password for authentication. Only used if `authRequired` is set to - `"true"` | `"password"` |
|
||||
| maxMessageBytes | N | Input/Output | The maximum size allowed for a single Kafka message. Defaults to 1024 | `2048` |
|
||||
|
||||
| topics | N | Input | A comma-separated string of topics. | `"mytopic1,topic2"` |
|
||||
| brokers | Y | Input/Output | A comma-separated string of Kafka brokers. | `"localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"` |
|
||||
| consumerGroup | N | Input | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. | `"group1"` |
|
||||
| publishTopic | Y | Output | The topic to publish to. | `"mytopic"` |
|
||||
| authRequired | Y | Input/Output | Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"` |
|
||||
| saslUsername | N | Input/Output | The SASL username used for authentication. Only required if `authRequired` is set to `"true"`. | `"adminuser"` |
|
||||
| saslPassword | N | Input/Output | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authRequired` is set to `"true"`. | `""`, `"KeFg23!"` |
|
||||
| maxMessageBytes | N | Input/Output | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048` |
|
||||
|
||||
## Binding support
|
||||
|
||||
|
@ -87,7 +84,6 @@ curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
|
|||
}'
|
||||
```
|
||||
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
|
|
|
@ -41,6 +41,24 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| redisHost | Y | Output | The Redis host address | `"localhost:6379"` |
|
||||
| redisPassword | Y | Output | The Redis password | `"password"` |
|
||||
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
|
||||
| failover | N | Output | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | Output | The interval between checking for pending messages to redeliver. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | Output | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | Output | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`
| redisDB | N | Output | Database selected after connecting to redis. If `"redisType"` is `"cluster"` this option is ignored. Defaults to `"0"`. | `"0"`
| redisMaxRetries | N | Output | Maximum number of times to retry commands before giving up. Default is to not retry failed commands. | `"5"`
| redisMinRetryInterval | N | Output | Minimum backoff for redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
| redisMaxRetryInterval | N | Output | Maximum backoff for redis commands between each retry. Default is `"512ms"`; `"-1"` disables backoff. | `"5s"`
| dialTimeout | N | Output | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
| readTimeout | N | Output | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"`
| writeTimeout | N | Output | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout. | `"3s"`
| poolSize | N | Output | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"`
| poolTimeout | N | Output | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"`
| maxConnAge | N | Output | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
| minIdleConns | N | Output | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Output | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
| idleTimeout | N | Output | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
|
||||
|
||||
|
||||
## Binding support
|
||||
|
|
|
@ -45,7 +45,7 @@ spec:
|
|||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `topology`
|
||||
- `deploy-workflow`
|
||||
- `deploy-process`
|
||||
- `create-instance`
|
||||
- `cancel-instance`
|
||||
- `set-variables`
|
||||
|
@ -65,7 +65,7 @@ Zeebe uses gRPC under the hood for the Zeebe client we use in this binding. Plea
|
|||
|
||||
The `topology` operation obtains the current topology of the cluster the gateway is part of.
|
||||
|
||||
To perform a `topology` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `topology` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -119,28 +119,25 @@ The response values are:
|
|||
- `replicationFactor` - configured replication factor for this cluster
|
||||
- `gatewayVersion` - gateway version
|
||||
|
||||
#### deploy-workflow
|
||||
#### deploy-process
|
||||
|
||||
The `deploy-workflow` operation deploys a single workflow to Zeebe.
|
||||
The `deploy-process` operation deploys a single process to Zeebe.
|
||||
|
||||
To perform a `deploy-workflow` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `deploy-process` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"data": "YOUR_FILE_CONTENT",
|
||||
"metadata": {
|
||||
"fileName": "products-process.bpmn",
|
||||
"fileType": "bpmn"
|
||||
"fileName": "products-process.bpmn"
|
||||
},
|
||||
"operation": "deploy-workflow"
|
||||
"operation": "deploy-process"
|
||||
}
|
||||
```
|
||||
|
||||
The metadata parameters are:
|
||||
|
||||
- `fileName` - the name of the workflow file
|
||||
- `fileType` - (optional) the type of the file 'bpmn' or 'file'. If no type was given, the default will be recognized based on the file extension
|
||||
'bpmn' for file extension .bpmn, for all other files it will be set to 'file'
|
||||
- `fileName` - the name of the process file
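For example (a minimal sketch; the binding name and the BPMN payload are placeholders), the operation can be invoked through the Dapr bindings endpoint:

```bash
# Deploy a BPMN process definition through the Zeebe command binding
curl -d '{ "data": "<YOUR_BPMN_XML_CONTENT>", "metadata": { "fileName": "products-process.bpmn" }, "operation": "deploy-process" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```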
|
||||
|
||||
##### Response
|
||||
|
||||
|
@ -149,11 +146,11 @@ The binding returns a JSON with the following response:
|
|||
```json
|
||||
{
|
||||
"key": 2251799813687320,
|
||||
"workflows": [
|
||||
"processes": [
|
||||
{
|
||||
"bpmnProcessId": "products-process",
|
||||
"version": 3,
|
||||
"workflowKey": 2251799813685895,
|
||||
"processDefinitionKey": 2251799813685895,
|
||||
"resourceName": "products-process.bpmn"
|
||||
}
|
||||
]
|
||||
|
@ -163,23 +160,23 @@ The binding returns a JSON with the following response:
|
|||
The response values are:
|
||||
|
||||
- `key` - the unique key identifying the deployment
|
||||
- `workflows` - a list of deployed workflows
|
||||
- `processes` - a list of deployed processes
|
||||
- `bpmnProcessId` - the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific
|
||||
workflow definition
|
||||
process definition
|
||||
- `version` - the assigned process version
|
||||
- `workflowKey` - the assigned key, which acts as a unique identifier for this workflow
|
||||
- `resourceName` - the resource name from which this workflow was parsed
|
||||
- `processDefinitionKey` - the assigned key, which acts as a unique identifier for this process
|
||||
- `resourceName` - the resource name from which this process was parsed
|
||||
|
||||
#### create-instance
|
||||
|
||||
The `create-instance` operation creates and starts an instance of the specified workflow. The workflow definition to use to create the instance can be
|
||||
specified either using its unique key (as returned by the `deploy-workflow` operation), or using the BPMN process ID and a version.
|
||||
The `create-instance` operation creates and starts an instance of the specified process. The process definition to use to create the instance can be
|
||||
specified either using its unique key (as returned by the `deploy-process` operation), or using the BPMN process ID and a version.
|
||||
|
||||
Note that only workflows with none start events can be started through this command.
|
||||
Note that only processes with none start events can be started through this command.
|
||||
|
||||
##### By BPMN process ID
|
||||
|
||||
To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -197,22 +194,22 @@ To perform a `create-instance` operation, invoke the Zeebe command binding with
|
|||
|
||||
The data parameters are:
|
||||
|
||||
- `bpmnProcessId` - the BPMN process ID of the workflow definition to instantiate
|
||||
- `bpmnProcessId` - the BPMN process ID of the process definition to instantiate
|
||||
- `version` - (optional, default: latest version) the version of the process to instantiate
|
||||
- `variables` - (optional) JSON document that will instantiate the variables for the root variable scope of the
|
||||
workflow instance; it must be a JSON object, as variables will be mapped in a
|
||||
process instance; it must be a JSON object, as variables will be mapped in a
|
||||
key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and
|
||||
"b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a
|
||||
valid argument, as the root of the JSON document is an array and not an object
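As an illustrative sketch (binding name and values are placeholders), this request can be sent through the Dapr bindings endpoint:

```bash
# Start a process instance of the latest deployed version of "products-process"
curl -d '{ "data": { "bpmnProcessId": "products-process", "variables": { "productId": "some-product-id" } }, "operation": "create-instance" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```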
|
||||
|
||||
##### By workflow key
|
||||
##### By process definition key
|
||||
|
||||
To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `create-instance` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"workflowKey": 2251799813685895,
|
||||
"processDefinitionKey": 2251799813685895,
|
||||
"variables": {
|
||||
"productId": "some-product-id",
|
||||
"productName": "some-product-name",
|
||||
|
@ -225,44 +222,43 @@ To perform a `create-instance` operation, invoke the Zeebe command binding with
|
|||
|
||||
The data parameters are:
|
||||
|
||||
- `workflowKey` - the unique key identifying the workflow definition to instantiate
|
||||
- `variables` - (optional) JSON document that will instantiate the variables for the root variable scope of the
|
||||
workflow instance; it must be a JSON object, as variables will be mapped in a
|
||||
- `processDefinitionKey` - the unique key identifying the process definition to instantiate
|
||||
- `variables` - (optional) JSON document that will instantiate the variables for the root variable scope of the
|
||||
process instance; it must be a JSON object, as variables will be mapped in a
|
||||
key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and
|
||||
"b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a
|
||||
valid argument, as the root of the JSON document is an array and not an object
|
||||
|
||||
|
||||
##### Response
|
||||
|
||||
The binding returns a JSON with the following response:
|
||||
|
||||
```json
|
||||
{
|
||||
"workflowKey": 2251799813685895,
|
||||
"processDefinitionKey": 2251799813685895,
|
||||
"bpmnProcessId": "products-process",
|
||||
"version": 3,
|
||||
"workflowInstanceKey": 2251799813687851
|
||||
"processInstanceKey": 2251799813687851
|
||||
}
|
||||
```
|
||||
|
||||
The response values are:
|
||||
|
||||
- `workflowKey` - the key of the workflow definition which was used to create the workflow instance
|
||||
- `bpmnProcessId` - the BPMN process ID of the workflow definition which was used to create the workflow instance
|
||||
- `version` - the version of the workflow definition which was used to create the workflow instance
|
||||
- `workflowInstanceKey` - the unique identifier of the created workflow instance
|
||||
- `processDefinitionKey` - the key of the process definition which was used to create the process instance
|
||||
- `bpmnProcessId` - the BPMN process ID of the process definition which was used to create the process instance
|
||||
- `version` - the version of the process definition which was used to create the process instance
|
||||
- `processInstanceKey` - the unique identifier of the created process instance
|
||||
|
||||
#### cancel-instance
|
||||
|
||||
The `cancel-instance` operation cancels a running workflow instance.
|
||||
The `cancel-instance` operation cancels a running process instance.
|
||||
|
||||
To perform a `cancel-instance` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `cancel-instance` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
"data": {
|
||||
"workflowInstanceKey": 2251799813687851
|
||||
"processInstanceKey": 2251799813687851
|
||||
},
|
||||
"metadata": {},
|
||||
"operation": "cancel-instance"
|
||||
|
@ -271,7 +267,7 @@ To perform a `cancel-instance` operation, invoke the Zeebe command binding with
|
|||
|
||||
The data parameters are:
|
||||
|
||||
- `workflowInstanceKey` - the workflow instance key
|
||||
- `processInstanceKey` - the process instance key
|
||||
|
||||
##### Response
|
||||
|
||||
|
@ -279,9 +275,9 @@ The binding does not return a response body.
|
|||
|
||||
#### set-variables
|
||||
|
||||
The `set-variables` operation creates or updates variables for an element instance (e.g. workflow instance, flow element instance).
|
||||
The `set-variables` operation creates or updates variables for an element instance (e.g. process instance, flow element instance).
|
||||
|
||||
To perform a `set-variables` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `set-variables` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -300,7 +296,7 @@ To perform a `set-variables` operation, invoke the Zeebe command binding with a
|
|||
|
||||
The data parameters are:
|
||||
|
||||
- `elementInstanceKey` - the unique identifier of a particular element; can be the workflow instance key (as
|
||||
- `elementInstanceKey` - the unique identifier of a particular element; can be the process instance key (as
|
||||
obtained during instance creation), or a given element, such as a service task (see elementInstanceKey on the job message)
|
||||
- `local` - (optional, default: `false`) if true, the variables will be merged strictly into the local scope (as indicated by
|
||||
elementInstanceKey); this means the variables are not propagated to upper scopes.
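A hedged sketch of invoking this operation over the Dapr bindings endpoint (the binding name, key and variables are placeholders):

```bash
# Set variables on a process instance (or other element instance) through the Zeebe command binding
curl -d '{ "data": { "elementInstanceKey": 2251799813687851, "variables": { "productId": "some-product-id" }, "local": false }, "operation": "set-variables" }' \
      http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```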
|
||||
|
@ -329,7 +325,7 @@ The response values are:
|
|||
|
||||
The `resolve-incident` operation resolves an incident.
|
||||
|
||||
To perform a `resolve-incident` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `resolve-incident` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -353,7 +349,7 @@ The binding does not return a response body.
|
|||
|
||||
The `publish-message` operation publishes a single message. Messages are published to specific partitions computed from their correlation keys.
|
||||
|
||||
To perform a `publish-message` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `publish-message` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -397,7 +393,7 @@ The response values are:
|
|||
The `activate-jobs` operation iterates through all known partitions round-robin, activates up to the requested maximum number of jobs, and streams them back to
the client as they are activated.
|
||||
|
||||
To perform a `activate-jobs` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `activate-jobs` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -442,12 +438,12 @@ The response values are:
|
|||
|
||||
- `key` - the key, a unique identifier for the job
|
||||
- `type` - the type of the job (should match what was requested)
|
||||
- `workflowInstanceKey` - the job's workflow instance key
|
||||
- `bpmnProcessId` - the bpmn process ID of the job workflow definition
|
||||
- `workflowDefinitionVersion` - the version of the job workflow definition
|
||||
- `workflowKey` - the key of the job workflow definition
|
||||
- `processInstanceKey` - the job's process instance key
|
||||
- `bpmnProcessId` - the bpmn process ID of the job process definition
|
||||
- `processDefinitionVersion` - the version of the job process definition
|
||||
- `processDefinitionKey` - the key of the job process definition
|
||||
- `elementId` - the associated task element ID
|
||||
- `elementInstanceKey` - the unique key identifying the associated task, unique within the scope of the workflow instance
|
||||
- `elementInstanceKey` - the unique key identifying the associated task, unique within the scope of the process instance
|
||||
- `customHeaders` - a set of custom headers defined during modelling; returned as a serialized JSON document
|
||||
- `worker` - the name of the worker which activated this job
|
||||
- `retries` - the amount of retries left to this job (should always be positive)
|
||||
|
@ -458,7 +454,7 @@ The response values are:
|
|||
|
||||
The `complete-job` operation completes a job with the given payload, which allows completing the associated service task.
|
||||
|
||||
To perform a `complete-job` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `complete-job` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -490,7 +486,7 @@ The `fail-job` operation marks the job as failed; if the retries argument is pos
|
|||
worker could try again to process it. If it is zero or negative however, an incident will be raised, tagged with the given errorMessage, and the
|
||||
job will not be activatable until the incident is resolved.
|
||||
|
||||
To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `fail-job` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -520,7 +516,7 @@ The binding does not return a response body.
|
|||
The `update-job-retries` operation updates the number of retries a job has left. This is mostly useful for jobs that have run out of retries, should the
|
||||
underlying problem be solved.
|
||||
|
||||
To perform a `update-job-retries` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `update-job-retries` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -545,9 +541,9 @@ The binding does not return a response body.
|
|||
#### throw-error
|
||||
|
||||
The `throw-error` operation throws an error to indicate that a business error has occurred while processing the job. The error is identified
|
||||
by an error code and is handled by an error catch event in the workflow with the same error code.
|
||||
by an error code and is handled by an error catch event in the process with the same error code.
|
||||
|
||||
To perform a `throw-error` operation, invoke the Zeebe command binding with a `POST` method and the following JSON body:
|
||||
To perform a `throw-error` operation, invoke the Zeebe command binding with a `POST` method, and the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
|
|
@ -73,14 +73,16 @@ This component supports **input** binding interfaces.
|
|||
|
||||
### Input binding
|
||||
|
||||
The Zeebe workflow engine handles the workflow state as also workflow variables which can be passed
|
||||
on workflow instantiation or which can be updated or created during workflow execution. These variables
|
||||
#### Variables
|
||||
|
||||
The Zeebe process engine handles the process state as well as process variables, which can be passed
on process instantiation or which can be updated or created during process execution. These variables
can be passed to a registered job worker by defining the variable names as a comma-separated list in
|
||||
the `fetchVariables` metadata field. The workflow engine will then pass these variables with its current
|
||||
the `fetchVariables` metadata field. The process engine will then pass these variables with its current
|
||||
values to the job worker implementation.
|
||||
|
||||
If the binding will register three variables `productId`, `productName` and `productKey` then the service will
|
||||
be called with the following JSON:
|
||||
If the binding registers three variables `productId`, `productName` and `productKey`, then the worker will
be called with the following JSON body:
|
||||
|
||||
```json
|
||||
{
|
||||
|
@ -90,10 +92,35 @@ be called with the following JSON:
|
|||
}
|
||||
```
|
||||
|
||||
Note: if the `fetchVariables` metadata field is not set, all process variables will be passed to the worker.
|
||||
|
||||
#### Headers
|
||||
|
||||
The Zeebe process engine has the ability to pass custom task headers to a job worker. These headers can be defined for every
|
||||
[service task](https://stage.docs.zeebe.io/bpmn-workflows/service-tasks/service-tasks.html). Task headers will be passed
|
||||
by the binding as metadata (HTTP headers) to the job worker.
|
||||
|
||||
The binding will also pass the following job-related variables as metadata. The values will be passed as strings. The table also contains the
original data type so that it can be converted back to the equivalent data type in the programming language used by the worker.
|
||||
|
||||
| Metadata | Data type | Description |
|
||||
|------------------------------------|-----------|-------------------------------------------------------------------------------------------------|
|
||||
| X-Zeebe-Job-Key | int64 | The key, a unique identifier for the job |
|
||||
| X-Zeebe-Job-Type | string | The type of the job (should match what was requested) |
|
||||
| X-Zeebe-Process-Instance-Key | int64 | The job's process instance key |
|
||||
| X-Zeebe-Bpmn-Process-Id | string | The bpmn process ID of the job process definition |
|
||||
| X-Zeebe-Process-Definition-Version | int32 | The version of the job process definition |
|
||||
| X-Zeebe-Process-Definition-Key | int64 | The key of the job process definition |
|
||||
| X-Zeebe-Element-Id | string | The associated task element ID |
|
||||
| X-Zeebe-Element-Instance-Key | int64 | The unique key identifying the associated task, unique within the scope of the process instance |
|
||||
| X-Zeebe-Worker | string | The name of the worker which activated this job |
|
||||
| X-Zeebe-Retries | int32 | The amount of retries left to this job (should always be positive) |
|
||||
| X-Zeebe-Deadline | int64 | When the job can be activated again, sent as a UNIX epoch timestamp |
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Bindings building block]({{< ref bindings >}})
|
||||
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
|
||||
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
|
||||
- [Bindings API reference]({{< ref bindings_api.md >}})
|
||||
- [Bindings API reference]({{< ref bindings_api.md >}})
|
|
@ -9,7 +9,7 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To setup Apache Kafka pubsub create a component of type `pubsub.kafka`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
To setup Apache Kafka pubsub create a component of type `pubsub.kafka`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration. For details on using `secretKeyRef`, see the guide on [how to reference secrets in components]({{< ref component-secrets.md >}}).
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
@ -21,32 +21,35 @@ spec:
|
|||
type: pubsub.kafka
|
||||
version: v1
|
||||
metadata:
|
||||
# Kafka broker connection setting
|
||||
- name: brokers
|
||||
value: "dapr-kafka.myapp.svc.cluster.local:9092"
|
||||
- name: authRequired
|
||||
value: "true"
|
||||
- name: saslUsername
|
||||
value: "adminuser"
|
||||
- name: saslPassword
|
||||
value: "KeFg23!"
|
||||
- name: maxMessageBytes
|
||||
value: 1024
|
||||
- name: brokers # Required. Kafka broker connection setting
|
||||
value: "dapr-kafka.myapp.svc.cluster.local:9092"
|
||||
- name: consumerGroup # Optional. Used for input bindings.
|
||||
value: "group1"
|
||||
- name: clientID # Optional. Used as client tracing ID by Kafka brokers.
|
||||
value: "my-dapr-app-id"
|
||||
- name: authRequired # Required.
|
||||
value: "true"
|
||||
- name: saslUsername # Required if authRequired is `true`.
|
||||
value: "adminuser"
|
||||
- name: saslPassword # Required if authRequired is `true`.
|
||||
secretKeyRef:
|
||||
name: kafka-secrets
|
||||
key: saslPasswordSecret
|
||||
- name: maxMessageBytes # Optional.
|
||||
value: 1024
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| brokers | Y | Comma separated list of kafka brokers | `localhost:9092`, `dapr-kafka.myapp.svc.cluster.local:9092`
|
||||
| authRequired | N | Enable authentication on the Kafka broker. Defaults to `"false"`. |`"true"`, `"false"`
|
||||
| saslUsername | N | Username used for authentication. Only required if authRequired is set to true. | `"adminuser"`
|
||||
| saslPassword | N | Password used for authentication. Can be `secretKeyRef` to use a secret reference. Only required if authRequired is set to true. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}) | `""`, `"KeFg23!"`
|
||||
| maxMessageBytes | N | The maximum message size allowed for a single Kafka message. Default is 1024. | `2048`
|
||||
| brokers | Y | A comma-separated list of Kafka brokers. | `"localhost:9092,dapr-kafka.myapp.svc.cluster.local:9093"`
|
||||
| consumerGroup | N | A kafka consumer group to listen on. Each record published to a topic is delivered to one consumer within each consumer group subscribed to the topic. | `"group1"`
|
||||
| clientID | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to `"sarama"`. | `"my-dapr-app"`
|
||||
| authRequired | Y | Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"`
|
||||
| saslUsername | N | The SASL username used for authentication. Only required if `authRequired` is set to `"true"`. | `"adminuser"`
|
||||
| saslPassword | N | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authRequired` is set to `"true"`. | `""`, `"KeFg23!"`
|
||||
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048`
|
||||
|
||||
## Per-call metadata fields
|
||||
|
||||
|
@ -69,6 +72,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti
|
|||
```
|
||||
|
||||
## Create a Kafka instance
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
@ -82,7 +86,6 @@ To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](ht
|
|||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md##step-1-setup-the-pubsub-component" >}}) for instructions on configuring pub/sub components
|
||||
|
|
|
@ -58,6 +58,40 @@ For example, a Dapr app running on Kubernetes with `dapr.io/app-id: "myapp"` wil
|
|||
|
||||
Note: Dapr passes the name of the Consumer group to the EventHub and so this is not supplied in the metadata.
|
||||
|
||||
## Subscribing to Azure IoT Hub Events
|
||||
|
||||
Azure IoT Hub provides an [endpoint that is compatible with Event Hubs](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-read-builtin#read-from-the-built-in-endpoint), so the Azure Event Hubs pubsub component can also be used to subscribe to Azure IoT Hub events.
|
||||
|
||||
The device-to-cloud events created by Azure IoT Hub devices will contain additional [IoT Hub System Properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-messages-construct#system-properties-of-d2c-iot-hub-messages), and the Azure Event Hubs pubsub component for Dapr will return the following as part of the response metadata:
|
||||
|
||||
| System Property Name | Description |
|
||||
|--------------------|:--------|
|
||||
| `iothub-connection-auth-generation-id` | The **connectionDeviceGenerationId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
|
||||
| `iothub-connection-auth-method` | The authentication method used to authenticate the device that sent the message. |
|
||||
| `iothub-connection-device-id` | The **deviceId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
|
||||
| `iothub-connection-module-id` | The **moduleId** of the device that sent the message. See [IoT Hub device identity properties](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-identity-registry#device-identity-properties). |
|
||||
| `iothub-enqueuedtime` | The date and time in RFC3339 format that the device-to-cloud message was received by IoT Hub. |
|
||||
|
||||
For example, the headers of a delivered HTTP subscription message would contain:
|
||||
|
||||
```nodejs
|
||||
{
|
||||
'user-agent': 'fasthttp',
|
||||
'host': '127.0.0.1:3000',
|
||||
'content-type': 'application/json',
|
||||
'content-length': '120',
|
||||
'iothub-connection-device-id': 'my-test-device',
|
||||
'iothub-connection-auth-generation-id': '637618061680407492',
|
||||
'iothub-connection-auth-method': '{"scope":"module","type":"sas","issuer":"iothub","acceptingIpFilterRule":null}',
|
||||
'iothub-connection-module-id': 'my-test-module-a',
|
||||
'iothub-enqueuedtime': '2021-07-13T22:08:09Z',
|
||||
'x-opt-sequence-number': '35',
|
||||
'x-opt-enqueued-time': '2021-07-13T22:08:09Z',
|
||||
'x-opt-offset': '21560',
|
||||
'traceparent': '00-4655608164bc48b985b42d39865f3834-ed6cf3697c86e7bd-01'
|
||||
}
|
||||
```
|
||||
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
|
|
|
@ -37,6 +37,12 @@ spec:
|
|||
value: "0"
|
||||
- name: concurrencyMode
|
||||
value: parallel
|
||||
- name: backOffPolicy
|
||||
value: "exponential"
|
||||
- name: backOffInitialInterval
|
||||
value: "100"
|
||||
- name: backOffMaxRetries
|
||||
value: "16"
|
||||
```
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
|
@ -55,6 +61,20 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| prefetchCount | N | Number of messages to [prefetch](https://www.rabbitmq.com/consumer-prefetch.html). Consider changing this to a non-zero value for production environments. Defaults to `"0"`, which means that all available messages will be pre-fetched. | `"2"`
|
||||
| reconnectWait | N | How long to wait (in seconds) before reconnecting if a connection failure occurs | `"0"`
|
||||
| concurrencyMode | N | `parallel` is the default, and allows processing multiple messages in parallel (limited by the `app-max-concurrency` annotation, if configured). Set to `single` to disable parallel processing. In most situations there's no reason to change this. | `parallel`, `single`
|
||||
| backOffPolicy | N | Retry policy. `"constant"` is a backoff policy that always returns the same backoff delay. `"exponential"` is a backoff policy that increases the backoff period for each retry attempt using a randomization function that grows exponentially. Defaults to `"constant"`. | `constant`, `exponential` |
| backOffDuration | N | The fixed interval; only takes effect when the policy is constant. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure numeric format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"5s"`. | `"5s"`, `"5000"` |
| backOffInitialInterval | N | The initial backoff interval on retry. Only takes effect when the policy is exponential. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure numeric format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"500"` | `"50"` |
| backOffMaxInterval | N | The maximum backoff interval on retry. Only takes effect when the policy is exponential. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure numeric format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"60s"` | `"60000"` |
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"` which means the component will not retry processing the message. `"-1"` will retry indefinitely until the message is processed or the application is shutdown. Any positive number is treated as the maximum retry count. | `"3"` |
| backOffRandomizationFactor | N | Randomization factor, between 0 and 1, including 0 but not 1. Randomized interval = RetryInterval * (1 ± backOffRandomizationFactor). Defaults to `"0.5"`. | `"0.5"` |
| backOffMultiplier | N | Backoff multiplier for the policy. Increments the interval by multiplying it with the multiplier. Defaults to `"1.5"` | `"1.5"` |
| backOffMaxElapsedTime | N | After MaxElapsedTime the ExponentialBackOff returns Stop. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure numeric format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"15m"` | `"15m"` |
|
||||
|
||||
|
||||
### Backoff policy introduction
|
||||
Backoff retry strategy can instruct the Dapr sidecar how to resend the message. By default, the retry strategy is turned off, which means that the sidecar will send a message to the service once. When the service returns a result, the message will be marked as consumed regardless of whether it was processed correctly or not. The above applies when `autoAck` and `requeueInFailure` are set to false (if `requeueInFailure` is set to true, the message will get a second chance).

But in some cases, you may want Dapr to retry pushing the message with an (exponential or constant) backoff strategy until the message is processed normally or the number of retries is exhausted. This may be useful when your service breaks down abnormally but the sidecar is not stopped together with it. Adding the backoff policy will retry the message pushing during the service downtime, instead of marking these messages as consumed.
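As a rough illustration only (not a component setting), with `backOffPolicy: "exponential"`, `backOffInitialInterval: "100"` and the default multiplier of `1.5`, the nominal delays before randomization grow roughly like this:

```bash
# Prints the nominal exponential backoff delays in milliseconds (randomization factor not applied)
awk 'BEGIN { d = 100; for (i = 1; i <= 5; i++) { printf "retry %d: ~%dms\n", i, d; d *= 1.5 } }'
```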
|
||||
|
||||
|
||||
## Create a RabbitMQ server
|
||||
|
|
|
@ -61,6 +61,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
|
||||
| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
|
||||
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
|
||||
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| maxLenApprox | N | Maximum number of items inside a stream. The old entries are automatically evicted when the specified length is reached, so that the stream is left at a constant size. Defaults to unlimited. | `"10000"`
## Create a Redis instance

@@ -47,7 +47,7 @@ The above example uses secrets as plain strings. It is recommended to use a loca

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| vaultName | Y | The name of the Azure Key Vault | `"mykeyvault"`
| vaultName | Y | The name of the Azure Key Vault. If you only provide a name, it is converted to `[your_keyvault_name].vault.azure.net` in Dapr. If your URL uses another suffix, please provide the entire URI, such as `test.vault.azure.cn`. | `"mykeyvault"`, `"mykeyvault.vault.azure.cn"`
| spnTenantId | Y | Service Principal Tenant Id | `"spnTenantId"`
| spnClientId | Y | Service Principal App Id | `"spnAppId"`
| spnCertificateFile | Y | PFX certificate file path. <br></br> For Windows the `[pfx_certificate_file_fully_qualified_local_path]` value must use escaped backslashes, i.e. double backslashes. For example `"C:\\folder1\\folder2\\certfile.pfx"`. <br></br> For Linux you can use single slashes. For example `"/folder1/folder2/certfile.pfx"`. <br></br> See [configure the component](#configure-the-component) for more details | `"C:\\folder1\\folder2\\certfile.pfx"`, `"/folder1/folder2/certfile.pfx"`
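
Putting the fields above together, a minimal component sketch could look like the following; the component type string, the component name, and the placeholder values are assumptions rather than content from this page, so double-check them against the Azure Key Vault secret store guide.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: azurekeyvault                    # hypothetical component name
  namespace: default
spec:
  type: secretstores.azure.keyvault      # assumed component type
  version: v1
  metadata:
  - name: vaultName
    value: "mykeyvault"                  # or a full URI such as "mykeyvault.vault.azure.cn"
  - name: spnTenantId
    value: "[your_service_principal_tenant_id]"
  - name: spnClientId
    value: "[your_service_principal_app_id]"
  - name: spnCertificateFile
    value: "/folder1/folder2/certfile.pfx"   # use escaped backslashes on Windows
```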

@@ -39,6 +39,8 @@ spec:
    value : "[path_to_file_containing_token]"
  - name: vaultKVPrefix # Optional. Default: "dapr"
    value : "[vault_prefix]"
  - name: vaultKVUsePrefix # Optional. default: "true"
    value: "[true/false]"
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a local secret store such as [Kubernetes secret store]({{< ref kubernetes-secret-store.md >}}) or a [local file]({{< ref file-secret-store.md >}}) to bootstrap secure key storage.

@@ -57,6 +59,7 @@ The above example uses secrets as plain strings. It is recommended to use a loca
| vaultTokenMountPath | Y | Path to file containing token | `"path/to/file"` |
| vaultToken | Y | [Token](https://learn.hashicorp.com/tutorials/vault/tokens) for authentication within Vault. | `"tokenValue"` |
| vaultKVPrefix | N | The prefix in vault. Defaults to `"dapr"` | `"dapr"`, `"myprefix"` |
| vaultKVUsePrefix | N | If false, vaultKVPrefix is forced to be empty. If the value is not given or set to true, vaultKVPrefix is used when accessing the vault. Setting it to false is needed to be able to use the BulkGetSecret method of the store. | `"true"`, `"false"` |
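
For example, a minimal sketch that disables the prefix so that the BulkGetSecret method can be used might look like this; the component type string, the component name, and the token value are assumptions to verify against the Hashicorp Vault guide.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: vault                            # hypothetical component name
  namespace: default
spec:
  type: secretstores.hashicorp.vault     # assumed component type
  version: v1
  metadata:
  - name: vaultToken
    value: "[your_vault_token]"
  - name: vaultKVUsePrefix
    value: "false"                       # empty prefix, required for BulkGetSecret
```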
## Setup Hashicorp Vault instance

@@ -7,12 +7,34 @@ aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/kubernetes-secret-store/"
|
||||
---
|
||||
|
||||
## Summary
|
||||
## Default Kubernetes secret store component
|
||||
When Dapr is deployed to a Kubernetes cluster, a secret store with the name `kubernetes` is automatically provisioned. This pre-provisioned secret store allows you to use the native Kubernetes secret store with no need to author, deploy or maintain a component configuration file for the secret store and is useful for developers looking to simply access secrets stored natively in a Kubernetes cluster.
|
||||
|
||||
Kubernetes has a built-in secrets store which Dapr components can use to retrieve secrets from. No special configuration is needed to setup the Kubernetes secrets store, and you are able to retrieve secrets from the `http://localhost:3500/v1.0/secrets/kubernetes/[my-secret]` URL. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.
|
||||
A custom component definition file for a Kubernetes secret store can still be configured (See below for details). Using a custom definition decouples referencing the secret store in your code from the hosting platform as the store name is not fixed and can be customized, keeping you code more generic and portable. Additionally, by explicitly defining a Kubernetes secret store component you can connect to a Kubernetes secret store from a local Dapr self-hosted installation. This requires a valid [`kubeconfig`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file.

{{% alert title="Scoping secret store access" color="warning" %}}
When limiting access to secrets in your application using [secret scopes]({{<ref secrets-scopes.md>}}), it's important to include the default secret store in the scope definition in order to restrict it.
{{% /alert %}}
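
As a rough sketch of such a scope definition (the configuration name and the allowed secret are placeholders, and the exact schema should be taken from the secret scoping guide linked above), restricting the default `kubernetes` store might look like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig                    # hypothetical configuration name
  namespace: default
spec:
  secrets:
    scopes:
      - storeName: kubernetes        # the default store must be listed explicitly to be restricted
        defaultAccess: deny
        allowedSecrets: ["my-app-secret"]
```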

## Create a custom Kubernetes secret store component

To set up a Kubernetes secret store, create a component of type `secretstores.kubernetes`. See [this guide]({{< ref "setup-secret-store.md#apply-the-configuration" >}}) on how to create and apply a secretstore configuration. See this guide on [referencing secrets]({{< ref component-secrets.md >}}) to retrieve and use the secret with Dapr components.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mycustomsecretstore
  namespace: default
spec:
  type: secretstores.kubernetes
  version: v1
  metadata:
  - name: ""
```

## Related links
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})
- [How-To: Reference secrets in Dapr components]({{< ref component-secrets.md >}})
- [Secrets API reference]({{< ref secrets_api.md >}})
- [How To: Use secret scoping]({{<ref secrets-scopes.md>}})

@@ -26,38 +26,38 @@ The following stores are supported, at various levels, by the Dapr state managem

### Generic

| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------|------|---------------|------|--------|--------|-------------------|-------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [RethinkDB]({{< ref setup-rethinkdb.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------|------|---------------|------|------|--------|--------|-------------------|-------|
| [Aerospike]({{< ref setup-aerospike.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Apache Cassandra]({{< ref setup-cassandra.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [Cloudstate]({{< ref setup-cloudstate.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Couchbase]({{< ref setup-couchbase.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hashicorp Consul]({{< ref setup-consul.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Hazelcast]({{< ref setup-hazelcast.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| [Memcached]({{< ref setup-memcached.md >}}) | ✅ | ❌ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| [MongoDB]({{< ref setup-mongodb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | GA | v1 | 1.0 |
| [MySQL]({{< ref setup-mysql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [PostgreSQL]({{< ref setup-postgresql.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Redis]({{< ref setup-redis.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [RethinkDB]({{< ref setup-rethinkdb.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Zookeeper]({{< ref setup-zookeeper.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |

### Amazon Web Services (AWS)
| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------|------|---------------|------|--------|--------|-------------------|-------|
| [AWS DynamoDB]({{< ref setup-dynamodb.md>}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------|------|---------------|------|------|--------|--------|-------------------|-------|
| [AWS DynamoDB]({{< ref setup-dynamodb.md>}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |

### Google Cloud Platform (GCP)
| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------|------|---------------|------|--------|--------|-------------------|-------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------|------|---------------|------|------|--------|--------|-------------------|-------|
| [GCP Firestore]({{< ref setup-firestore.md >}}) | ✅ | ❌ | ❌ | ❌ | ❌ | Alpha | v1 | 1.0 |

### Microsoft Azure

| Name | CRUD | Transactional | ETag | Actors | Status | Component version | Since |
|------|------|---------------|------|--------|--------|-------------------|-------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | ❌ | GA | v1 | 1.0 |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | ✅ | Alpha | v1 | 1.0 |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | ❌ | Alpha | v1 | 1.0 |
| Name | CRUD | Transactional | ETag | [TTL]({{< ref state-store-ttl.md >}}) | [Actors]({{< ref howto-actors.md >}}) | Status | Component version | Since |
|------|------|---------------|------|------|--------|--------|-------------------|-------|
| [Azure Blob Storage]({{< ref setup-azure-blobstorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | GA | v1 | 1.0 |
| [Azure CosmosDB]({{< ref setup-azure-cosmosdb.md >}}) | ✅ | ✅ | ✅ | ✅ | ✅ | GA | v1 | 1.0 |
| [Azure SQL Server]({{< ref setup-sqlserver.md >}}) | ✅ | ✅ | ✅ | ❌ | ✅ | Alpha | v1 | 1.0 |
| [Azure Table Storage]({{< ref setup-azure-tablestorage.md >}}) | ✅ | ❌ | ✅ | ❌ | ❌ | Alpha | v1 | 1.0 |

@@ -22,8 +22,10 @@ spec:
  type: state.mongodb
  version: v1
  metadata:
  - name: server
    value: <REPLACE-WITH-SERVER> # Required unless "host" field is set. Example: "server.example.com"
  - name: host
    value: <REPLACE-WITH-HOST> # Required. Example: "mongo-mongodb.default.svc.cluster.local:27017"
    value: <REPLACE-WITH-HOST> # Required unless "server" field is set. Example: "mongo-mongodb.default.svc.cluster.local:27017"
  - name: username
    value: <REPLACE-WITH-USERNAME> # Optional. Example: "admin"
  - name: password

@@ -56,15 +58,18 @@ If you wish to use MongoDB as an actor store, append the following to the yaml.

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| host | Y | The host to connect to | `"mongo-mongodb.default.svc.cluster.local:27017"`
| username | N | The username of the user to connect with | `"admin"`
| password | N | The password of the user | `"password"`
| server | Y<sup>*</sup> | The server to connect to, when using DNS SRV record | `"server.example.com"`
| host | Y<sup>*</sup> | The host to connect to | `"mongo-mongodb.default.svc.cluster.local:27017"`
| username | N | The username of the user to connect with (applicable in conjunction with `host`) | `"admin"`
| password | N | The password of the user (applicable in conjunction with `host`) | `"password"`
| databaseName | N | The name of the database to use. Defaults to `"daprStore"` | `"daprStore"`
| collectionName | N | The name of the collection to use. Defaults to `"daprCollection"` | `"daprCollection"`
| writeconcern | N | The write concern to use | `"majority"`
| readconcern | N | The read concern to use | `"majority"`, `"local"`,`"available"`, `"linearizable"`, `"snapshot"`
| operationTimeout | N | The timeout for the operation. Defaults to `"5s"` | `"5s"`

> <sup>[*]</sup> The `server` and `host` fields are mutually exclusive. If neither or both are set, Dapr will return an error.
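
For instance, a minimal sketch using the DNS SRV `server` variant might look like the following; the component and database names are placeholders, while `type: state.mongodb` is taken from the snippet above.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore               # hypothetical component name
  namespace: default
spec:
  type: state.mongodb
  version: v1
  metadata:
  - name: server
    value: "server.example.com"  # DNS SRV record; leave "host" unset
  - name: databaseName
    value: "daprStore"
  - name: operationTimeout
    value: "5s"
```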

## Setup MongoDB

{{< tabs "Self-Hosted" "Kubernetes" >}}

@@ -61,9 +61,25 @@ If you wish to use Redis as an actor store, append the following to the yaml.

| consumerID | N | The consumer group ID | `"myGroup"`
| enableTLS | N | If the Redis instance supports TLS with public certificates, this can be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds | `3000000000`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enable failover configuration. Requires sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/topics/sentinel) | `""`, `"127.0.0.1:6379"`
| redeliverInterval | N | The interval between checking for pending messages to redeliver. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | The type of Redis. There are two valid values: `"node"` for single-node mode and `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"`
| redisDB | N | Database selected after connecting to Redis. If `"redisType"` is `"cluster"` this option is ignored. Defaults to `"0"`. | `"0"`
| redisMaxRetries | N | Alias for `maxRetries`. If both values are set, `maxRetries` is ignored. | `"5"`
| redisMinRetryInterval | N | Minimum backoff for Redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
| redisMaxRetryInterval | N | Alias for `maxRetryBackoff`. If both values are set, `maxRetryBackoff` is ignored. | `"5s"`
| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
| readTimeout | N | Timeout for socket reads. If reached, Redis commands will fail with a timeout instead of blocking. Defaults to `"3s"`; `"-1"` for no timeout. | `"3s"`
| writeTimeout | N | Timeout for socket writes. If reached, Redis commands will fail with a timeout instead of blocking. Default is readTimeout. | `"3s"`
| poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"`
| poolTimeout | N | Amount of time the client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"`
| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
| idleCheckFrequency | N | Frequency of idle checks made by the idle connections reaper. Default is `"1m"`. `"-1"` disables the idle connections reaper. | `"-1"`
| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than the server's timeout. Default is `"5m"`. `"-1"` disables the idle timeout check. | `"10m"`
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
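
To illustrate how the failover and actor settings combine, here is a hedged sketch; the component type `state.redis`, the `redisHost`/`redisPassword` field names, and the component name are assumptions to be checked against the Redis state store guide.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore              # hypothetical component name
  namespace: default
spec:
  type: state.redis             # assumed component type
  version: v1
  metadata:
  - name: redisHost             # assumed connection field name
    value: "127.0.0.1:26379"    # Sentinel endpoint when failover is enabled
  - name: redisPassword         # assumed field name
    value: ""
  - name: failover
    value: "true"               # requires sentinelMasterName to be set
  - name: sentinelMasterName
    value: "mymaster"
  - name: actorStateStore
    value: "true"               # allow this store to back actors
```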

## Setup Redis