mirror of https://github.com/dapr/docs.git
Merge branch 'v1.10' of github.com:dapr/docs into bulksubscribe_doc
commit 855d44cc3b
@ -7,7 +7,8 @@ weight: 3000
---

This article describes how to use Dapr to connect services using gRPC.

By using Dapr's gRPC proxying capability, you can use your existing proto-based gRPC services and have the traffic go through the Dapr sidecar. Doing so yields the following [Dapr service invocation]({{< ref service-invocation-overview.md >}}) benefits to developers:

1. Mutual authentication
2. Tracing

@ -16,11 +17,11 @@ By using Dapr's gRPC proxying capability, you can use your existing proto based
5. Network level resiliency
6. API token-based authentication

Dapr allows proxying all kinds of gRPC invocations, including unary and [stream-based](#proxying-of-streaming-rpcs) ones.

## Step 1: Run a gRPC server

The following example is taken from the ["hello world" grpc-go example](https://github.com/grpc/grpc-go/tree/master/examples/helloworld). Although this example is in Go, the same concepts apply to all programming languages supported by gRPC.

```go
package main

@ -175,7 +176,7 @@ response = service.sayHello({ 'name': 'Darth Bane' }, metadata)
{{% codetab %}}
```c++
grpc::ClientContext context;
context.AddMetadata("dapr-app-id", "server");
```
{{% /codetab %}}

@ -191,7 +192,7 @@ dapr run --app-id client --dapr-grpc-port 50007 -- go run main.go

If you're running Dapr locally with Zipkin installed, open the browser at `http://localhost:9411` and view the traces between the client and server.

### Deploying to Kubernetes

Set the following Dapr annotations on your deployment:

@ -241,15 +242,88 @@ The example above showed you how to directly invoke a different service running

For more information on tracing and logs, see the [observability]({{< ref observability-concept.md >}}) article.

## Proxying of streaming RPCs

When using Dapr to proxy streaming RPC calls using gRPC, you must set an additional metadata option `dapr-stream` with the value `true`.

For example:

{{< tabs Go Java Dotnet Python JavaScript Ruby "C++">}}

{{% codetab %}}
```go
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-app-id", "server")
ctx = metadata.AppendToOutgoingContext(ctx, "dapr-stream", "true")
```
{{% /codetab %}}

{{% codetab %}}
```java
Metadata headers = new Metadata();
Metadata.Key<String> appIdKey = Metadata.Key.of("dapr-app-id", Metadata.ASCII_STRING_MARSHALLER);
headers.put(appIdKey, "server");
Metadata.Key<String> streamKey = Metadata.Key.of("dapr-stream", Metadata.ASCII_STRING_MARSHALLER);
headers.put(streamKey, "true");
```
{{% /codetab %}}

{{% codetab %}}
```csharp
var metadata = new Metadata
{
    { "dapr-app-id", "server" },
    { "dapr-stream", "true" }
};
```
{{% /codetab %}}

{{% codetab %}}
```python
metadata = (('dapr-app-id', 'server'), ('dapr-stream', 'true'),)
```
{{% /codetab %}}

{{% codetab %}}
```javascript
const metadata = new grpc.Metadata();
metadata.add('dapr-app-id', 'server');
metadata.add('dapr-stream', 'true');
```
{{% /codetab %}}

{{% codetab %}}
```ruby
metadata = { 'dapr-app-id' => 'server', 'dapr-stream' => 'true' }
```
{{% /codetab %}}

{{% codetab %}}
```c++
grpc::ClientContext context;
context.AddMetadata("dapr-app-id", "server");
context.AddMetadata("dapr-stream", "true");
```
{{% /codetab %}}

{{< /tabs >}}

### Streaming gRPCs and Resiliency

When proxying streaming gRPCs, due to their long-lived nature, [resiliency]({{< ref "resiliency-overview.md" >}}) policies are applied on the "initial handshake" only. As a consequence:

- If the stream is interrupted after the initial handshake, it will not be automatically re-established by Dapr. Your application will be notified that the stream has ended and will need to recreate it.
- Retry policies only impact the initial connection "handshake". If your resiliency policy includes retries, Dapr will detect failures in establishing the initial connection to the target app and will retry until it succeeds (or until the number of retries defined in the policy is exhausted).
- Likewise, timeouts defined in resiliency policies only apply to the initial "handshake". After the connection has been established, timeouts no longer impact the stream.

## Related Links

* [Service invocation overview]({{< ref service-invocation-overview.md >}})
* [Service invocation API specification]({{< ref service_invocation_api.md >}})
* [gRPC proxying community call video](https://youtu.be/B_vkXqptpXY?t=70)

## Community call demo

Watch this [video](https://youtu.be/B_vkXqptpXY?t=69) on how to use Dapr's gRPC proxying capability:

<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/B_vkXqptpXY?start=69" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>

@ -16,7 +16,21 @@ Now that you've [set up the workflow and its activities in your application]({{<

Manage your workflow within your code. In the `OrderProcessingWorkflow` example from the [Author a workflow]({{< ref "howto-author-workflow.md#write-the-workflow" >}}) guide, the workflow is registered in the code. You can then start, terminate, and get information about the workflow:

```csharp
string orderId = "exampleOrderId";
string workflowComponent = "dapr";
string workflowName = "OrderProcessingWorkflow";
OrderPayload input = new OrderPayload("Paperclips", 99.95);
Dictionary<string, string> workflowOptions = null; // This is an optional parameter
CancellationToken cts = CancellationToken.None;

// Start the workflow. This returns a "WorkflowReference" which contains the instanceID for the particular workflow instance.
WorkflowReference startResponse = await daprClient.StartWorkflowAsync(orderId, workflowComponent, workflowName, input, workflowOptions, cts);

// Get information on the workflow. This response contains information such as the status of the workflow, when it started, and more!
GetWorkflowResponse getResponse = await daprClient.GetWorkflowAsync(orderId, workflowComponent, workflowName);

// Terminate the workflow
await daprClient.TerminateWorkflowAsync(orderId, workflowComponent);
```

{{% /codetab %}}

@ -1,7 +1,7 @@
---
type: docs
title: "How to: Implement pluggable components"
linkTitle: "Implement pluggable components"
weight: 1100
description: "Learn how to author and implement pluggable components"
---

@ -105,6 +105,201 @@ After generating the above state store example's service scaffolding code using

This concrete implementation and auxiliary code are the **core** of your pluggable component. They define how your component behaves when handling gRPC requests from Dapr.

## Returning semantic errors

Returning semantic errors is also part of the pluggable component protocol. The component must return specific gRPC codes that have semantic meaning for the user application. These errors apply to a variety of situations, from concurrency requirements to informational only.

| Error | gRPC error code | Source component | Description |
| ------------------------ | ------------------------------- | ---------------- | ----------- |
| ETag Mismatch | `codes.FailedPrecondition` | State store | Error mapping to meet concurrency requirements |
| ETag Invalid | `codes.InvalidArgument` | State store | |
| Bulk Delete Row Mismatch | `codes.Internal` | State store | |

Learn more about concurrency requirements in the [State Management overview]({{< ref "state-management-overview.md#concurrency" >}}).

The following examples demonstrate how to return an error in your own pluggable component, changing the messages to suit your needs.

{{< tabs ".NET" "Java" "Go" >}}
<!-- .NET -->
{{% codetab %}}

> **Important:** In order to use .NET for error mapping, first install the [`Google.Api.CommonProtos` NuGet package](https://www.nuget.org/packages/Google.Api.CommonProtos/).

**ETag Mismatch**

```csharp
var badRequest = new BadRequest();
var des = "The ETag field provided does not match the one in the store";
badRequest.FieldViolations.Add(
    new Google.Rpc.BadRequest.Types.FieldViolation
    {
        Field = "etag",
        Description = des
    });

var baseStatusCode = Grpc.Core.StatusCode.FailedPrecondition;
var status = new Google.Rpc.Status
{
    Code = (int)baseStatusCode
};

status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(badRequest));

var metadata = new Metadata();
metadata.Add("grpc-status-details-bin", status.ToByteArray());
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
```

**ETag Invalid**

```csharp
var badRequest = new BadRequest();
var des = "The ETag field must only contain alphanumeric characters";
badRequest.FieldViolations.Add(
    new Google.Rpc.BadRequest.Types.FieldViolation
    {
        Field = "etag",
        Description = des
    });

var baseStatusCode = Grpc.Core.StatusCode.InvalidArgument;
var status = new Google.Rpc.Status
{
    Code = (int)baseStatusCode
};

status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(badRequest));

var metadata = new Metadata();
metadata.Add("grpc-status-details-bin", status.ToByteArray());
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
```

**Bulk Delete Row Mismatch**

```csharp
var errorInfo = new Google.Rpc.ErrorInfo();

errorInfo.Metadata.Add("expected", "100");
errorInfo.Metadata.Add("affected", "99");

var baseStatusCode = Grpc.Core.StatusCode.Internal;
var status = new Google.Rpc.Status
{
    Code = (int)baseStatusCode
};

status.Details.Add(Google.Protobuf.WellKnownTypes.Any.Pack(errorInfo));

var metadata = new Metadata();
metadata.Add("grpc-status-details-bin", status.ToByteArray());
throw new RpcException(new Grpc.Core.Status(baseStatusCode, "fake-err-msg"), metadata);
```

{{% /codetab %}}

<!-- Java -->
{{% codetab %}}

Just like the [Dapr Java SDK](https://github.com/tmacam/dapr-java-sdk/), the Java Pluggable Components SDK uses [Project Reactor](https://projectreactor.io/), which provides an asynchronous API for Java.

Errors can be returned directly by:
1. Calling the `.error()` method in the `Mono` or `Flux` that your method returns
1. Providing the appropriate exception as a parameter

You can also raise an exception, as long as it is captured and fed back to your resulting `Mono` or `Flux`.

**ETag Mismatch**

```java
final Status status = Status.newBuilder()
    .setCode(io.grpc.Status.Code.FAILED_PRECONDITION.value())
    .setMessage("fake-err-msg-for-etag-mismatch")
    .addDetails(Any.pack(BadRequest.FieldViolation.newBuilder()
        .setField("etag")
        .setDescription("The ETag field provided does not match the one in the store")
        .build()))
    .build();
return Mono.error(StatusProto.toStatusException(status));
```

**ETag Invalid**

```java
final Status status = Status.newBuilder()
    .setCode(io.grpc.Status.Code.INVALID_ARGUMENT.value())
    .setMessage("fake-err-msg-for-invalid-etag")
    .addDetails(Any.pack(BadRequest.FieldViolation.newBuilder()
        .setField("etag")
        .setDescription("The ETag field must only contain alphanumeric characters")
        .build()))
    .build();
return Mono.error(StatusProto.toStatusException(status));
```

**Bulk Delete Row Mismatch**

```java
final Status status = Status.newBuilder()
    .setCode(io.grpc.Status.Code.INTERNAL.value())
    .setMessage("fake-err-msg-for-bulk-delete-row-mismatch")
    .addDetails(Any.pack(ErrorInfo.newBuilder()
        .putAllMetadata(Map.ofEntries(
            Map.entry("affected", "99"),
            Map.entry("expected", "100")
        ))
        .build()))
    .build();
return Mono.error(StatusProto.toStatusException(status));
```

{{% /codetab %}}

<!-- Go -->
{{% codetab %}}

**ETag Mismatch**

```go
st := status.New(codes.FailedPrecondition, "fake-err-msg")
desc := "The ETag field provided does not match the one in the store"
v := &errdetails.BadRequest_FieldViolation{
	Field:       etagField,
	Description: desc,
}
br := &errdetails.BadRequest{}
br.FieldViolations = append(br.FieldViolations, v)
st, err := st.WithDetails(br)
```

**ETag Invalid**

```go
st := status.New(codes.InvalidArgument, "fake-err-msg")
desc := "The ETag field must only contain alphanumeric characters"
v := &errdetails.BadRequest_FieldViolation{
	Field:       etagField,
	Description: desc,
}
br := &errdetails.BadRequest{}
br.FieldViolations = append(br.FieldViolations, v)
st, err := st.WithDetails(br)
```

**Bulk Delete Row Mismatch**

```go
st := status.New(codes.Internal, "fake-err-msg")
br := &errdetails.ErrorInfo{}
br.Metadata = map[string]string{
	"affected": "99",
	"expected": "100",
}
st, err := st.WithDetails(br)
```

{{% /codetab %}}

{{< /tabs >}}

## Next steps

- Get started with developing a .NET pluggable component using this [sample code](https://github.com/dapr/samples/tree/master/pluggable-components-dotnet-template)

@ -0,0 +1,7 @@
---
type: docs
title: "Multi-App Run"
linkTitle: "Multi-App Run"
weight: 300
description: "Support for running multiple Dapr applications with one command"
---

@ -0,0 +1,85 @@
---
type: docs
title: Multi-App Run overview
linkTitle: Multi-App Run overview
weight: 1000
description: Run multiple applications with one CLI command
---

{{% alert title="Note" color="primary" %}}
Multi-App Run is currently a preview feature only supported in Linux/macOS.
{{% /alert %}}

Let's say you want to run several applications locally to test them together, similar to a production scenario. With a local Kubernetes cluster, you'd be able to do this with helm/deployment YAML files. You'd also have to build them as containers and set up Kubernetes, which can add some complexity.

Instead, you simply want to run them as local executables in self-hosted mode. However, self-hosted mode requires you to:

- Run multiple `dapr run` commands
- Keep track of all ports opened (you cannot have duplicate ports for different applications)
- Remember the resources folders and configuration files that each application refers to
- Recall all of the additional flags you used to tweak the `dapr run` command behavior (`--app-health-check-path`, `--dapr-grpc-port`, `--unix-domain-socket`, etc.)

With Multi-App Run, you can start multiple applications in self-hosted mode using a single `dapr run -f` command and a template file. The template file describes how to start multiple applications as if you had run many separate CLI `run` commands. By default, this template file is called `dapr.yaml`.

## Multi-App Run template file

When you execute `dapr run -f .`, it uses the multi-app template file (named `dapr.yaml`) present in the current directory to run all the applications.

You can give the template file a name other than the default, for example `dapr run -f ./<your-preferred-file-name>.yaml`.

The following example includes some of the template properties you can customize for your applications. In the example, you can simultaneously launch 2 applications with app IDs of `processor` and `emit-metrics`.

```yaml
version: 1
apps:
  - appID: processor
    appDirPath: ../apps/processor/
    appPort: 9081
    daprHTTPPort: 3510
    command: ["go", "run", "app.go"]
  - appID: emit-metrics
    appDirPath: ../apps/emit-metrics/
    daprHTTPPort: 3511
    env:
      DAPR_HOST_ADD: localhost
    command: ["go", "run", "app.go"]
```

For a more in-depth example and explanation of the template properties, see [Multi-app template]({{< ref multi-app-template.md >}}).

## Locations for resources and configuration files

You have options on where to place your applications' resources and configuration files when using Multi-App Run.

### Point to one file location (with convention)

You can set all of your applications' resources and configurations at the `~/.dapr` root. This is helpful when all applications share the same resources path, like when testing on a local machine.

### Separate file locations for each application (with convention)

When using Multi-App Run, each application directory can have a `.dapr` folder, which contains a `config.yaml` file and a `resources` directory. Otherwise, if the `.dapr` directory is not present within the app directory, the default `~/.dapr/resources/` and `~/.dapr/config.yaml` locations are used.

If you decide to add a `.dapr` directory in each application directory, with a `/resources` directory and `config.yaml` file, you can specify different resources paths for each application. This approach remains within convention by using the default `~/.dapr`.

### Point to separate locations (custom)

You can also name each app directory's `.dapr` directory something other than `.dapr`, such as `webapp` or `backend`. This helps if you'd like to be explicit about resource or application directory paths.

## Logs

Logs for the application and `daprd` are captured in separate files. These log files are created automatically under the `.dapr/logs` directory in each application directory (`appDirPath` in the template). The log file names follow the patterns seen below:

- `<appID>_app_<timestamp>.log` (file name format for `app` logs)
- `<appID>_daprd_<timestamp>.log` (file name format for `daprd` logs)

Even if you've decided to rename your resources folder to something other than `.dapr`, the log files are written only to the `.dapr/logs` folder (created in the application directory).

## Watch the demo

Watch [this video for an overview on Multi-App Run](https://youtu.be/s1p9MNl4VGo?t=2456):

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/s1p9MNl4VGo?start=2456" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

## Next steps

[Learn the Multi-App Run template file structure and its properties]({{< ref multi-app-template.md >}})

@ -0,0 +1,145 @@
---
type: docs
title: "How to: Use the Multi-App Run template file"
linkTitle: "How to: Use the Multi-App Run template"
weight: 2000
description: Unpack the Multi-App Run template file and its properties
---

{{% alert title="Note" color="primary" %}}
Multi-App Run is currently a preview feature only supported in Linux/macOS.
{{% /alert %}}

The Multi-App Run template file is a YAML file that you can use to run multiple applications at once. In this guide, you'll learn how to:
- Use the multi-app template
- View started applications
- Stop the multi-app template
- Structure the multi-app template file

## Use the multi-app template

You can use the multi-app template file in one of the following two ways:

### Execute by providing a directory path

When you provide a directory path, the CLI will try to locate the Multi-App Run template file, named `dapr.yaml` by default, in that directory. If the file is not found, the CLI will return an error.

Execute the following CLI command to read the Multi-App Run template file, named `dapr.yaml` by default:

```cmd
# the template file needs to be called `dapr.yaml` by default if a directory path is given

dapr run -f <dir_path>
```

### Execute by providing a file path

If the Multi-App Run template file is named something other than `dapr.yaml`, then you can provide the relative or absolute file path to the command:

```cmd
dapr run -f ./path/to/<your-preferred-file-name>.yaml
```

## View the started applications

Once the multi-app template is running, you can view the started applications with the following command:

```cmd
dapr list
```

## Stop the multi-app template

Stop the multi-app run template anytime with either of the following commands:

```cmd
# the template file needs to be called `dapr.yaml` by default if a directory path is given

dapr stop -f
```

or:

```cmd
dapr stop -f ./path/to/<your-preferred-file-name>.yaml
```

## Template file structure

The Multi-App Run template file can include the following properties. Below is an example template showing two applications that are configured with some of the properties.

```yaml
version: 1
common: # optional section for variables shared across apps
  resourcesPath: ./app/components # any dapr resources to be shared across apps
  env: # any environment variable shared across apps
    - DEBUG: true
apps:
  - appID: webapp # optional
    appDirPath: .dapr/webapp/ # REQUIRED
    resourcesPath: .dapr/resources # (optional) can be default by convention
    configFilePath: .dapr/config.yaml # (optional) can be default by convention too, ignored if the file is not found.
    appProtocol: HTTP
    appPort: 8080
    appHealthCheckPath: "/healthz"
    command: ["python3", "app.py"]
  - appID: backend # optional
    appDirPath: .dapr/backend/ # REQUIRED
    appProtocol: GRPC
    appPort: 3000
    unixDomainSocket: "/tmp/test-socket"
    env:
      - DEBUG: false
    command: ["./backend"]
```

{{% alert title="Important" color="warning" %}}
The following rules apply to all the paths present in the template file:
- If the path is absolute, it is used as-is.
- All relative paths under the `common` section should be provided relative to the template file path.
- `appDirPath` under the `apps` section should be provided relative to the template file path.
- All other relative paths under the `apps` section should be provided relative to the `appDirPath`.
{{% /alert %}}

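These rules boil down to one resolution step applied with different base directories. The `resolve` helper below is an illustrative sketch, not Dapr CLI code.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolve applies the path rules from the callout: absolute paths are used
// as-is; relative paths are joined onto the given base directory (the
// template file's directory for common-section paths and appDirPath, or
// the resolved appDirPath for the remaining app-section paths).
func resolve(baseDir, p string) string {
	if filepath.IsAbs(p) {
		return p
	}
	return filepath.Clean(filepath.Join(baseDir, p))
}

func main() {
	templateDir := "/work/project"

	// appDirPath is relative to the template file's directory.
	appDir := resolve(templateDir, ".dapr/webapp/")
	fmt.Println(appDir) // /work/project/.dapr/webapp

	// Other app-section paths are relative to appDirPath.
	fmt.Println(resolve(appDir, ".dapr/resources"))

	// Common-section paths are relative to the template file's directory.
	fmt.Println(resolve(templateDir, "./app/components"))

	// Absolute paths are used as-is.
	fmt.Println(resolve(appDir, "/etc/dapr/config.yaml"))
}
```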
## Template properties

The properties for the Multi-App Run template align with the `dapr run` CLI flags, [listed in the CLI reference documentation]({{< ref "dapr-run.md#flags" >}}).

| Properties | Required | Details | Example |
|--------------------------|:--------:|---------|---------|
| `appDirPath` | Y | Path to your application code | `./webapp/`, `./backend/` |
| `appID` | N | Application's app ID. If not provided, it will be derived from `appDirPath` | `webapp`, `backend` |
| `resourcesPath` | N | Path to your Dapr resources. Can be default by convention; ignored if the directory isn't found | `./app/components`, `./webapp/components` |
| `configFilePath` | N | Path to your application's configuration file | `./webapp/config.yaml` |
| `appProtocol` | N | The protocol Dapr uses to talk to the application | `HTTP`, `GRPC` |
| `appPort` | N | The port your application is listening on | `8080`, `3000` |
| `daprHTTPPort` | N | Dapr HTTP port | |
| `daprGRPCPort` | N | Dapr gRPC port | |
| `daprInternalGRPCPort` | N | gRPC port for the Dapr internal API to listen on; used when parsing the value from a local DNS component | |
| `metricsPort` | N | The port that Dapr sends its metrics information to | |
| `unixDomainSocket` | N | Path to a Unix domain socket dir mount. If specified, communication with the Dapr sidecar uses Unix domain sockets for lower latency and greater throughput when compared to using TCP ports. Not available on Windows. | `/tmp/test-socket` |
| `profilePort` | N | The port for the profile server to listen on | |
| `enableProfiling` | N | Enable profiling via an HTTP endpoint | |
| `apiListenAddresses` | N | Dapr API listen addresses | |
| `logLevel` | N | The log verbosity | |
| `appMaxConcurrency` | N | The concurrency level of the application; default is unlimited | |
| `placementHostAddress` | N | Address of the placement service host | |
| `appSSL` | N | Enable HTTPS when Dapr invokes the application | |
| `daprHTTPMaxRequestSize` | N | Max size of the request body in MB | |
| `daprHTTPReadBufferSize` | N | Max size of the HTTP read buffer in KB. This also limits the maximum size of HTTP headers. The default is 4 KB | |
| `enableAppHealthCheck` | N | Enable the app health check on the application | `true`, `false` |
| `appHealthCheckPath` | N | Path to the health check file | `/healthz` |
| `appHealthProbeInterval` | N | Interval to probe for the health of the app in seconds | |
| `appHealthProbeTimeout` | N | Timeout for app health probes in milliseconds | |
| `appHealthThreshold` | N | Number of consecutive failures for the app to be considered unhealthy | |
| `enableApiLogging` | N | Enable the logging of all API calls from the application to Dapr | |
| `runtimePath` | N | Dapr runtime install path | |
| `env` | N | Map of environment variables; environment variables applied per application overwrite environment variables shared across applications | `DEBUG`, `DAPR_HOST_ADD` |

## Next steps

Watch [this video for an overview on Multi-App Run](https://youtu.be/s1p9MNl4VGo?t=2456):

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/s1p9MNl4VGo?start=2456" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

@ -29,4 +29,3 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| [Secrets Management]({{< ref secrets-quickstart.md >}}) | Securely fetch secrets. |
| [Configuration]({{< ref configuration-quickstart.md >}}) | Get configuration items and subscribe for configuration updates. |
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |

@ -42,9 +42,18 @@ spec:
| spec.ignoreErrors | N | Tells the Dapr sidecar to continue initialization if the component fails to load. Default is false | `false` |
| **spec.metadata** | - | **A key/value pair of component-specific configuration. See your component definition for fields** |

### Templated metadata values

Metadata values can contain template tags that are resolved on Dapr sidecar startup. The table below shows the current templating tags that can be used in components.

| Tag | Details | Example use case |
|-------------|--------------------------------------------------------------------|------------------|
| `{uuid}` | Randomly generated UUIDv4 | When you need a unique identifier in self-hosted mode; for example, multiple application instances consuming a [shared MQTT subscription]({{< ref "setup-mqtt3.md" >}}) |
| `{podName}` | Name of the pod containing the Dapr sidecar | Use to have a persisted behavior, where the ConsumerID does not change on restart when using StatefulSets in Kubernetes |
| `{namespace}` | Namespace where the Dapr sidecar resides combined with its appId | Using a shared `clientId` when multiple application instances consume a Kafka topic in Kubernetes |
| `{appID}` | The configured `appID` of the resource containing the Dapr sidecar | Having a shared `clientId` when multiple application instances consume a Kafka topic in self-hosted mode |
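As a sketch of how these tags behave, each `{tag}` is a plain token substituted once at startup. In the snippet below the replacement values are supplied by hand for illustration; in Dapr the sidecar fills them in itself.

```go
package main

import (
	"fmt"
	"strings"
)

// expandTags replaces the documented template tags in a metadata value.
// In Dapr the sidecar resolves these at startup; here the values are
// supplied by the caller purely for illustration.
func expandTags(value string, tags map[string]string) string {
	for tag, v := range tags {
		value = strings.ReplaceAll(value, "{"+tag+"}", v)
	}
	return value
}

func main() {
	tags := map[string]string{
		"uuid":      "00000000-0000-4000-8000-000000000000", // stand-in for a random UUIDv4
		"podName":   "myapp-0",
		"namespace": "production",
		"appID":     "myapp",
	}
	// Multiple tags can be combined in a single metadata value.
	fmt.Println(expandTags("consumer-{namespace}-{appID}-{uuid}", tags))
}
```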

Below is an example of using the `{uuid}` tag in an MQTT pubsub component. Note that multiple template tags can be used in a single metadata value.

```yaml
apiVersion: dapr.io/v1alpha1

@ -67,9 +76,6 @@ spec:
|
|||
value: "false"
|
||||
```
|
||||
|
||||
The consumerID metadata values can also contain a `{podName}` tag that is replaced with the Kubernetes pod's name when the Dapr sidecar starts up. This can be used to have a persisted behavior, where the ConsumerID does not change on restart when using StatefulSets in Kubernetes.
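As a sketch, a pub/sub component could set its `consumerID` with the `{podName}` tag like this (the component name and type below are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus   # illustrative name
spec:
  type: pubsub.kafka # illustrative component type
  version: v1
  metadata:
  - name: consumerID
    # Resolved to the pod's name on sidecar startup, so the value
    # stays stable across restarts when using a StatefulSet
    value: "{podName}"
```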
|
||||
|
||||
|
||||
## Further reading
|
||||
- [Components concept]({{< ref components-concept.md >}})
|
||||
- [Reference secrets in component definitions]({{< ref component-secrets.md >}})
|
||||
|
|
|
|||
|
|
@ -132,7 +132,7 @@ Follow the steps provided in the [Deploy Dapr on a Kubernetes cluster]({{< ref k
|
|||
|
||||
## Add the pluggable component container in your deployments
|
||||
|
||||
When running in Kubernetes mode, pluggable components are deployed as containers in the same pod as your application.
|
||||
Pluggable components are deployed as containers **in the same pod** as your application.
|
||||
|
||||
Since pluggable components are backed by [Unix Domain Sockets][uds], make the socket created by your pluggable component accessible to the Dapr runtime. Configure the deployment spec to:
|
||||
|
||||
|
|
@ -140,7 +140,7 @@ Since pluggable components are backed by [Unix Domain Sockets][uds], make the so
|
|||
2. Hint to Dapr the mounted Unix socket volume location
|
||||
3. Attach volume to your pluggable component container
|
||||
|
||||
Below is an example of a deployment that configures a pluggable component:
|
||||
In the following example, your configured pluggable component is deployed as a container within the same pod as your application container.
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
|
|
@ -167,17 +167,51 @@ spec:
|
|||
- name: dapr-unix-domain-socket
|
||||
emptyDir: {}
|
||||
containers:
|
||||
### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
|
||||
##
|
||||
### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
|
||||
### This is the pluggable component container.
|
||||
containers:
|
||||
### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
|
||||
- name: app
|
||||
image: YOUR_APP_IMAGE:YOUR_APP_IMAGE_VERSION
|
||||
### --------------------- YOUR PLUGGABLE COMPONENT CONTAINER GOES HERE -----------
|
||||
- name: component
|
||||
image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION
|
||||
volumeMounts: # required, the sockets volume mount
|
||||
- name: dapr-unix-domain-socket
|
||||
mountPath: /tmp/dapr-components-sockets
|
||||
image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION
|
||||
```
|
||||
|
||||
Alternatively, you can annotate your pods, telling Dapr which containers within that pod are pluggable components, like in the example below:
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
name: app
|
||||
labels:
|
||||
app: app
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: app
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: app
|
||||
annotations:
|
||||
        dapr.io/pluggable-components: "component" ## a comma-separated list of the pluggable component container names, e.g. "componentA,componentB"
|
||||
dapr.io/app-id: "my-app"
|
||||
dapr.io/enabled: "true"
|
||||
spec:
|
||||
containers:
|
||||
### --------------------- YOUR APPLICATION CONTAINER GOES HERE -----------
|
||||
- name: app
|
||||
image: YOUR_APP_IMAGE:YOUR_APP_IMAGE_VERSION
|
||||
### --------------------- YOUR PLUGGABLE COMPONENT CONTAINER GOES HERE -----------
|
||||
- name: component
|
||||
image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION
|
||||
```
|
||||
|
||||
Before applying the deployment, let's add one more configuration: the component spec.
|
||||
|
||||
## Define a component
|
||||
|
|
|
|||
|
|
@ -117,6 +117,7 @@ The `logging` section under the `Configuration` spec contains the following prop
|
|||
logging:
|
||||
apiLogging:
|
||||
enabled: false
|
||||
obfuscateURLs: false
|
||||
omitHealthChecks: false
|
||||
```
|
||||
|
||||
|
|
@ -125,6 +126,7 @@ The following table lists the properties for logging:
|
|||
| Property | Type | Description |
|
||||
|--------------|--------|-------------|
|
||||
| `apiLogging.enabled` | boolean | The default value for the `--enable-api-logging` flag for `daprd` (and the corresponding `dapr.io/enable-api-logging` annotation). The value set in the Configuration spec is used as the default, unless a `true` or `false` value is passed to each Dapr runtime. Default: `false`.
|
||||
| `apiLogging.obfuscateURLs` | boolean | When enabled, obfuscates the values of URLs in HTTP API logs, logging the abstract route name rather than the full path being invoked, which could contain Personally Identifiable Information (PII). Default: `false`.
|
||||
| `apiLogging.omitHealthChecks` | boolean | If `true`, calls to health check endpoints (e.g. `/v1.0/healthz`) are not logged when API logging is enabled. This is useful if those calls add a lot of noise to your logs. Default: `false`.
|
||||
|
||||
See [logging documentation]({{< ref "logs.md" >}}) for more information.
|
||||
|
|
|
|||
|
|
@ -72,6 +72,30 @@ spec:
|
|||
enabled: false
|
||||
```
|
||||
|
||||
## High cardinality metrics
|
||||
|
||||
Depending on your use case, some metrics emitted by Dapr might contain values that have a high cardinality. This might cause increased memory usage for the Dapr process/container and incur expensive egress costs in certain cloud environments. To mitigate this issue, you can set regular expressions for every metric exposed by the Dapr sidecar. [See a list of all Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md).
|
||||
|
||||
The following example shows how to apply a regular expression for the label `method` in the metric `dapr_runtime_service_invocation_req_sent_total`:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: daprConfig
|
||||
spec:
|
||||
metric:
|
||||
enabled: true
|
||||
rules:
|
||||
- name: dapr_runtime_service_invocation_req_sent_total
|
||||
labels:
|
||||
- name: method
|
||||
regex:
|
||||
"orders/": "orders/.+"
|
||||
```
|
||||
|
||||
When this configuration is applied, a recorded metric with the `method` label of `orders/a746dhsk293972nz` will be replaced with `orders/`.
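Conceptually, each rule maps label values matching the regular expression to the rule's key before the metric is recorded. The following Python sketch is only an illustration of that behavior (the rule format and function here are hypothetical, not Dapr's implementation):

```python
import re

# Illustrative rule format mirroring the Configuration spec above:
# {metric_name: {label_name: {replacement: regex}}}
RULES = {
    "dapr_runtime_service_invocation_req_sent_total": {
        "method": {"orders/": "orders/.+"},
    },
}

def apply_rules(metric, labels):
    """Return a copy of labels with high-cardinality values collapsed."""
    out = dict(labels)
    for label, rules in RULES.get(metric, {}).items():
        value = out.get(label)
        if value is None:
            continue
        for replacement, pattern in rules.items():
            # A value matching the regex is replaced by the rule's key
            if re.fullmatch(pattern, value):
                out[label] = replacement
                break
    return out
```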
|
||||
|
||||
## References
|
||||
|
||||
* [Howto: Run Prometheus locally]({{< ref prometheus.md >}})
|
||||
|
|
|
|||
|
|
@ -14,9 +14,35 @@ For CLI there is no explicit opt-in, just the version that this was first made a
|
|||
## Current preview features
|
||||
|
||||
| Feature | Description | Setting | Documentation | Version introduced |
|
||||
| ---------- |-------------|---------|---------------|-----------------|
|
||||
| **`--image-registry`** flag in Dapr CLI| In self-hosted mode, you can set this flag to specify any private registry to pull the container images required to install Dapr| N/A | [CLI `init` command reference]({{< ref "dapr-init.md#self-hosted-environment" >}}) | v1.7 |
|
||||
| **App Middleware** | Allow middleware components to be executed when making service-to-service calls | N/A | [App Middleware]({{< ref "middleware.md#app-middleware" >}}) | v1.9 |
|
||||
| **App health checks** | Allows configuring app health checks | `AppHealthCheck` | [App health checks]({{< ref "app-health.md" >}}) | v1.9 |
|
||||
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{< ref "components-concept#pluggable-components" >}})| v1.9 |
|
||||
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
|
||||
| **App Middleware** | Allow middleware components to be executed when making service-to-service calls | N/A | [App Middleware]({{<ref "middleware.md#app-middleware" >}}) | v1.9 |
|
||||
| **Streaming for HTTP service invocation** | Enables (partial) support for using streams in HTTP service invocation; see below for more details. | `ServiceInvocationStreaming` | [Details]({{< ref "support-preview-features.md#streaming-for-http-service-invocation" >}}) | v1.10 |
|
||||
| **App health checks** | Allows configuring app health checks | `AppHealthCheck` | [App health checks]({{<ref "app-health.md" >}}) | v1.9 |
|
||||
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{<ref "components-concept#pluggable-components" >}})| v1.9 |
|
||||
| **Multi-App Run** | Configure multiple Dapr applications from a single configuration file and run from a single command | `dapr run -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.10 |
|
||||
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
|
||||
|
||||
### Streaming for HTTP service invocation
|
||||
|
||||
Running Dapr with the `ServiceInvocationStreaming` feature flag enables partial support for handling data as a stream in HTTP service invocation. This can offer improvements in performance and memory utilization when using Dapr to invoke another service using HTTP with large request or response bodies.
|
||||
|
||||
The table below summarizes the current state of support for streaming in HTTP service invocation in Dapr, including the impact of enabling `ServiceInvocationStreaming`, in the example where "app A" is invoking "app B" using Dapr. There are six steps in the data flow, with various levels of support for handling data as a stream:
|
||||
|
||||
<img src="/images/service-invocation-simple.webp" width=600 alt="Diagram showing the steps of service invocation described in the table below" />
|
||||
|
||||
| Step | Handles data as a stream | Dapr 1.10 | Dapr 1.10 with<br/>`ServiceInvocationStreaming` |
|
||||
|:---:|---|:---:|:---:|
|
||||
| 1 | Request: "App A" to "Dapr sidecar A" | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="No">❌</span> |
|
||||
| 2 | Request: "Dapr sidecar A" to "Dapr sidecar B" | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 3 | Request: "Dapr sidecar B" to "App B" | <span role="img" aria-label="Yes">✅</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 4 | Response: "App B" to "Dapr sidecar B" | <span role="img" aria-label="Yes">✅</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 5 | Response: "Dapr sidecar B" to "Dapr sidecar A" | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
| 6 | Response: "Dapr sidecar A" to "App A" | <span role="img" aria-label="No">❌</span> | <span role="img" aria-label="Yes">✅</span> |
|
||||
|
||||
Important notes:
|
||||
|
||||
- `ServiceInvocationStreaming` needs to be applied on caller sidecars only.
|
||||
In the example above, streams are used for HTTP service invocation if `ServiceInvocationStreaming` is applied to the configuration of "app A" and its Dapr sidecar, regardless of whether the feature flag is enabled for "app B" and its sidecar.
|
||||
- When `ServiceInvocationStreaming` is enabled, you should make sure that all services your app invokes using Dapr ("app B") are updated to Dapr 1.10, even if `ServiceInvocationStreaming` is not enabled for those sidecars.
|
||||
Invoking an app using Dapr 1.9 or older is still possible, but those calls may fail if you have applied a Dapr Resiliency policy with retries enabled.
|
||||
|
||||
> Full support for streaming for HTTP service invocation will be completed in a future Dapr version.
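Like other preview features, `ServiceInvocationStreaming` is enabled through the `features` section of the Configuration resource applied to the caller's sidecar. A minimal sketch (the configuration name is illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: streamingconfig # illustrative name
spec:
  features:
  - name: ServiceInvocationStreaming
    enabled: true
```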
|
||||
|
|
|
|||
|
|
@ -40,11 +40,11 @@ $ dapr run --enable-api-logging -- node myapp.js
|
|||
ℹ️ Starting Dapr with id order-processor on port 56730
|
||||
✅ You are up and running! Both Dapr and your app logs will appear here.
|
||||
.....
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="POST /v1.0/state/{name}" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="POST /v1.0/state/mystate" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
== APP == INFO:root:Saving Order: {'orderId': '483'}
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="GET /v1.0/state/{name}/{key}" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="GET /v1.0/state/mystate/key123" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
== APP == INFO:root:Getting Order: {'orderId': '483'}
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="DELETE /v1.0/state/{name}" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="DELETE /v1.0/state/mystate" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
== APP == INFO:root:Deleted Order: {'orderId': '483'}
|
||||
INFO[0000] HTTP API Called app_id=order-processor instance=mypc method="PUT /v1.0/metadata/cliPID" scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
```
|
||||
|
|
@ -68,7 +68,7 @@ See the kubernetes API logs by executing the below command.
|
|||
kubectl logs <pod_name> daprd -n <name_space>
|
||||
```
|
||||
|
||||
The example below show `info` level API logging in Kubernetes.
|
||||
The example below shows `info` level API logging in Kubernetes (with [URL obfuscation](#obfuscate-urls-in-http-api-logging) enabled).
|
||||
|
||||
```bash
|
||||
time="2022-03-16T18:32:02.487041454Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
|
||||
|
|
@ -98,6 +98,22 @@ logging:
|
|||
enabled: true
|
||||
```
|
||||
|
||||
### Obfuscate URLs in HTTP API logging
|
||||
|
||||
By default, logs for API calls in the HTTP endpoints include the full URL being invoked (for example, `POST /v1.0/invoke/directory/method/user-123`), which could contain Personally Identifiable Information (PII).
|
||||
|
||||
To reduce the risk of PII being accidentally included in API logs (when enabled), Dapr can instead log the abstract route being invoked (for example, `POST /v1.0/invoke/{id}/method/{method:*}`). This can help ensure compliance with privacy regulations such as the GDPR.
|
||||
|
||||
To enable obfuscation of URLs in Dapr's HTTP API logs, set `logging.apiLogging.obfuscateURLs` to `true`. For example:
|
||||
|
||||
```yaml
|
||||
logging:
|
||||
apiLogging:
|
||||
obfuscateURLs: true
|
||||
```
|
||||
|
||||
Logs emitted by the Dapr gRPC APIs are not impacted by this configuration option, as they only include the name of the method invoked and no arguments.
|
||||
|
||||
### Omit health checks from API logging
|
||||
|
||||
When API logging is enabled, all calls to the Dapr API server are logged, including those to health check endpoints (e.g. `/v1.0/healthz`). Depending on your environment, this may generate multiple log lines per minute and could create unwanted noise.
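To suppress these entries while keeping API logging on, set `logging.apiLogging.omitHealthChecks` to `true` in the Configuration spec. For example:

```yaml
logging:
  apiLogging:
    enabled: true
    omitHealthChecks: true
```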
|
||||
|
|
|
|||
|
|
@ -29,11 +29,13 @@ dapr run [flags] [command]
|
|||
| `--app-protocol`, `-P` | | `http` | The protocol Dapr uses to talk to the application. Valid values are: `http` or `grpc` |
|
||||
| `--app-ssl` | | `false` | Enable https when Dapr invokes the application |
|
||||
| `--resources-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | The path to the components directory |
|
||||
| `--runtime-path` | | | Dapr runtime install path |
|
||||
| `--config`, `-c` | | Linux/Mac: `$HOME/.dapr/config.yaml` <br/>Windows: `%USERPROFILE%\.dapr\config.yaml` | Dapr configuration file |
|
||||
| `--dapr-grpc-port` | `DAPR_GRPC_PORT` | `50001` | The gRPC port for Dapr to listen on |
|
||||
| `--dapr-http-port` | `DAPR_HTTP_PORT` | `3500` | The HTTP port for Dapr to listen on |
|
||||
| `--enable-profiling` | | `false` | Enable "pprof" profiling via an HTTP endpoint |
|
||||
| `--help`, `-h` | | | Print the help message |
|
||||
| `--help`, `-h` | | | Print the help message |
|
||||
| `--run-file`, `-f` | | Linux/MacOS: `$HOME/.dapr/dapr.yaml` | Run multiple applications at once using a Multi-App Run template file. Currently in [alpha]({{< ref "support-preview-features.md" >}}) and only available on Linux/MacOS |
|
||||
| `--image` | | | Use a custom Docker image. Format is `repository/image` for Docker Hub, or `example.com/repository/image` for a custom registry. |
|
||||
| `--log-level` | | `info` | The log verbosity. Valid values are: `debug`, `info`, `warn`, `error`, `fatal`, or `panic` |
|
||||
| `--enable-api-logging` | | `false` | Enable the logging of all API calls from application to Dapr |
|
||||
|
|
|
|||
|
|
@ -21,10 +21,11 @@ dapr stop [flags]
|
|||
|
||||
### Flags
|
||||
|
||||
| Name | Environment Variable | Default | Description |
|
||||
| ---------------- | -------------------- | ------- | -------------------------------- |
|
||||
| `--app-id`, `-a` | `APP_ID` | | The application id to be stopped |
|
||||
| `--help`, `-h` | | | Print this help message |
|
||||
| Name | Environment Variable | Default | Description |
|
||||
| -------------------- | -------------------- | ------- | -------------------------------- |
|
||||
| `--app-id`, `-a` | `APP_ID` | | The application id to be stopped |
|
||||
| `--help`, `-h` | | | Print this help message |
|
||||
| `--run-file`, `-f` | | | Stop running multiple applications at once using a Multi-App Run template file. Currently in [alpha]({{< ref "support-preview-features.md" >}}) and only available on Linux/MacOS |
|
||||
|
||||
### Examples
|
||||
|
||||
|
|
|
|||
|
|
@ -9,9 +9,9 @@ aliases:
|
|||
|
||||
## Component format
|
||||
|
||||
To setup Azure Event Grid binding create a component of type `bindings.azure.eventgrid`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
To set up an Azure Event Grid binding, create a component of type `bindings.azure.eventgrid`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
|
||||
|
||||
See [this](https://docs.microsoft.com/azure/event-grid/) for Azure Event Grid documentation.
|
||||
See the [Azure Event Grid documentation](https://docs.microsoft.com/azure/event-grid/) for more details on the service itself.
|
||||
|
||||
```yml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -22,29 +22,30 @@ spec:
|
|||
type: bindings.azure.eventgrid
|
||||
version: v1
|
||||
metadata:
|
||||
# Required Input Binding Metadata
|
||||
- name: tenantId
|
||||
value: "[AzureTenantId]"
|
||||
- name: subscriptionId
|
||||
value: "[AzureSubscriptionId]"
|
||||
- name: clientId
|
||||
value: "[ClientId]"
|
||||
- name: clientSecret
|
||||
value: "[ClientSecret]"
|
||||
- name: subscriberEndpoint
|
||||
value: "[SubscriberEndpoint]"
|
||||
- name: handshakePort
|
||||
value: [HandshakePort]
|
||||
- name: scope
|
||||
value: "[Scope]"
|
||||
# Optional Input Binding Metadata
|
||||
- name: eventSubscriptionName
|
||||
value: "[EventSubscriptionName]"
|
||||
# Required Output Binding Metadata
|
||||
- name: accessKey
|
||||
value: "[AccessKey]"
|
||||
- name: topicEndpoint
|
||||
value: "[TopicEndpoint]"
|
||||
# Required Input Binding Metadata
|
||||
- name: azureTenantId
|
||||
value: "[AzureTenantId]"
|
||||
- name: azureSubscriptionId
|
||||
value: "[AzureSubscriptionId]"
|
||||
- name: azureClientId
|
||||
value: "[ClientId]"
|
||||
- name: azureClientSecret
|
||||
value: "[ClientSecret]"
|
||||
- name: subscriberEndpoint
|
||||
value: "[SubscriberEndpoint]"
|
||||
- name: handshakePort
|
||||
# Make sure to pass this as a string, with quotes around the value
|
||||
value: "[HandshakePort]"
|
||||
- name: scope
|
||||
value: "[Scope]"
|
||||
# Optional Input Binding Metadata
|
||||
- name: eventSubscriptionName
|
||||
value: "[EventSubscriptionName]"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -55,57 +56,99 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Binding support | Details | Example |
|
||||
|--------------------|:--------:|------------|-----|---------|
|
||||
| tenantId | Y | Input | The Azure tenant id in which this Event Grid Event Subscription should be created | `"tenentID"` |
|
||||
| subscriptionId | Y | Input | The Azure subscription id in which this Event Grid Event Subscription should be created | `"subscriptionId"` |
|
||||
| clientId | Y | Input | The client id that should be used by the binding to create or update the Event Grid Event Subscription | `"clientId"` |
|
||||
| clientSecret | Y | Input | The client id that should be used by the binding to create or update the Event Grid Event Subscription | `"clientSecret"` |
|
||||
| subscriberEndpoint | Y | Input | The https endpoint in which Event Grid will handshake and send Cloud Events. If you aren't re-writing URLs on ingress, it should be in the form of: `https://[YOUR HOSTNAME]/api/events` If testing on your local machine, you can use something like [ngrok](https://ngrok.com) to create a public endpoint. | `"https://[YOUR HOSTNAME]/api/events"` |
|
||||
| handshakePort | Y | Input | The container port that the input binding will listen on for handshakes and events | `"9000"` |
|
||||
| scope | Y | Input | The identifier of the resource to which the event subscription needs to be created or updated. See [here](#scope) for more details | `"/subscriptions/{subscriptionId}/"` |
|
||||
| eventSubscriptionName | N | Input | The name of the event subscription. Event subscription names must be between 3 and 64 characters in length and should use alphanumeric letters only | `"name"` |
|
||||
| accessKey | Y | Output | The Access Key to be used for publishing an Event Grid Event to a custom topic | `"accessKey"` |
|
||||
| topicEndpoint | Y | Output | The topic endpoint in which this output binding should publish events | `"topic-endpoint"` |
|
||||
| `accessKey` | Y | Output | The Access Key to be used for publishing an Event Grid Event to a custom topic | `"accessKey"` |
|
||||
| `topicEndpoint` | Y | Output | The topic endpoint in which this output binding should publish events | `"topic-endpoint"` |
|
||||
| `azureTenantId` | Y | Input | The Azure tenant ID of the Event Grid resource | `"tenantID"` |
|
||||
| `azureSubscriptionId` | Y | Input | The Azure subscription ID of the Event Grid resource | `"subscriptionId"` |
|
||||
| `azureClientId` | Y | Input | The client ID that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | `"clientId"` |
|
||||
| `azureClientSecret` | Y | Input | The client secret that should be used by the binding to create or update the Event Grid Event Subscription and to authenticate incoming messages | `"clientSecret"` |
|
||||
| `subscriberEndpoint` | Y | Input | The HTTPS endpoint of the webhook Event Grid sends events (formatted as Cloud Events) to. If you're not re-writing URLs on ingress, it should be in the form of: `"https://[YOUR HOSTNAME]/<path>"`<br/>If testing on your local machine, you can use something like [ngrok](https://ngrok.com) to create a public endpoint. | `"https://[YOUR HOSTNAME]/<path>"` |
|
||||
| `handshakePort` | Y | Input | The container port that the input binding listens on when receiving events on the webhook | `"9000"` |
|
||||
| `scope` | Y | Input | The identifier of the resource to which the event subscription needs to be created or updated. See the [scope section](#scope) for more details | `"/subscriptions/{subscriptionId}/"` |
|
||||
| `eventSubscriptionName` | N | Input | The name of the event subscription. Event subscription names must be between 3 and 64 characters long and should use alphanumeric letters only | `"name"` |
|
||||
|
||||
### Scope
|
||||
|
||||
Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, or a resource group, or a top level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
|
||||
- `'/subscriptions/{subscriptionId}/'` for a subscription
|
||||
- `'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}'` for a resource group
|
||||
- `'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}'` for a resource
|
||||
- `'/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}'` for an Event Grid topic
|
||||
Scope is the identifier of the resource to which the event subscription needs to be created or updated. The scope can be a subscription, a resource group, a top-level resource belonging to a resource provider namespace, or an Event Grid topic. For example:
|
||||
|
||||
- `/subscriptions/{subscriptionId}/` for a subscription
|
||||
- `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}` for a resource group
|
||||
- `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}` for a resource
|
||||
- `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}` for an Event Grid topic
|
||||
|
||||
> Values in braces {} should be replaced with actual values.
|
||||
|
||||
## Binding support
|
||||
|
||||
This component supports both **input and output** binding interfaces.
|
||||
|
||||
This component supports **output binding** with the following operations:
|
||||
|
||||
- `create`
|
||||
## Additional information
|
||||
- `create`: publishes a message on the Event Grid topic
|
||||
|
||||
Event Grid Binding creates an [event subscription](https://docs.microsoft.com/azure/event-grid/concepts#event-subscriptions) when Dapr initializes. Your Service Principal needs to have the RBAC permissions to enable this.
|
||||
## Azure AD credentials
|
||||
|
||||
The Azure Event Grid binding requires an Azure AD application and service principal for two reasons:
|
||||
|
||||
- Creating an [event subscription](https://docs.microsoft.com/azure/event-grid/concepts#event-subscriptions) when Dapr is started (and updating it if the Dapr configuration changes)
|
||||
- Authenticating messages delivered by Event Grid to your application.
|
||||
|
||||
Requirements:
|
||||
|
||||
- The [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli) installed.
|
||||
- [PowerShell 7](https://learn.microsoft.com/powershell/scripting/install/installing-powershell) installed.
|
||||
- [Az module for PowerShell](https://learn.microsoft.com/powershell/azure/install-az-ps) for PowerShell installed:
|
||||
`Install-Module Az -Scope CurrentUser -Repository PSGallery -Force`
|
||||
- [Microsoft.Graph module for PowerShell](https://learn.microsoft.com/powershell/microsoftgraph/installation) for PowerShell installed:
|
||||
`Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force`
|
||||
|
||||
For the first purpose, you will need to [create an Azure Service Principal](https://learn.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal). After creating it, take note of the Azure AD application's **clientID** (a UUID), and run the following script with the Azure CLI:
|
||||
|
||||
```bash
|
||||
# Set the client ID of the app you created
|
||||
CLIENT_ID="..."
|
||||
# Scope of the resource, usually in the format:
|
||||
# `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.EventGrid/topics/{topicName}`
|
||||
SCOPE="..."
|
||||
|
||||
# First ensure that Azure Resource Manager provider is registered for Event Grid
|
||||
az provider register --namespace Microsoft.EventGrid
|
||||
az provider show --namespace Microsoft.EventGrid --query "registrationState"
|
||||
az provider register --namespace "Microsoft.EventGrid"
|
||||
az provider show --namespace "Microsoft.EventGrid" --query "registrationState"
|
||||
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
|
||||
az role assignment create --assignee <clientId> --role "EventGrid EventSubscription Contributor" --scopes <scope>
|
||||
az role assignment create --assignee "$CLIENT_ID" --role "EventGrid EventSubscription Contributor" --scopes "$SCOPE"
|
||||
```
|
||||
|
||||
_Make sure to also add quotes around the `[HandshakePort]` value in your Event Grid binding component, because Kubernetes expects string values in the config._
|
||||
For the second purpose, first download a script:
|
||||
|
||||
```sh
|
||||
curl -LO "https://raw.githubusercontent.com/dapr/components-contrib/master/.github/infrastructure/conformance/azure/setup-eventgrid-sp.ps1"
|
||||
```
|
||||
|
||||
Then, **using PowerShell** (`pwsh`), run:
|
||||
|
||||
```powershell
|
||||
# Set the client ID of the app you created
|
||||
$clientId = "..."
|
||||
|
||||
# Authenticate with the Microsoft Graph
|
||||
# You may need to add the -TenantId flag to the next command
|
||||
Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
|
||||
./setup-eventgrid-sp.ps1 $clientId
|
||||
```
|
||||
|
||||
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Azure AD tenant (this is related to permissions on the Azure AD directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
|
||||
|
||||
### Testing locally
|
||||
|
||||
- Install [ngrok](https://ngrok.com/download)
|
||||
- Run locally using custom port `9000` for handshakes
|
||||
- Run locally using a custom port, for example `9000`, for handshakes
|
||||
|
||||
```bash
|
||||
# Using random port 9000 as an example
|
||||
# Using port 9000 as an example
|
||||
ngrok http --host-header=localhost 9000
|
||||
```
|
||||
|
||||
- Configure the ngrok's HTTPS endpoint and custom port to input binding metadata
|
||||
- Configure the ngrok's HTTPS endpoint and the custom port to input binding metadata
|
||||
- Run Dapr
|
||||
|
||||
```bash
|
||||
|
|
@ -115,19 +158,19 @@ dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
|
|||
|
||||
### Testing on Kubernetes

Azure Event Grid requires a valid HTTPS endpoint for custom webhooks. Self signed certificates won't do. In order to enable traffic from public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).

To get started, first create `dapr-annotations.yaml` for Dapr annotations
To get started, first create a `dapr-annotations.yaml` file for Dapr annotations:

```yaml
controller:
  podAnnotations:
    dapr.io/enabled: "true"
    dapr.io/app-id: "nginx-ingress"
    dapr.io/app-port: "80"
```

Then install NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations
Then install the NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
@@ -137,12 +180,13 @@ helm install nginx-ingress ingress-nginx/ingress-nginx -f ./dapr-annotations.yam
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'
```

If deploying to Azure Kubernetes Service, you can follow [the official MS documentation for rest of the steps](https://docs.microsoft.com/azure/aks/ingress-tls)
If deploying to Azure Kubernetes Service, you can follow [the official Microsoft documentation for the rest of the steps](https://docs.microsoft.com/azure/aks/ingress-tls):

- Add an A record to your DNS zone
- Install cert-manager
- Create a CA cluster issuer
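
As a sketch only, a CA cluster issuer for cert-manager might look like the following — the ACME server URL, email, and secret name are placeholder assumptions; follow the linked Microsoft guide for the authoritative steps:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # Placeholder values; substitute your own ACME server and contact email
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx
```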

Final step for enabling communication between Event Grid and Dapr is to define `http` and custom port to your app's service and an `ingress` in Kubernetes. This example uses .NET Core web api and Dapr default ports and custom port 9000 for handshakes.
The final step for enabling communication between Event Grid and Dapr is to define `http` and the custom port on your app's service and an `ingress` in Kubernetes. This example uses a .NET Core web API and Dapr default ports, plus custom port 9000 for handshakes.

```yaml
# dotnetwebapi.yaml
@@ -217,7 +261,7 @@ spec:
imagePullPolicy: Always
```

Deploy binding and app (including ingress) to Kubernetes
Deploy the binding and app (including ingress) to Kubernetes:

```bash
# Deploy Dapr components
@@ -226,7 +270,7 @@ kubectl apply -f eventgrid.yaml
kubectl apply -f dotnetwebapi.yaml
```

> **Note:** This manifest deploys everything to Kubernetes default namespace.
> **Note:** This manifest deploys everything to Kubernetes' default namespace.

#### Troubleshooting possible issues with Nginx controller

@@ -11,6 +11,9 @@ aliases:

To set up the Kafka binding, create a component of type `bindings.kafka`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration. For details on using `secretKeyRef`, see the guide on [how to reference secrets in components]({{< ref component-secrets.md >}}).

All component metadata field values can carry [templated metadata values]({{< ref "component-schema.md#templated-metadata-values" >}}), which are resolved on Dapr sidecar startup.
For example, you can choose to use `{namespace}` as the `consumerGroup`, to enable using the same `appId` in different namespaces using the same topics as described in [this article]({{< ref "howto-namespace.md#with-namespace-consumer-groups">}}).
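
As a sketch of just the relevant metadata entry (the full component schema is shown below), the templated consumer group would look like:

```yaml
  - name: consumerGroup
    value: "{namespace}"   # resolved to the sidecar's namespace at startup
```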

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
@@ -21,18 +21,19 @@ spec:
  type: bindings.mqtt3
  version: v1
  metadata:
  - name: url
    value: "tcp://[username][:password]@host.domain[:port]"
  - name: topic
    value: "mytopic"
  - name: qos
    value: 1
  - name: retain
    value: "false"
  - name: cleanSession
    value: "true"
  - name: backOffMaxRetries
    value: "0"
  - name: url
    value: "tcp://[username][:password]@host.domain[:port]"
  - name: topic
    value: "mytopic"
  - name: consumerID
    value: "myapp"
  # Optional
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: backOffMaxRetries
    value: "0"
```

{{% alert title="Warning" color="warning" %}}

@@ -43,20 +44,19 @@ The above example uses secrets as plain strings. It is recommended to use a secr

| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|---------|---------|---------|
| url | Y | Input/Output | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
| topic | Y | Input/Output | The topic to listen on or send events to. | `"mytopic"` |
| consumerID | N | Input/Output | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | `"myMqttClientApp"`
| qos | N | Input/Output | Indicates the Quality of Service Level (QoS) of the message. Defaults to `0`. |`1`
| retain | N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
| cleanSession | N | Input/Output | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"`. Defaults to `"true"`. | `"true"`, `"false"`
| caCert | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| backOffMaxRetries | N | Input | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`
| `url` | Y | Input/Output | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
| `topic` | Y | Input/Output | The topic to listen on or send events to. | `"mytopic"` |
| `consumerID` | Y | Input/Output | The client ID used to connect to the MQTT broker. | `"myMqttClientApp"`
| `retain` | N | Input/Output | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
| `cleanSession` | N | Input/Output | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"`. Defaults to `"false"`. | `"true"`, `"false"`
| `caCert` | Required for using TLS | Input/Output | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below
| `clientCert` | Required for using TLS | Input/Output | TLS client certificate in PEM format. Must be used with `clientKey`. | See example below
| `clientKey` | Required for using TLS | Input/Output | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | See example below
| `backOffMaxRetries` | N | Input | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`

### Communication using TLS

To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
To configure communication using TLS, ensure that the MQTT broker (e.g. emqx) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:

```yaml
apiVersion: dapr.io/v1alpha1
@@ -67,35 +67,41 @@ spec:
  type: bindings.mqtt3
  version: v1
  metadata:
  - name: url
    value: "ssl://host.domain[:port]"
  - name: topic
    value: "topic1"
  - name: qos
    value: 1
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: backoffMaxRetries
    value: "0"
  - name: caCert
    value: ${{ myLoadedCACert }}
  - name: clientCert
    value: ${{ myLoadedClientCert }}
  - name: clientKey
    secretKeyRef:
      name: myMqttClientKey
      key: myMqttClientKey
auth:
  secretStore: <SECRET_STORE_NAME>
  - name: url
    value: "ssl://host.domain[:port]"
  - name: topic
    value: "topic1"
  - name: consumerID
    value: "myapp"
  # TLS configuration
  - name: caCert
    value: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
  - name: clientCert
    value: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
  - name: clientKey
    secretKeyRef:
      name: myMqttClientKey
      key: myMqttClientKey
  # Optional
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
  - name: backoffMaxRetries
    value: "0"
```

Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
> Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
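
For example, pulling the CA certificate from a configured secret store might be sketched as follows (the secret names here are hypothetical):

```yaml
  - name: caCert
    secretKeyRef:
      name: myMqttCACert   # hypothetical secret holding the PEM-encoded CA certificate
      key: myMqttCACert
auth:
  secretStore: <SECRET_STORE_NAME>
```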

### Consuming a shared topic

When consuming a shared topic, each consumer must have a unique identifier. By default, the application ID is used to uniquely identify each consumer and publisher. In self-hosted mode, invoking each `dapr run` with a different application ID is sufficient to have them consume from the same shared topic. However, on Kubernetes, multiple instances of an application pod will share the same application ID, prohibiting all instances from consuming the same topic. To overcome this, configure the component's `consumerID` metadata with a `{uuid}` tag, which will give each instance a randomly generated `consumerID` value on start up. For example:
When consuming a shared topic, each consumer must have a unique identifier. If running multiple instances of an application, you can configure the component's `consumerID` metadata with a `{uuid}` tag, which gives each instance a randomly generated `consumerID` value on startup. For example:

```yaml
apiVersion: dapr.io/v1alpha1
@@ -113,12 +119,10 @@ spec:
    value: "tcp://admin:public@localhost:1883"
  - name: topic
    value: "topic1"
  - name: qos
    value: 1
  - name: retain
    value: "false"
  - name: cleanSession
    value: "false"
    value: "true"
  - name: backoffMaxRetries
    value: "0"
```

@@ -127,13 +131,15 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

> In this case, the value of the consumer ID is random every time Dapr restarts, so you should set `cleanSession` to `true` as well.
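
Putting those two settings together, the relevant metadata entries might be sketched as:

```yaml
  - name: consumerID
    value: "{uuid}"   # each instance gets a randomly generated client ID at startup
  - name: cleanSession
    value: "true"     # discard broker session state tied to the throwaway client ID
```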

## Binding support

This component supports both **input and output** binding interfaces.

This component supports **output binding** with the following operations:

- `create`
- `create`: publishes a new message

## Set topic per-request

@@ -23,25 +23,39 @@ spec:
  version: v1
  metadata:
  - name: connectionString # Required when not using Azure Authentication.
    value: "Endpoint=sb://************"
    value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
  - name: queueName
    value: queue1
  # - name: ttlInSeconds # Optional
  # value: 86400
  # - name: maxRetriableErrorsPerSec # Optional
  # - name: timeoutInSec # Optional
  # value: 60
  # - name: handlerTimeoutInSec # Optional
  # value: 60
  # - name: disableEntityManagement # Optional
  # value: "false"
  # - name: maxDeliveryCount # Optional
  # value: 3
  # - name: lockDurationInSec # Optional
  # value: 60
  # - name: lockRenewalInSec # Optional
  # value: 20
  # - name: maxActiveMessages # Optional
  # value: 10000
  # - name: maxConcurrentHandlers # Optional
  # value: 10
  # - name: defaultMessageTimeToLiveInSec # Optional
  # value: 10
  # - name: autoDeleteOnIdleInSec # Optional
  # value: 3600
  # - name: minConnectionRecoveryInSec # Optional
  # value: 2
  # - name: maxConnectionRecoveryInSec # Optional
  # value: 300
  # - name: maxActiveMessages # Optional
  # value: 1
  # - name: maxConcurrentHandlers # Optional
  # value: 1
  # - name: lockRenewalInSec # Optional
  # value: 20
  # - name: timeoutInSec # Optional
  # value: 60
  # - name: maxRetriableErrorsPerSec # Optional
  # value: 10
  # - name: publishMaxRetries # Optional
  # value: 5
  # - name: publishInitialRetryIntervalInMs # Optional
  # value: 500
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).

@@ -52,16 +66,26 @@ The above example uses secrets as plain strings. It is recommended to use a secr

| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-----------------|----------|---------|
| `connectionString` | Y | Input/Output | The Service Bus connection string. Required unless using Azure AD authentication. | `"Endpoint=sb://************"` |
| `namespaceName`| N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
| `queueName` | Y | Input/Output | The Service Bus queue name. Queue names are case-insensitive and will always be forced to lowercase. | `"queuename"` |
| `ttlInSeconds` | N | Output | Parameter to set the default message [time to live](https://docs.microsoft.com/azure/service-bus-messaging/message-expiration). If this parameter is omitted, messages will expire after 14 days. See [also](#specifying-a-ttl-per-message) | `86400` |
| `maxRetriableErrorsPerSec` | N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10` |
| `timeoutInSec` | N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. *Note that this option impacts network calls and it's unrelated to the TTL applies to messages*. Default: `60` | `60` |
| `namespaceName`| N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
| `disableEntityManagement` | N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| `lockDurationInSec` | N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
| `autoDeleteOnIdleInSec` | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
| `defaultMessageTimeToLiveInSec` | N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | `10`
| `maxDeliveryCount` | N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
| `minConnectionRecoveryInSec` | N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
| `maxConnectionRecoveryInSec` | N | Input/Output | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
| `maxActiveMessages` | N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1` | `1`
| `handlerTimeoutInSec`| N | Input | Timeout for invoking the app's handler. Default: `0` (no timeout) | `30`
| `minConnectionRecoveryInSec` | N | Input | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5` |
| `maxConnectionRecoveryInSec` | N | Input | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the binding waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600` |
| `maxActiveMessages` | N |Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1` | `1`
| `maxConcurrentHandlers` | N |Defines the maximum number of concurrent message handlers. Default: `1`. | `1`
| `lockRenewalInSec` | N |Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
| `timeoutInSec` | N | Input/Output | Timeout for all invocations to the Azure Service Bus endpoint, in seconds. *Note that this option impacts network calls and is unrelated to the TTL applied to messages*. Default: `60` | `60` |
| `lockRenewalInSec` | N | Input | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
| `maxActiveMessages` | N | Input | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1` | `2000`
| `maxConcurrentHandlers` | N | Input | Defines the maximum number of concurrent message handlers; set to `0` for unlimited. Default: `1` | `10`
| `maxRetriableErrorsPerSec` | N | Input | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10`
| `publishMaxRetries` | N | Output | The max number of retries for when Azure Service Bus responds with "too busy" in order to throttle messages. Default: `5` | `5`
| `publishInitialRetryIntervalInMs` | N | Output | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`

### Azure Active Directory (AAD) authentication

@@ -100,15 +124,13 @@ This component supports both **input and output** binding interfaces.

This component supports **output binding** with the following operations:

- `create`
- `create`: publishes a message to the specified queue

## Specifying a TTL per message

Time to live can be defined on queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at queue level.
Time to live can be defined on a per-queue level (as illustrated above) or at the message level. The value defined at message level overwrites any value set at the queue level.

To set time to live at message level use the `metadata` section in the request body during the binding invocation.

The field name is `ttlInSeconds`.
To set time to live at message level use the `metadata` section in the request body during the binding invocation: the field name is `ttlInSeconds`.

{{< tabs "Linux">}}

@@ -31,8 +31,12 @@ spec:
  # value: "60"
  # - name: decodeBase64
  # value: "false"
  # - name: encodeBase64
  # value: "false"
  # - name: endpoint
  # value: "http://127.0.0.1:10001"
  # - name: visibilityTimeout
  # value: "30s"
```

{{% alert title="Warning" color="warning" %}}
@@ -47,8 +51,10 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `accountKey` | Y* | Input/Output | The access key of the Azure Storage account. Only required when not using Azure AD authentication. | `"access-key"` |
| `queueName` | Y | Input/Output | The name of the Azure Storage queue | `"myqueue"` |
| `ttlInSeconds` | N | Output | Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See [also](#specifying-a-ttl-per-message) | `"60"` |
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Storage Queues. (In case of saving a file with binary content). Defaults to `false` | `true`, `false` |
| `encodeBase64` | N | Output | If enabled base64 encodes the data payload before uploading to Azure storage queues. Default `false`. | `true`, `false` |
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10001"` or `"https://accountName.queue.example.com"` |
| `visibilityTimeout` | N | Input | Allows setting a custom queue visibility timeout to avoid immediate retrying of recently failed messages. Defaults to 30 seconds. | `"100s"` |

### Azure Active Directory (Azure AD) authentication

@@ -7,6 +7,10 @@ aliases:
- "/operations/components/setup-bindings/supported-bindings/twitter/"
---

{{% alert title="Deprecation notice" color="warning" %}}
The Twitter binding component has been deprecated and will be removed in a future release. See [this GitHub issue](https://github.com/dapr/components-contrib/issues/2503) for details.
{{% /alert %}}

## Component format

To set up the Twitter binding, create a component of type `bindings.twitter`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.

@@ -7,7 +7,7 @@ aliases:
- /developing-applications/middleware/supported-middleware/middleware-opa/
---

The Open Policy Agent (OPA) [HTTP middleware]({{< ref middleware.md >}}) applys [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
The Open Policy Agent (OPA) [HTTP middleware]({{< ref middleware.md >}}) applies [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.

## Component format

@@ -30,6 +30,11 @@ spec:
  - name: defaultStatus
    value: 403

  # `readBody` controls whether the middleware reads the entire request body in-memory and makes it
  # available for policy decisions.
  - name: readBody
    value: "false"

  # `rego` is the open policy agent policy to evaluate. required
  # The policy package must be http and the policy must set data.http.allow
  - name: rego
@@ -66,15 +71,16 @@ spec:
}
```

You can prototype and experiment with policies using the [official opa playground](https://play.openpolicyagent.org). For example, [you can find the example policy above here](https://play.openpolicyagent.org/p/oRIDSo6OwE).
You can prototype and experiment with policies using the [official OPA playground](https://play.openpolicyagent.org). For example, [you can find the example policy above here](https://play.openpolicyagent.org/p/oRIDSo6OwE).

## Spec metadata fields

| Field | Details | Example |
|--------|---------|---------|
| rego | The Rego policy language | See above |
| defaultStatus | The status code to return for denied responses | `"https://accounts.google.com"`, `"https://login.salesforce.com"`
| includedHeaders | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
| `rego` | The Rego policy language | See above |
| `defaultStatus` | The status code to return for denied responses | `403`
| `readBody` | If set to `true` (the default value), the body of each request is read fully in-memory and can be used to make policy decisions. If your policy doesn't depend on inspecting the request body, consider disabling this (setting to `false`) for significant performance improvements. | `"false"`
| `includedHeaders` | A comma-separated set of case-insensitive headers to include in the request input. Request headers are not passed to the policy by default. Include to receive incoming request headers in the input | `"x-my-custom-header, x-jwt-header"`
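
As a sketch reusing the component format above, forwarding selected headers to the policy input would add a metadata entry such as:

```yaml
  - name: includedHeaders
    value: "x-my-custom-header, x-jwt-header"   # header names here are illustrative
```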
## Dapr configuration

@@ -193,6 +199,7 @@ allow = { "allow": true, "additional_headers": { "X-JWT-Payload": payload } } {
```

### Result structure

```go
type Result bool
// or
@@ -11,6 +11,9 @@ aliases:

To set up Apache Kafka pubsub, create a component of type `pubsub.kafka`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration. For details on using `secretKeyRef`, see the guide on [how to reference secrets in components]({{< ref component-secrets.md >}}).

All component metadata field values can carry [templated metadata values]({{< ref "component-schema.md#templated-metadata-values" >}}), which are resolved on Dapr sidecar startup.
For example, you can choose to use `{namespace}` as the `consumerGroup` to enable using the same `appId` in different namespaces using the same topics as described in [this article]({{< ref "howto-namespace.md#with-namespace-consumer-groups">}}).

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
@@ -23,7 +26,7 @@ spec:
  - name: brokers # Required. Kafka broker connection setting
    value: "dapr-kafka.myapp.svc.cluster.local:9092"
  - name: consumerGroup # Optional. Used for input bindings.
    value: "group1"
    value: "{namespace}"
  - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
    value: "my-dapr-app-id"
  - name: authType # Required.
@@ -108,7 +111,7 @@ spec:
    value: 200ms
  - name: version # Optional.
    value: 0.10.2.0
  - name: disableTls
    value: "true"
```

@ -198,13 +201,13 @@ spec:
|
|||
|
||||
#### OAuth2 or OpenID Connect
|
||||
|
||||
Setting `authType` to `oidc` enables SASL authentication via the **OAUTHBEARER** mechanism. This supports specifying a bearer token from an external OAuth2 or [OIDC](https://en.wikipedia.org/wiki/OpenID) identity provider. Currently, only the **client_credentials** grant is supported.
|
||||
Setting `authType` to `oidc` enables SASL authentication via the **OAUTHBEARER** mechanism. This supports specifying a bearer token from an external OAuth2 or [OIDC](https://en.wikipedia.org/wiki/OpenID) identity provider. Currently, only the **client_credentials** grant is supported.
|
||||
|
||||
Configure `oidcTokenEndpoint` to the full URL for the identity provider access token endpoint.
|
||||
Configure `oidcTokenEndpoint` to the full URL for the identity provider access token endpoint.
|
||||
|
||||
Set `oidcClientID` and `oidcClientSecret` to the client credentials provisioned in the identity provider.
|
||||
Set `oidcClientID` and `oidcClientSecret` to the client credentials provisioned in the identity provider.
|
||||
|
||||
If `caCert` is specified in the component configuration, the certificate is appended to the system CA trust for verifying the identity provider certificate. Similarly, if `skipVerify` is specified in the component configuration, verification will also be skipped when accessing the identity provider.
|
||||
If `caCert` is specified in the component configuration, the certificate is appended to the system CA trust for verifying the identity provider certificate. Similarly, if `skipVerify` is specified in the component configuration, verification will also be skipped when accessing the identity provider.
|
||||
|
||||
By default, the only scope requested for the token is `openid`; it is **highly** recommended that additional scopes be specified via `oidcScopes` in a comma-separated list and validated by the Kafka broker. If additional scopes are not used to narrow the validity of the access token,
|
||||
a compromised Kafka broker could replay the token to access other services as the Dapr clientID.
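For illustration, here is a minimal sketch of the token request an OAUTHBEARER client makes under the **client_credentials** grant. The form-body shape is standard OAuth2, not Dapr's internal implementation, and the scope values are hypothetical:

```python
from urllib.parse import urlencode

def build_client_credentials_request(client_id, client_secret, scopes=("openid",)):
    # Standard OAuth2 client_credentials form body sent to the
    # oidcTokenEndpoint; extra scopes narrow the token's validity.
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": " ".join(scopes),
    })

body = build_client_credentials_request("dapr-kafka", "s3cr3t", ("openid", "kafka:read"))
```

The broker-side validation of the requested scopes is what prevents a replayed token from being useful elsewhere.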
|
||||
|
|
@ -293,6 +296,19 @@ auth:
|
|||
secretStore: <SECRET_STORE_NAME>
|
||||
```
|
||||
|
||||
## Sending and receiving multiple messages
|
||||
|
||||
The Apache Kafka component supports sending and receiving multiple messages in a single operation using the bulk pub/sub API.
|
||||
|
||||
### Configuring bulk subscribe
|
||||
|
||||
When subscribing to a topic, you can configure `bulkSubscribe` options. Refer to [Subscription methods]({{< ref subscription-methods >}}) for more details. Learn more about [the bulk subscribe API]({{< ref pubsub-bulk.md >}}).
|
||||
|
||||
| Configuration | Default |
|
||||
|----------|---------|
|
||||
| `maxBulkAwaitDurationMs` | `10000` (10s) |
|
||||
| `maxBulkSubCount` | `80` |
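Conceptually, the two options above act like a batcher that flushes on whichever limit is hit first. This sketch is only an illustration of that interaction, not Dapr's actual implementation:

```python
import time

def batch_messages(messages, max_count=80, max_await_ms=10_000, now=time.monotonic):
    # Flush a batch when either maxBulkSubCount messages have accumulated
    # or maxBulkAwaitDurationMs has elapsed since the batch was started.
    batch, batches, deadline = [], [], now() + max_await_ms / 1000
    for msg in messages:
        batch.append(msg)
        if len(batch) >= max_count or now() >= deadline:
            batches.append(batch)
            batch, deadline = [], now() + max_await_ms / 1000
    if batch:
        batches.append(batch)  # deliver any trailing partial batch
    return batches
```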
|
||||
|
||||
## Per-call metadata fields
|
||||
|
||||
### Partition Key
|
||||
|
|
|
|||
|
|
@ -1,14 +1,18 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Azure Service Bus"
|
||||
linkTitle: "Azure Service Bus"
|
||||
description: "Detailed documentation on the Azure Service Bus pubsub component"
|
||||
title: "Azure Service Bus Queues"
|
||||
linkTitle: "Azure Service Bus Queues"
|
||||
description: "Detailed documentation on the Azure Service Bus Queues pubsub component"
|
||||
aliases:
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus/"
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus-queues/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
To set up Azure Service Bus pubsub, create a component of type `pubsub.azure.servicebus`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
To set up Azure Service Bus Queues pubsub, create a component of type `pubsub.azure.servicebus.queues`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
> This component uses queues on Azure Service Bus; see the official documentation for the differences between [topics and queues](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-queues-topics-subscriptions).
|
||||
> For using topics, see the [Azure Service Bus Topics pubsub component]({{< ref "setup-azure-servicebus-topics" >}}).
|
||||
|
||||
### Connection String Authentication
|
||||
|
||||
|
|
@ -18,13 +22,12 @@ kind: Component
|
|||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus
|
||||
type: pubsub.azure.servicebus.queues
|
||||
version: v1
|
||||
metadata:
|
||||
- name: connectionString # Required when not using Azure Authentication.
|
||||
# Required when not using Azure AD Authentication
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
# - name: consumerID # Optional: defaults to the app's own ID
|
||||
# value: "{identifier}"
|
||||
# - name: timeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: handlerTimeoutInSec # Optional
|
||||
|
|
@ -53,12 +56,10 @@ spec:
|
|||
# value: 10
|
||||
# - name: publishMaxRetries # Optional
|
||||
# value: 5
|
||||
# - name: publishInitialRetryInternalInMs # Optional
|
||||
# - name: publishInitialRetryIntervalInMs # Optional
|
||||
# value: 500
|
||||
```
|
||||
|
||||
> __NOTE:__ The above settings are shared across all topics that use this component.
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
|
@ -67,28 +68,27 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| `connectionString` | Y | Shared access policy connection-string for the Service Bus. Required unless using Azure AD authentication. | "`Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}`"
|
||||
| `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Azure AD authentication. | See example above
|
||||
| `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
|
||||
| `consumerID` | N | Consumer ID (a.k.a. consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; i.e., a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. |
|
||||
| `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30`
|
||||
| `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30`
|
||||
| `disableEntityManagement` | N | When set to true, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `maxDeliveryCount` | N |Defines the number of attempts the server will make to deliver a message. Default set by server| `10`
|
||||
| `lockDurationInSec` | N |Defines the length in seconds that a message will be locked for before expiring. Default set by server | `30`
|
||||
| `lockRenewalInSec` | N |Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N |Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `10000` | `2000`
|
||||
| `maxConcurrentHandlers` | N |Defines the maximum number of concurrent message handlers. | `10`
|
||||
| `defaultMessageTimeToLiveInSec` | N |Default message time to live. | `10`
|
||||
| `autoDeleteOnIdleInSec` | N |Time in seconds to wait before auto deleting idle subscriptions. | `3600`
|
||||
| `lockRenewalInSec` | N | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1000` | `2000`
|
||||
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
|
||||
| `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
|
||||
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
|
||||
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
|
||||
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
|
||||
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
|
||||
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
|
||||
| `maxRetriableErrorsPerSec` | N | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10`
|
||||
| `publishMaxRetries` | N | The maximum number of publish retries when Azure Service Bus responds with "too busy" to throttle messages. Default: `5` | `5`
|
||||
| `publishInitialRetryInternalInMs` | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttle messages. Defaults: `500` | `500`
|
||||
| `publishInitialRetryIntervalInMs` | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`
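As an illustration of how the two publish retry settings combine, here is a sketch of the resulting backoff schedule. The doubling factor is an assumption for illustration; only the initial interval and the retry count are actual component settings:

```python
def publish_backoff_schedule(max_retries=5, initial_interval_ms=500, factor=2):
    # One delay per retry attempt after Service Bus replies "too busy";
    # each delay grows exponentially from the initial interval.
    return [initial_interval_ms * factor ** i for i in range(max_retries)]
```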
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
|
||||
The Azure Service Bus pubsub component supports authentication using all Azure Active Directory mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
The Azure Service Bus Queues pubsub component supports authentication using all Azure Active Directory mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
#### Example Configuration
|
||||
|
||||
|
|
@ -98,7 +98,7 @@ kind: Component
|
|||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus
|
||||
type: pubsub.azure.servicebus.queues
|
||||
version: v1
|
||||
metadata:
|
||||
- name: namespaceName
|
||||
|
|
@ -132,7 +132,7 @@ To set Azure Service Bus metadata when sending a message, set the query paramete
|
|||
- `metadata.ScheduledEnqueueTimeUtc`
|
||||
- `metadata.ReplyToSessionId`
|
||||
|
||||
> **NOTE:** The `metadata.MessageId` property does not set the `id` property of the cloud event and should be treated in isolation.
|
||||
> **Note:** The `metadata.MessageId` property does not set the `id` property of the cloud event returned by Dapr and should be treated in isolation.
|
||||
|
||||
### Receiving a message with metadata
|
||||
|
||||
|
|
@ -147,11 +147,11 @@ In addition to the [settable metadata listed above](#sending-a-message-with-meta
|
|||
|
||||
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
|
||||
|
||||
> Note that all times are populated by the server and are not adjusted for clock skews.
|
||||
> Note: all times are populated by the server and are not adjusted for clock skews.
|
||||
|
||||
## Create an Azure Service Bus
|
||||
## Create an Azure Service Bus broker for queues
|
||||
|
||||
Follow the instructions [here](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
|
||||
Follow the instructions [here](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-portal) on setting up Azure Service Bus Queues.
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
@ -0,0 +1,166 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Azure Service Bus Topics"
|
||||
linkTitle: "Azure Service Bus Topics"
|
||||
description: "Detailed documentation on the Azure Service Bus Topics pubsub component"
|
||||
aliases:
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus-topics/"
|
||||
- "/operations/components/setup-pubsub/supported-pubsub/setup-azure-servicebus/"
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up Azure Service Bus Topics pubsub, create a component of type `pubsub.azure.servicebus.topics`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
|
||||
|
||||
> This component uses topics on Azure Service Bus; see the official documentation for the differences between [topics and queues](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-queues-topics-subscriptions).
|
||||
> For using queues, see the [Azure Service Bus Queues pubsub component]({{< ref "setup-azure-servicebus-queues" >}}).
|
||||
|
||||
### Connection String Authentication
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus.topics
|
||||
version: v1
|
||||
metadata:
|
||||
# Required when not using Azure AD Authentication
|
||||
- name: connectionString
|
||||
value: "Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}"
|
||||
# - name: consumerID # Optional: defaults to the app's own ID
|
||||
# value: "{identifier}"
|
||||
# - name: timeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: handlerTimeoutInSec # Optional
|
||||
# value: 60
|
||||
# - name: disableEntityManagement # Optional
|
||||
# value: "false"
|
||||
# - name: maxDeliveryCount # Optional
|
||||
# value: 3
|
||||
# - name: lockDurationInSec # Optional
|
||||
# value: 60
|
||||
# - name: lockRenewalInSec # Optional
|
||||
# value: 20
|
||||
# - name: maxActiveMessages # Optional
|
||||
# value: 10000
|
||||
# - name: maxConcurrentHandlers # Optional
|
||||
# value: 10
|
||||
# - name: defaultMessageTimeToLiveInSec # Optional
|
||||
# value: 10
|
||||
# - name: autoDeleteOnIdleInSec # Optional
|
||||
# value: 3600
|
||||
# - name: minConnectionRecoveryInSec # Optional
|
||||
# value: 2
|
||||
# - name: maxConnectionRecoveryInSec # Optional
|
||||
# value: 300
|
||||
# - name: maxRetriableErrorsPerSec # Optional
|
||||
# value: 10
|
||||
# - name: publishMaxRetries # Optional
|
||||
# value: 5
|
||||
# - name: publishInitialRetryIntervalInMs # Optional
|
||||
# value: 500
|
||||
```
|
||||
|
||||
> __NOTE:__ The above settings are shared across all topics that use this component.
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| `connectionString` | Y | Shared access policy connection string for the Service Bus. Required unless using Azure AD authentication. | See example above
|
||||
| `namespaceName`| N | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Azure AD authentication. | `"namespace.servicebus.windows.net"` |
|
||||
| `consumerID` | N | Consumer ID (a.k.a. consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; i.e., a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. |
|
||||
| `timeoutInSec` | N | Timeout for sending messages and for management operations. Default: `60` |`30`
|
||||
| `handlerTimeoutInSec`| N | Timeout for invoking the app's handler. Default: `60` | `30`
|
||||
| `lockRenewalInSec` | N | Defines the frequency at which buffered message locks will be renewed. Default: `20`. | `20`
|
||||
| `maxActiveMessages` | N | Defines the maximum number of messages to be processing or in the buffer at once. This should be at least as big as the maximum concurrent handlers. Default: `1000` | `2000`
|
||||
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
|
||||
| `disableEntityManagement` | N | When set to true, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
|
||||
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
|
||||
| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
|
||||
| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `10`
|
||||
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
|
||||
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
|
||||
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
|
||||
| `maxRetriableErrorsPerSec` | N | Maximum number of retriable errors that are processed per second. If a message fails to be processed with a retriable error, the component adds a delay before it starts processing another message, to avoid immediately re-processing messages that have failed. Default: `10` | `10`
|
||||
| `publishMaxRetries` | N | The maximum number of publish retries when Azure Service Bus responds with "too busy" to throttle messages. Default: `5` | `5`
|
||||
| `publishInitialRetryIntervalInMs` | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`
|
||||
|
||||
### Azure Active Directory (AAD) authentication
|
||||
|
||||
The Azure Service Bus Topics pubsub component supports authentication using all Azure Active Directory mechanisms, including Managed Identities. For further information and the relevant component metadata fields to provide depending on the choice of AAD authentication mechanism, see the [docs for authenticating to Azure]({{< ref authenticating-azure.md >}}).
|
||||
|
||||
#### Example Configuration
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: servicebus-pubsub
|
||||
spec:
|
||||
type: pubsub.azure.servicebus.topics
|
||||
version: v1
|
||||
metadata:
|
||||
- name: namespaceName
|
||||
# Required when using Azure Authentication.
|
||||
# Must be a fully-qualified domain name
|
||||
value: "servicebusnamespace.servicebus.windows.net"
|
||||
- name: azureTenantId
|
||||
value: "***"
|
||||
- name: azureClientId
|
||||
value: "***"
|
||||
- name: azureClientSecret
|
||||
value: "***"
|
||||
```
|
||||
|
||||
## Message metadata
|
||||
|
||||
Azure Service Bus messages extend the Dapr message format with additional contextual metadata. Some metadata fields are set by Azure Service Bus itself (read-only) and others can be set by the client when publishing a message.
|
||||
|
||||
### Sending a message with metadata
|
||||
|
||||
To set Azure Service Bus metadata when sending a message, set the query parameters on the HTTP request or the gRPC metadata as documented [here](https://docs.dapr.io/reference/api/pubsub_api/#metadata).
|
||||
|
||||
- `metadata.MessageId`
|
||||
- `metadata.CorrelationId`
|
||||
- `metadata.SessionId`
|
||||
- `metadata.Label`
|
||||
- `metadata.ReplyTo`
|
||||
- `metadata.PartitionKey`
|
||||
- `metadata.To`
|
||||
- `metadata.ContentType`
|
||||
- `metadata.ScheduledEnqueueTimeUtc`
|
||||
- `metadata.ReplyToSessionId`
|
||||
|
||||
> **Note:** The `metadata.MessageId` property does not set the `id` property of the cloud event returned by Dapr and should be treated in isolation.
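For example, the settable metadata above can be attached as `metadata.*` query parameters on the Dapr publish endpoint. This sketch only builds the URL; it assumes the default sidecar HTTP port 3500 and a hypothetical `orders` topic:

```python
from urllib.parse import urlencode

def publish_url(pubsub, topic, metadata, base="http://localhost:3500"):
    # Azure Service Bus metadata is passed to the Dapr pub/sub API as
    # metadata.<Name> query parameters on the publish request.
    qs = urlencode({f"metadata.{k}": v for k, v in metadata.items()})
    return f"{base}/v1.0/publish/{pubsub}/{topic}?{qs}"

url = publish_url("servicebus-pubsub", "orders",
                  {"MessageId": "msg-1", "SessionId": "session-a"})
```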
|
||||
|
||||
### Receiving a message with metadata
|
||||
|
||||
When Dapr calls your application, it will attach Azure Service Bus message metadata to the request using either HTTP headers or gRPC metadata.
|
||||
In addition to the [settable metadata listed above](#sending-a-message-with-metadata), you can also access the following read-only message metadata.
|
||||
|
||||
- `metadata.DeliveryCount`
|
||||
- `metadata.LockedUntilUtc`
|
||||
- `metadata.LockToken`
|
||||
- `metadata.EnqueuedTimeUtc`
|
||||
- `metadata.SequenceNumber`
|
||||
|
||||
To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
|
||||
|
||||
> Note: all times are populated by the server and are not adjusted for clock skews.
|
||||
|
||||
## Create an Azure Service Bus broker for topics
|
||||
|
||||
Follow the instructions [here](https://docs.microsoft.com/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
|
||||
|
||||
## Related links
|
||||
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Pub/Sub building block]({{< ref pubsub >}})
|
||||
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
|
||||
|
|
@ -21,16 +21,15 @@ spec:
|
|||
type: pubsub.mqtt3
|
||||
version: v1
|
||||
metadata:
|
||||
- name: url
|
||||
value: "tcp://[username][:password]@host.domain[:port]"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backOffMaxRetries
|
||||
value: "0"
|
||||
- name: url
|
||||
value: "tcp://[username][:password]@host.domain[:port]"
|
||||
# Optional
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: qos
|
||||
value: "1"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -41,18 +40,18 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
|
||||
| Field | Required | Details | Example |
|
||||
|--------------------|:--------:|---------|---------|
|
||||
| url | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
|
||||
| consumerID | N | The client ID used to connect to the MQTT broker for the consumer connection. Defaults to the Dapr app ID.<br>Note: if `producerID` is not set, `-consumer` is appended to this value for the consumer connection | `"myMqttClientApp"`
|
||||
| producerID | N | The client ID used to connect to the MQTT broker for the producer connection. Defaults to `{consumerID}-producer`. | `"myMqttProducerApp"`
|
||||
| qos | N | Indicates the Quality of Service Level (QoS) of the message ([more info](https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/)). Defaults to `1`. |`0`, `1`, `2`
|
||||
| retain | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| cleanSession | N | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"` ([more info](http://www.steves-internet-guide.com/mqtt-clean-sessions-example/)). Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
|
||||
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
|
||||
| `url` | Y | Address of the MQTT broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`tcp://`** URI scheme for non-TLS communication. <br> Use the **`ssl://`** URI scheme for TLS communication. | `"tcp://[username][:password]@host.domain[:port]"`
|
||||
| `consumerID` | N | The client ID used to connect to the MQTT broker. Defaults to the Dapr app ID. | `"myMqttClientApp"`
|
||||
| `retain` | N | Defines whether the message is saved by the broker as the last known good value for a specified topic. Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| `cleanSession` | N | Sets the `clean_session` flag in the connection message to the MQTT broker if `"true"` ([more info](http://www.steves-internet-guide.com/mqtt-clean-sessions-example/)). Defaults to `"false"`. | `"true"`, `"false"`
|
||||
| `caCert` | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | See example below
|
||||
| `clientCert` | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | See example below
|
||||
| `clientKey` | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | See example below
|
||||
| `qos` | N | Indicates the Quality of Service Level (QoS) of the message ([more info](https://www.hivemq.com/blog/mqtt-essentials-part-6-mqtt-quality-of-service-levels/)). Defaults to `1`. |`0`, `1`, `2`
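As a quick illustration of the `url` field's format, the broker address can be parsed with standard URL tooling; the scheme is what selects between TLS (`ssl://`) and non-TLS (`tcp://`):

```python
from urllib.parse import urlparse

# Parse a broker URL in the form used by the `url` metadata field.
u = urlparse("tcp://admin:public@localhost:1883")
use_tls = u.scheme == "ssl"  # tcp:// means plain, non-TLS communication
```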
|
||||
|
||||
### Communication using TLS
|
||||
|
||||
To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
|
||||
To configure communication using TLS, ensure that the MQTT broker (e.g. EMQX) is configured to support certificates, and provide the `caCert`, `clientCert`, and `clientKey` metadata fields in the component configuration. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
|
|
@ -63,26 +62,30 @@ spec:
|
|||
type: pubsub.mqtt3
|
||||
version: v1
|
||||
metadata:
|
||||
- name: url
|
||||
value: "ssl://host.domain[:port]"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
- name: caCert
|
||||
value: ${{ myLoadedCACert }}
|
||||
- name: clientCert
|
||||
value: ${{ myLoadedClientCert }}
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myMqttClientKey
|
||||
key: myMqttClientKey
|
||||
auth:
|
||||
secretStore: <SECRET_STORE_NAME>
|
||||
- name: url
|
||||
value: "ssl://host.domain[:port]"
|
||||
# TLS configuration
|
||||
- name: caCert
|
||||
value: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
- name: clientCert
|
||||
value: |
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
- name: clientKey
|
||||
secretKeyRef:
|
||||
name: myMqttClientKey
|
||||
key: myMqttClientKey
|
||||
# Optional
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "false"
|
||||
- name: qos
|
||||
value: 1
|
||||
```
|
||||
|
||||
Note that while the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
|
||||
|
|
@ -102,36 +105,34 @@ spec:
|
|||
metadata:
|
||||
- name: consumerID
|
||||
value: "{uuid}"
|
||||
- name: cleanSession
|
||||
value: "true"
|
||||
- name: url
|
||||
value: "tcp://admin:public@localhost:1883"
|
||||
- name: qos
|
||||
value: 1
|
||||
- name: retain
|
||||
value: "false"
|
||||
- name: cleanSession
|
||||
value: "true"
|
||||
- name: backoffMaxRetries
|
||||
value: "0"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Note that in this case, the value of the consumer ID is random every time Dapr restarts, so we are setting `cleanSession` to true as well.
|
||||
Note that in this case, the value of the consumer ID is random every time Dapr restarts, so you should set `cleanSession` to `true` as well.
|
||||
|
||||
## Create an MQTT3 broker
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes">}}
|
||||
|
||||
{{% codetab %}}
|
||||
You can run an MQTT broker [locally using Docker](https://hub.docker.com/_/eclipse-mosquitto):
|
||||
You can run an MQTT broker such as EMQX [locally using Docker](https://hub.docker.com/_/emqx):
|
||||
|
||||
```bash
|
||||
docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6
|
||||
docker run -d -p 1883:1883 --name mqtt emqx:latest
|
||||
```
|
||||
|
||||
You can then interact with the server using the client port: `mqtt://localhost:1883`
|
||||
You can then interact with the server using the client port: `tcp://localhost:1883`
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
|
@ -156,15 +157,12 @@ spec:
|
|||
spec:
|
||||
containers:
|
||||
- name: mqtt
|
||||
image: eclipse-mosquitto:1.6
|
||||
image: emqx:latest
|
||||
imagePullPolicy: IfNotPresent
|
||||
ports:
|
||||
- name: default
|
||||
containerPort: 1883
|
||||
protocol: TCP
|
||||
- name: websocket
|
||||
containerPort: 9001
|
||||
protocol: TCP
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
|
@ -181,10 +179,6 @@ spec:
|
|||
targetPort: default
|
||||
name: default
|
||||
protocol: TCP
|
||||
- port: 9001
|
||||
targetPort: websocket
|
||||
name: websocket
|
||||
protocol: TCP
|
||||
```
|
||||
|
||||
You can then interact with the server using the client port: `tcp://mqtt-broker.default.svc.cluster.local:1883`
|
||||
|
|
|
|||
|
|
@ -43,6 +43,8 @@ spec:
|
|||
value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Required.
|
||||
- name: entity_kind
|
||||
value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
|
||||
- name: noindex
|
||||
value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"
|
||||
```
|
||||
|
||||
{{% alert title="Warning" color="warning" %}}
|
||||
|
|
@ -63,6 +65,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|
|||
| auth_provider_x509_cert_url | Y | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
|
||||
| client_x509_cert_url | Y | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
|
||||
| entity_kind | N | The entity name in Filestore. Defaults to `"DaprState"` | `"DaprState"`
|
||||
| noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to `"false"` | `"true"`
|
||||
|
||||
## Setup GCP Firestore
|
||||
|
||||
|
|
|
@@ -0,0 +1,86 @@ (new file)
---
type: docs
title: "SQLite"
linkTitle: "SQLite"
description: Detailed information on the SQLite state store component
aliases:
  - "/operations/components/setup-state-store/supported-state-stores/setup-sqlite/"
---

This component allows using SQLite 3 as a state store for Dapr.

> The component is currently compiled with SQLite version 3.40.1.

## Create a Dapr component

Create a file called `sqlite.yaml`, paste the following, and replace the `<CONNECTION STRING>` value with your connection string, which is the path to a file on disk.

If you want to also configure SQLite to store actors, add the `actorStateStore` option as in the example below.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: state.sqlite
  version: v1
  metadata:
  # Connection string
  - name: connectionString
    value: "data.db"
  # Timeout for database operations, in seconds (optional)
  #- name: timeoutInSeconds
  #  value: 20
  # Name of the table where to store the state (optional)
  #- name: tableName
  #  value: "state"
  # Cleanup interval in seconds, to remove expired rows (optional)
  #- name: cleanupIntervalInSeconds
  #  value: 3600
  # Uncomment this if you wish to use SQLite as a state store for actors (optional)
  #- name: actorStateStore
  #  value: "true"
```

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `connectionString` | Y | The connection string for the SQLite database. See below for more details. | `"path/to/data.db"`, `"file::memory:?cache=shared"` |
| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20`. | `30` |
| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. | `"state"` |
| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <= 0 disables the periodic cleanup. | `1800`, `-1` |
| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"`. | `"true"`, `"false"` |

The **`connectionString`** parameter configures how to open the SQLite database.

- Normally, this is the path to a file on disk, either relative to the current working directory or absolute. For example: `"data.db"` (relative to the working directory) or `"/mnt/data/mydata.db"`.
- The path is interpreted by the SQLite library, so it's possible to pass additional options to the SQLite driver using "URI options" if the path begins with `file:`. For example: `"file:path/to/data.db?mode=ro"` opens the database at `path/to/data.db` in read-only mode. [Refer to the SQLite documentation for all supported URI options](https://www.sqlite.org/uri.html).
- The special case `":memory:"` launches the component backed by an in-memory SQLite database. This database is not persisted on disk, is not shared across multiple Dapr instances, and all data is lost when the Dapr sidecar is stopped. When using an in-memory database, always set the `?cache=shared` URI option: `"file::memory:?cache=shared"`
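These connection-string behaviors come from the SQLite library itself rather than from Dapr, so you can explore them directly. The following is a standalone sketch using Python's built-in `sqlite3` module; the file name `data.db` and table names are just examples:

```python
import sqlite3

# A plain path opens (or creates) a database file relative to the working directory.
conn = sqlite3.connect("data.db")
conn.execute("CREATE TABLE IF NOT EXISTS demo (key TEXT PRIMARY KEY, value TEXT)")
conn.commit()
conn.close()

# With uri=True, "file:" paths accept URI options, such as read-only mode.
ro = sqlite3.connect("file:data.db?mode=ro", uri=True)
try:
    ro.execute("INSERT INTO demo VALUES ('k', 'v')")
except sqlite3.OperationalError as e:
    print("write rejected:", e)  # attempt to write a readonly database
ro.close()

# An in-memory database opened with ?cache=shared is visible to other
# connections in the same process; without it, each connection would get
# its own private, empty database.
a = sqlite3.connect("file::memory:?cache=shared", uri=True)
b = sqlite3.connect("file::memory:?cache=shared", uri=True)
a.execute("CREATE TABLE t (x INTEGER)")
a.execute("INSERT INTO t VALUES (42)")
a.commit()
print(b.execute("SELECT x FROM t").fetchall())  # [(42,)]
```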
## Advanced

### TTLs and cleanups

This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate when the data should be considered "expired".

Because SQLite doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.

The `cleanupIntervalInSeconds` metadata property sets the interval at which expired records are deleted, which defaults to 3600 seconds (that is, 1 hour).

- Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value, for example `300` (300 seconds, or 5 minutes).
- If you do not plan to use TTLs with Dapr and the SQLite state store, consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database.
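The mechanism described above can be sketched in plain SQL. This is a simplified illustration using Python's built-in `sqlite3`; Dapr's actual table schema and garbage collector differ in the details:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE state (
        key             TEXT PRIMARY KEY,
        value           TEXT,
        expiration_time INTEGER  -- Unix timestamp; NULL means "no TTL"
    )
""")

now = int(time.time())
conn.execute("INSERT INTO state VALUES ('fresh',   'a', ?)", (now + 3600,))
conn.execute("INSERT INTO state VALUES ('expired', 'b', ?)", (now - 10,))
conn.execute("INSERT INTO state VALUES ('no-ttl',  'c', NULL)")

# Reads filter out expired rows even though they are still physically stored.
live = conn.execute(
    "SELECT key FROM state WHERE expiration_time IS NULL OR expiration_time > ? ORDER BY key",
    (now,),
).fetchall()
print([k for (k,) in live])  # ['fresh', 'no-ttl']

# The periodic "garbage collector" then physically deletes expired rows.
conn.execute(
    "DELETE FROM state WHERE expiration_time IS NOT NULL AND expiration_time <= ?",
    (now,),
)
print(conn.execute("SELECT COUNT(*) FROM state").fetchone()[0])  # 2
```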
The `expiration_time` column in the state table, where the expiration date for records is stored, **does not have an index by default**, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is `state` (the default), you can use this query:

```sql
CREATE INDEX idx_expiration_time
  ON state (expiration_time);
```
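You can check that the cleanup query actually uses such an index with SQLite's `EXPLAIN QUERY PLAN`. A quick check using Python's built-in `sqlite3` (table and column names follow the defaults described above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE state (key TEXT PRIMARY KEY, value TEXT, expiration_time INTEGER)")
conn.execute("CREATE INDEX idx_expiration_time ON state (expiration_time)")

# With the index in place, the range constraint on expiration_time should be
# reported as a SEARCH using the index rather than a full-table SCAN.
plan = conn.execute(
    "EXPLAIN QUERY PLAN DELETE FROM state WHERE expiration_time <= ?", (0,)
).fetchall()
print(plan[0][-1])  # e.g. "SEARCH state USING INDEX idx_expiration_time ..."
```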
## Related links

- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})
@@ -6,11 +6,19 @@
   features:
     bulkPublish: true
     bulkSubscribe: false
-- component: Azure Service Bus
-  link: setup-azure-servicebus
+- component: Azure Service Bus Topics
+  link: setup-azure-servicebus-topics
   state: Stable
   version: v1
   since: "1.0"
   features:
     bulkPublish: true
     bulkSubscribe: true
+- component: Azure Service Bus Queues
+  link: setup-azure-servicebus-queues
+  state: Beta
+  version: v1
+  since: "1.10"
+  features:
+    bulkPublish: true
+    bulkSubscribe: true
@@ -163,6 +163,17 @@
     etag: false
     ttl: false
     query: false
+- component: SQLite
+  link: setup-sqlite
+  state: Beta
+  version: v1
+  since: "1.10"
+  features:
+    crud: true
+    transactions: true
+    etag: true
+    ttl: true
+    query: false
 - component: Zookeeper
   link: setup-zookeeper
   state: Alpha
Binary file not shown. (After: 13 KiB)
Binary file not shown. (After: 38 KiB)
@@ -1 +1 @@
-Subproject commit 52b82d7ce6599822a37d2528379f5ca146e286bb
+Subproject commit bc3ec80e4a187269abeb46e698464d5f0a9a018a