Merge with v1.10

Signed-off-by: Shubham Sharma <shubhash@microsoft.com>
Shubham Sharma 2023-01-30 13:17:34 +05:30
commit 15bfe12460
124 changed files with 4429 additions and 585 deletions

View File

@ -11,7 +11,7 @@ Dapr uses a modular design where functionality is delivered as a component. Each
You can contribute implementations and extend the capabilities of Dapr's component interfaces via:
- The [components-contrib repository](https://github.com/dapr/components-contrib)
- [Pluggable components]({{<ref "components-concept.md#pluggable-components" >}}).
- [Pluggable components]({{<ref "components-concept.md#built-in-and-pluggable-components" >}}).
A building block can use any combination of components. For example, the [actors]({{<ref "actors-overview.md">}}) and the [state management]({{<ref "state-management-overview.md">}}) building blocks both use [state components](https://github.com/dapr/components-contrib/tree/master/state).

View File

@ -49,10 +49,10 @@ For a detailed list of all available arguments run `daprd --help` or see this [t
daprd --app-id myapp --app-port 5000
```
3. If you are using several custom components and want to specify the location of the component definition files, use the `--components-path` argument:
3. If you are using several custom resources and want to specify the location of the resource definition files, use the `--resources-path` argument:
```bash
daprd --app-id myapp --components-path <PATH-TO-COMPONENTS-FILES>
daprd --app-id myapp --resources-path <PATH-TO-RESOURCES-FILES>
```
4. Enable collection of Prometheus metrics while running your app
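For example, metrics collection can be enabled explicitly and pointed at a custom port. A minimal sketch, assuming the `--enable-metrics` and `--metrics-port` arguments listed by `daprd --help`:
```bash
daprd --app-id myapp --enable-metrics --metrics-port 9090
```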

View File

@ -0,0 +1,39 @@
---
type: docs
title: "Resiliency"
linkTitle: "Resiliency"
weight: 400
description: "Configure policies and monitor app and sidecar health"
---
Distributed applications are commonly composed of many microservices, with dozens - sometimes hundreds - of instances scaling across underlying infrastructure. As these distributed solutions grow in size and complexity, the potential for system failures inevitably increases. Service instances can fail or become unresponsive due to any number of issues, including hardware failures, unexpected throughput, or application lifecycle events, such as scaling out and application restarts. Designing and implementing a self-healing solution with the ability to detect, mitigate, and respond to failure is critical.
## Resiliency Policies
<img src="/images/resiliency_diagram.png" width="1200" alt="Diagram showing the resiliency applied to Dapr APIs">
Dapr provides a capability for defining and applying fault tolerance resiliency policies to your application. You can define policies for the following resiliency patterns:
- Timeouts
- Retries/back-offs
- Circuit breakers
These policies can be applied to any Dapr API call when calling components with a [resiliency spec]({{< ref resiliency-overview >}}).
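As an illustration, a minimal resiliency spec might look like the following sketch (the policy names and the `statestore` target are illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    timeouts:
      general: 5s
    retries:
      important:
        policy: constant
        duration: 5s
        maxRetries: 3
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 60s
        trip: consecutiveFailures >= 5
  targets:
    components:
      statestore:
        outbound:
          timeout: general
          retry: important
          circuitBreaker: simpleCB
```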
## App Health Checks
<img src="/images/observability-app-health.webp" width="800" alt="Diagram showing the app health feature. Running Dapr with app health enabled causes Dapr to periodically probe the app for its health">
Applications can become unresponsive for a variety of reasons: for example, they may be too busy to accept new work, may have crashed, or may be in a deadlock state. These conditions can be transitory or persistent.
Dapr provides a capability for monitoring app health through probes that check the health of your application and react to status changes. When an unhealthy app is detected, Dapr stops accepting new work on behalf of the application.
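In self-hosted mode, for example, app health checks can be switched on with CLI flags. A minimal sketch, assuming the `--enable-app-health-check` and `--app-health-check-path` flags (the app command is illustrative):
```bash
dapr run --app-id myapp \
         --enable-app-health-check \
         --app-health-check-path /healthz \
         --app-port 8080 -- python3 app.py
```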
Read more on how to apply [app health checks]({{< ref app-health >}}) to your application.
## Sidecar Health Checks
<img src="/images/sidecar-health.png" width="800" alt="Diagram showing the app health feature. Running Dapr with app health enabled causes Dapr to periodically probe the app for its health">
Dapr provides a way to determine its health using an [HTTP `/healthz` endpoint]({{< ref health_api.md >}}). With this endpoint, the *daprd* process, or sidecar, can be:
- Probed for its health
- Determined for readiness and liveness
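For example, a locally running sidecar can be probed directly (assuming the default Dapr HTTP port of 3500):
```bash
curl -i http://localhost:3500/v1.0/healthz
# an HTTP 204 No Content response indicates a healthy sidecar
```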
Read more about how to apply [Dapr health checks]({{< ref sidecar-health >}}) to your application.

View File

@ -1,11 +1,11 @@
---
type: docs
title: "Building blocks"
linkTitle: "Building blocks"
title: "API building blocks"
linkTitle: "API building blocks"
weight: 10
description: "Dapr capabilities that solve common development challenges for distributed applications"
---
Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
Get a high-level [overview of Dapr API building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
<img src="/images/buildingblocks-overview.png" alt="Diagram showing the different Dapr API building blocks" width=1000>

View File

@ -35,7 +35,7 @@ Create a new binding component named `checkout`. Within the `metadata` section,
{{% codetab %}}
Use the `--components-path` flag with `dapr run` to point to your custom components directory.
Use the `--resources-path` flag with `dapr run` to point to your custom resources directory.
```yaml
apiVersion: dapr.io/v1alpha1

View File

@ -41,7 +41,7 @@ Create a new binding component named `checkout`. Within the `metadata` section,
{{% codetab %}}
Use the `--components-path` flag with the `dapr run` command to point to your custom components directory.
Use the `--resources-path` flag with the `dapr run` command to point to your custom resources directory.
```yaml
apiVersion: dapr.io/v1alpha1

View File

@ -6,17 +6,40 @@ weight: 1000
description: "Overview of the configuration API building block"
---
## Introduction
Consuming application configuration is a common task when writing applications. Frequently, configuration stores are used to manage this configuration data. A configuration item is often dynamic in nature and tightly coupled to the needs of the application that consumes it.
Consuming application configuration is a common task when writing applications and frequently configuration stores are used to manage this configuration data. A configuration item is often dynamic in nature and is tightly coupled to the needs of the application that consumes it. For example, common uses for application configuration include names of secrets, different identifiers, partition or consumer IDs, names of databases to connect to etc. These configuration items are typically stored as key/value items in a state store or database. Application configuration can be changed by either developers or operators at runtime and the developer needs to be notified of these changes in order to take the required action and load the new configuration. Also configuration data is typically read only from the application API perspective, with updates to the configuration store made through operator tooling. Dapr's configuration API allows developers to consume configuration items that are returned as read only key/value pairs and subscribe to changes whenever a configuration item changes.
For example, application configuration can include:
- Names of secrets
- Different identifiers
- Partition or consumer IDs
- Names of databases to connect to, etc
Usually, configuration items are stored as key/value items in a state store or database. Developers or operators can change application configuration at runtime in the configuration store. Once changes are made, a service is notified to load the new configuration.
Configuration data is read-only from the application API perspective, with updates to the configuration store made through operator tooling. With Dapr's configuration API, you can:
- Consume configuration items that are returned as read-only key/value pairs
- Subscribe to changes whenever a configuration item changes
<img src="/images/configuration-api-overview.png" width=900>
It is worth noting that this configuration API should not be confused with the [Dapr sidecar and control plane configuration]({{<ref "configuration-overview">}}) which is used to set policies and settings on instances of Dapr sidecars or the installed Dapr control plane.
{{% alert title="Note" color="primary" %}}
The Configuration API should not be confused with the [Dapr sidecar and control plane configuration]({{< ref "configuration-overview" >}}), which is used to set policies and settings on Dapr sidecar instances or the installed Dapr control plane.
{{% /alert %}}
*This API is currently in `Alpha` state*
## Try out configuration
### Quickstart
Want to put the Dapr configuration API to the test? Walk through the following quickstart to see the configuration API in action:
| Quickstart | Description |
| ---------- | ----------- |
| [Configuration quickstart]({{< ref configuration-quickstart.md >}}) | Get configuration items or subscribe to configuration changes using the configuration API. |
### Start using the configuration API directly in your app
Want to skip the quickstarts? Not a problem. You can try out the configuration building block directly in your application to read and manage configuration data. After [Dapr is installed]({{< ref "getting-started/_index.md" >}}), you can begin using the configuration API starting with [the configuration how-to guide]({{< ref howto-manage-configuration.md >}}).
## Features
## Next steps
Follow these guides on:

View File

@ -8,12 +8,13 @@ description: "Learn how to get application configuration and subscribe for chang
This example uses the Redis configuration store component to demonstrate how to retrieve a configuration item.
<img src="/images/building-block-configuration-example.png" width=1000 alt="Diagram showing get configuration of example service">
{{% alert title="Note" color="primary" %}}
This API is currently in `Alpha` state and only available on gRPC. An HTTP1.1 supported version with this URL syntax `/v1.0/configuration` will be available before the API is certified into `Stable` state.
If you haven't already, [try out the configuration quickstart]({{< ref configuration-quickstart.md >}}) for a quick walk-through on how to use the configuration API.
{{% /alert %}}
<img src="/images/building-block-configuration-example.png" width=1000 alt="Diagram showing get configuration of example service">
## Create a configuration item in store
@ -68,7 +69,7 @@ spec:
## Retrieve Configuration Items
### Get configuration items using Dapr SDKs
{{< tabs Dotnet Java Python>}}
{{< tabs ".NET" Java Python>}}
{{% codetab %}}
@ -304,7 +305,7 @@ asyncio.run(executeConfiguration())
```
```bash
dapr run --app-id orderprocessing --components-path components/ -- python3 OrderProcessingService.py
dapr run --app-id orderprocessing --resources-path components/ -- python3 OrderProcessingService.py
```
{{% /codetab %}}

View File

@ -62,12 +62,14 @@ namespace LockService
{
class Program
{
[Obsolete("Distributed Lock API is in Alpha, this can be removed once it is stable.")]
static async Task Main(string[] args)
{
string DAPR_LOCK_NAME = "lockstore";
string fileName = "my_file_name";
var client = new DaprClientBuilder().Build();
using (var fileLock = await client.Lock(DAPR_LOCK_NAME, "my_file_name", "random_id_abc123", 60))
await using (var fileLock = await client.Lock(DAPR_LOCK_NAME, fileName, "random_id_abc123", 60))
{
if (fileLock.Success)
{
@ -147,7 +149,7 @@ namespace LockService
var client = new DaprClientBuilder().Build();
var response = await client.Unlock(DAPR_LOCK_NAME, "my_file_name", "random_id_abc123");
Console.WriteLine(response.LockStatus);
Console.WriteLine(response.status);
}
}
}

View File

@ -6,7 +6,7 @@ weight: 200
description: Dapr sidecar health checks
---
Dapr provides a way to [determine its health using an [HTTP `/healthz` endpoint]({{< ref health_api.md >}}). With this endpoint, the *daprd* process, or sidecar, can be:
Dapr provides a way to determine its health using an [HTTP `/healthz` endpoint]({{< ref health_api.md >}}). With this endpoint, the *daprd* process, or sidecar, can be:
- Probed for its health
- Determined for readiness and liveness

View File

@ -18,7 +18,7 @@ There are two scenarios for how tracing is used:
1. Dapr generates the trace context and you propagate the trace context to another service.
2. You generate the trace context and Dapr propagates the trace context to a service.
### Propogating sequential service calls
### Propagating sequential service calls
Dapr takes care of creating the trace headers. However, when there are more than two services, you're responsible for propagating the trace headers between them. Let's go through the scenarios with examples:
@ -45,7 +45,7 @@ There are no helper methods exposed in Dapr SDKs to propagate and retrieve trace
4. Pub/sub messages
Dapr generates the trace headers in the published message topic. These trace headers are propagated to any services listening on that topic.
### Propogating multiple different service calls
### Propagating multiple different service calls
In the following scenarios, Dapr does some of the work for you and you need to either create or propagate trace headers.
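For example, when you create the trace context yourself, it travels as the W3C `traceparent` header on the request; the service name and method below are illustrative:
```bash
curl -X POST \
  -H "traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01" \
  http://localhost:3500/v1.0/invoke/serviceB/method/doWork
```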
@ -115,4 +115,4 @@ In the gRPC API calls, trace context is passed through `grpc-trace-bin` header.
- [Observability concepts]({{< ref observability-concept.md >}})
- [W3C Trace Context for distributed tracing]({{< ref w3c-tracing-overview >}})
- [W3C Trace Context specification](https://www.w3.org/TR/trace-context/)
- [Observability quickstart](https://github.com/dapr/quickstarts/tree/master/tutorials/observability)

View File

@ -63,14 +63,14 @@ scopes:
- checkout
```
You can override this file with another [pubsub component]({{< ref setup-pubsub >}}) by creating a components directory (in this example, `myComponents`) containing the file and using the flag `--components-path` with the `dapr run` CLI command.
You can override this file with another [pubsub component]({{< ref setup-pubsub >}}) by creating a components directory (in this example, `myComponents`) containing the file and using the flag `--resources-path` with the `dapr run` CLI command.
{{< tabs Dotnet Java Python Go Javascript >}}
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- dotnet run
dapr run --app-id myapp --resources-path ./myComponents -- dotnet run
```
{{% /codetab %}}
@ -78,7 +78,7 @@ dapr run --app-id myapp --components-path ./myComponents -- dotnet run
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- mvn spring-boot:run
dapr run --app-id myapp --resources-path ./myComponents -- mvn spring-boot:run
```
{{% /codetab %}}
@ -86,7 +86,7 @@ dapr run --app-id myapp --components-path ./myComponents -- mvn spring-boot:run
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- python3 app.py
dapr run --app-id myapp --resources-path ./myComponents -- python3 app.py
```
{{% /codetab %}}
@ -94,7 +94,7 @@ dapr run --app-id myapp --components-path ./myComponents -- python3 app.py
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- go run app.go
dapr run --app-id myapp --resources-path ./myComponents -- go run app.go
```
{{% /codetab %}}
@ -102,7 +102,7 @@ dapr run --app-id myapp --components-path ./myComponents -- go run app.go
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- npm start
dapr run --app-id myapp --resources-path ./myComponents -- npm start
```
{{% /codetab %}}
@ -155,13 +155,14 @@ Learn more in the [declarative and programmatic subscriptions doc]({{< ref subsc
Create a file named `subscription.yaml` and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
name: order-pub-sub
spec:
topic: orders
route: /checkout
routes:
default: /checkout
pubsubname: order-pub-sub
scopes:
- orderprocessing

View File

@ -47,7 +47,7 @@ When running Dapr, set the YAML component file path to point Dapr to the compone
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- dotnet run
dapr run --app-id myapp --resources-path ./myComponents -- dotnet run
```
{{% /codetab %}}
@ -55,7 +55,7 @@ dapr run --app-id myapp --components-path ./myComponents -- dotnet run
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- mvn spring-boot:run
dapr run --app-id myapp --resources-path ./myComponents -- mvn spring-boot:run
```
{{% /codetab %}}
@ -63,7 +63,7 @@ dapr run --app-id myapp --components-path ./myComponents -- mvn spring-boot:run
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- python3 app.py
dapr run --app-id myapp --resources-path ./myComponents -- python3 app.py
```
{{% /codetab %}}
@ -71,7 +71,7 @@ dapr run --app-id myapp --components-path ./myComponents -- python3 app.py
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- npm start
dapr run --app-id myapp --resources-path ./myComponents -- npm start
```
{{% /codetab %}}
@ -79,7 +79,7 @@ dapr run --app-id myapp --components-path ./myComponents -- npm start
{{% codetab %}}
```bash
dapr run --app-id myapp --components-path ./myComponents -- go run app.go
dapr run --app-id myapp --resources-path ./myComponents -- go run app.go
```
{{% /codetab %}}
@ -104,7 +104,6 @@ In your application code, subscribe to the topic specified in the Dapr pub/sub c
```csharp
//Subscribe to a topic
[Topic("pubsub", "orders")]
[HttpPost("checkout")]
public void getCheckout([FromBody] int orderId)
{
@ -117,16 +116,15 @@ public void getCheckout([FromBody] int orderId)
{{% codetab %}}
```java
import io.dapr.client.domain.CloudEvent;
//Subscribe to a topic
@Topic(name = "orders", pubsubName = "pubsub")
@PostMapping(path = "/checkout")
public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String> cloudEvent) {
return Mono.fromRunnable(() -> {
try {
log.info("Subscriber received: " + cloudEvent.getData());
} catch (Exception e) {
throw new RuntimeException(e);
}
}
});
}
```
@ -136,13 +134,13 @@ public Mono<Void> getCheckout(@RequestBody(required = false) CloudEvent<String>
{{% codetab %}}
```python
from cloudevents.sdk.event import v1
#Subscribe to a topic
@app.subscribe(pubsub_name='pubsub', topic='orders')
def mytopic(event: v1.Event) -> None:
@app.route('/checkout', methods=['POST'])
def checkout(event: v1.Event) -> None:
data = json.loads(event.Data())
logging.info('Subscriber received: ' + str(data))
app.run(6002)
```
{{% /codetab %}}
@ -150,11 +148,16 @@ app.run(6002)
{{% codetab %}}
```javascript
//Subscribe to a topic
await server.pubsub.subscribe("pubsub", "orders", async (orderId) => {
console.log(`Subscriber received: ${JSON.stringify(orderId)}`)
const express = require('express')
const bodyParser = require('body-parser')
const app = express()
app.use(bodyParser.json({ type: 'application/*+json' }));
// listen to the declarative route
app.post('/checkout', (req, res) => {
console.log(req.body);
res.sendStatus(200);
});
await server.startServer();
```
{{% /codetab %}}
@ -163,11 +166,10 @@ await server.startServer();
```go
//Subscribe to a topic
if err := s.AddTopicEventHandler(sub, eventHandler); err != nil {
log.Fatalf("error adding topic subscription: %v", err)
}
if err := s.Start(); err != nil && err != http.ErrServerClosed {
log.Fatalf("error listenning: %v", err)
var sub = &common.Subscription{
PubsubName: "pubsub",
Topic: "orders",
Route: "/checkout",
}
func eventHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) {

View File

@ -405,9 +405,9 @@ dapr invoke --app-id checkout --method checkout/100
### Namespaces
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID: `checkout.production`
When running on [namespace supported platforms]({{< ref "service_invocation_api.md#namespace-supported-platforms" >}}), you include the namespace of the target app in the app ID. For example, following the `<app>.<namespace>` format, use `checkout.production`.
For example, invoking the example service with a namespace would look like:
Using this example, invoking the service with a namespace would look like:
```bash
curl http://localhost:3602/v1.0/invoke/checkout.production/method/checkout/100 -X POST

View File

@ -134,4 +134,4 @@ For quick testing, try using the Dapr CLI for service invocation:
- Read the [service invocation API specification]({{< ref service_invocation_api.md >}}). This reference guide for service invocation describes how to invoke methods on other services.
- Understand the [service invocation performance numbers]({{< ref perf-service-invocation.md >}}).
- Take a look at [observability]({{< ref monitoring.md >}}). Here you can dig into Dapr's monitoring tools like tracing, metrics and logging.
- Read up on our [security practices]({{< ref monitoring.md >}}) around mTLS encryption, token authentication, and endpoint authorization.
- Read up on our [security practices]({{< ref security-concept.md >}}) around mTLS encryption, token authentication, and endpoint authorization.

View File

@ -16,7 +16,7 @@ Even though the state store is a key/value store, the `value` might be a JSON do
## Querying the state
Submit query requests via HTTP POST/PUT or gRPC. The body of the request is the JSON map with 3 _optional_ entries:
Submit query requests via HTTP POST/PUT or gRPC. The body of the request is the JSON map with 3 entries:
- `filter`
- `sort`
@ -96,7 +96,7 @@ docker run -d --rm -p 27017:27017 --name mongodb mongo:5
Next, start a Dapr application. Refer to the [component configuration file](../query-api-examples/components/mongodb/mongodb.yml), which instructs Dapr to use MongoDB as its state store.
```bash
dapr run --app-id demo --dapr-http-port 3500 --components-path query-api-examples/components/mongodb
dapr run --app-id demo --dapr-http-port 3500 --resources-path query-api-examples/components/mongodb
```
Populate the state store with the employee dataset, so you can query it later.

View File

@ -2,6 +2,6 @@
type: docs
title: "Debugging Dapr applications and the Dapr control plane"
linkTitle: "Debugging"
weight: 60
weight: 50
description: "Guides on how to debug Dapr applications and the Dapr control plane"
---

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Components"
linkTitle: "Components"
weight: 30
description: "Learn more about developing Dapr's pluggable and middleware components"
---

View File

@ -0,0 +1,44 @@
---
type: docs
title: "How to: Author middleware components"
linkTitle: "Middleware components"
weight: 200
description: "Learn how to develop middleware components"
aliases:
- /developing-applications/middleware/middleware-overview/
- /concepts/middleware-concept/
---
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. In this guide, you'll learn how to create a middleware component. To learn how to configure an existing middleware component, see [Configure middleware components]({{< ref middleware.md >}}).
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns **fasthttp.RequestHandler** and **error**:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
var err error
return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
// inbound logic
h(ctx) // call the downstream handler
// outbound logic
}
}, err
}
```
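To make the inbound/outbound hooks concrete, here is a hedged sketch of a middleware that stamps a header on both the request and the response (the `customMiddleware` type and header names are illustrative):
```go
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
	return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
		return func(ctx *fasthttp.RequestCtx) {
			// inbound: tag the request before the downstream handler runs
			ctx.Request.Header.Set("X-Custom-Middleware", "inbound")
			h(ctx) // call the downstream handler
			// outbound: annotate the response on its way back to the caller
			ctx.Response.Header.Set("X-Custom-Middleware", "outbound")
		}
	}, nil
}
```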
## Related links
- [Component schema]({{< ref component-schema.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})
- [API middleware sample](https://github.com/dapr/samples/tree/master/middleware-oauth-google)

View File

@ -1,9 +1,9 @@
---
type: docs
title: "Pluggable Components"
linkTitle: "Pluggable Components"
title: "Pluggable components"
linkTitle: "Pluggable components"
description: "Guidance on how to work with pluggable components"
weight: 4000
weight: 100
aliases:
- "/operations/components/pluggable-components/pluggable-components-overview/"
---

View File

@ -0,0 +1,112 @@
---
type: docs
title: "How to: Implement pluggable components"
linkTitle: "Pluggable components"
weight: 1100
description: "Learn how to author and implement pluggable components"
---
In this guide, you'll learn why and how to implement a [pluggable component]({{< ref pluggable-components-overview >}}). To learn how to configure and register a pluggable component, refer to [How to: Register a pluggable component]({{< ref pluggable-components-registration.md >}}).
## Implement a pluggable component
In order to implement a pluggable component, you need to implement a gRPC service in the component. Implementing the gRPC service requires three steps:
### Find the proto definition file
Proto definitions are provided for each supported service interface (state store, pub/sub, bindings).
Currently, the following component APIs are supported:
- State stores
- Pub/sub
- Bindings
| Component | Type | gRPC definition | Built-in Reference Implementation | Docs |
| :---------: | :--------: | :--------------: | :----------------------------------------------------------------------------: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| State Store | `state` | [state.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/state/redis) | [concept]({{< ref "state-management-overview" >}}), [howto]({{< ref "howto-get-save-state" >}}), [api spec]({{< ref "state_api" >}}) |
| Pub/sub | `pubsub` | [pubsub.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/pubsub/redis) | [concept]({{< ref "pubsub-overview" >}}), [howto]({{< ref "howto-publish-subscribe" >}}), [api spec]({{< ref "pubsub_api" >}}) |
| Bindings | `bindings` | [bindings.proto] | [Kafka](https://github.com/dapr/components-contrib/tree/master/bindings/kafka) | [concept]({{< ref "bindings-overview" >}}), [input howto]({{< ref "howto-triggers" >}}), [output howto]({{< ref "howto-bindings" >}}), [api spec]({{< ref "bindings_api" >}}) |
Below is a snippet of the gRPC service definition for pluggable component state stores ([state.proto]):
```protobuf
// StateStore service provides a gRPC interface for state store components.
service StateStore {
// Initializes the state store component with the given metadata.
rpc Init(InitRequest) returns (InitResponse) {}
// Returns a list of implemented state store features.
rpc Features(FeaturesRequest) returns (FeaturesResponse) {}
// Ping the state store. Used for liveness purposes.
rpc Ping(PingRequest) returns (PingResponse) {}
// Deletes the specified key from the state store.
rpc Delete(DeleteRequest) returns (DeleteResponse) {}
// Get data from the given key.
rpc Get(GetRequest) returns (GetResponse) {}
// Sets the value of the specified key.
rpc Set(SetRequest) returns (SetResponse) {}
// Deletes many keys at once.
rpc BulkDelete(BulkDeleteRequest) returns (BulkDeleteResponse) {}
// Retrieves many keys at once.
rpc BulkGet(BulkGetRequest) returns (BulkGetResponse) {}
// Set the value of many keys at once.
rpc BulkSet(BulkSetRequest) returns (BulkSetResponse) {}
}
```
The interface for the `StateStore` service exposes a total of 9 methods:
- 2 methods for initialization and component capability advertisement (Init and Features)
- 1 method for health or liveness checks (Ping)
- 3 methods for CRUD (Get, Set, Delete)
- 3 methods for bulk CRUD operations (BulkGet, BulkSet, BulkDelete)
### Create service scaffolding
Use [protocol buffers and gRPC tools](https://grpc.io) to create the necessary scaffolding for the service. Learn more about these tools via [the gRPC concepts documentation](https://grpc.io/docs/what-is-grpc/core-concepts/).
These tools generate code targeting [any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions). This code serves as the base for your server and it provides:
- Functionality to handle client calls
- Infrastructure to:
- Decode incoming requests
- Execute service methods
- Encode service responses
The generated code is incomplete. It is missing:
- A concrete implementation for the methods your target service defines (the core of your pluggable component).
- Code to handle the Unix Domain Socket integration, which is Dapr-specific.
- Code handling integration with your downstream services.
Learn more about filling these gaps in the next step.
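For Go, the scaffolding step might look like the following sketch (the plugin options and output paths are assumptions, not the only valid layout):
```bash
# generate Go message types and gRPC service stubs from the proto definition
protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       dapr/proto/components/v1/state.proto
```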
### Define the service
Provide a concrete implementation of the desired service. Each component has a gRPC service definition for its core functionality which is the same as the core component interface. For example:
- **State stores**
A pluggable state store **must** provide an implementation of the `StateStore` service interface.
In addition to this core functionality, some components might also expose functionality under other **optional** services. For example, you can add extra functionality by defining the implementation for a `QueriableStateStore` service and a `TransactionalStateStore` service.
- **Pub/sub**
Pluggable pub/sub components only have a single core service interface defined ([pubsub.proto]). They have no optional service interfaces.
- **Bindings**
Pluggable input and output bindings have a single core service definition on [bindings.proto]. They have no optional service interfaces.
After generating the above state store example's service scaffolding code using gRPC and protocol buffers tools, you can define concrete implementations for the 9 methods defined under `service StateStore`, along with code to initialize and communicate with your dependencies.
This concrete implementation and auxiliary code are the **core** of your pluggable component. They define how your component behaves when handling gRPC requests from Dapr.
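As a hedged illustration, a fragment of such an implementation in Go might look like this. The generated package path is hypothetical, and only three of the nine methods are shown; the embedded `Unimplemented` server that protoc-gen-go-grpc emits stubs out the rest:
```go
package main

import (
	"context"
	"log"
	"net"
	"sync"

	"google.golang.org/grpc"

	// hypothetical import path for the code generated from state.proto
	pb "example.com/mycomponent/gen/state/v1"
)

// memStore is a toy in-memory state store backing the gRPC service.
type memStore struct {
	pb.UnimplementedStateStoreServer // stubs the methods not shown here
	mu   sync.RWMutex
	data map[string][]byte
}

func (s *memStore) Init(ctx context.Context, req *pb.InitRequest) (*pb.InitResponse, error) {
	s.data = make(map[string][]byte)
	return &pb.InitResponse{}, nil
}

func (s *memStore) Get(ctx context.Context, req *pb.GetRequest) (*pb.GetResponse, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return &pb.GetResponse{Data: s.data[req.Key]}, nil
}

func (s *memStore) Set(ctx context.Context, req *pb.SetRequest) (*pb.SetResponse, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[req.Key] = req.Value
	return &pb.SetResponse{}, nil
}

func main() {
	// Dapr discovers pluggable components over a Unix Domain Socket.
	lis, err := net.Listen("unix", "/tmp/dapr-components-sockets/memstore.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	pb.RegisterStateStoreServer(srv, &memStore{})
	log.Fatal(srv.Serve(lis))
}
```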
## Next steps
- Get started with developing a .NET pluggable component using this [sample code](https://github.com/dapr/samples/tree/master/pluggable-components-dotnet-template)
- [Review the pluggable components overview]({{< ref pluggable-components-overview.md >}})
- [Learn how to register your pluggable component]({{< ref pluggable-components-registration >}})

View File

@ -0,0 +1,69 @@
---
type: docs
title: "Pluggable components overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of pluggable component anatomy and supported component types"
---
Pluggable components are components that are not included as part of the runtime, as opposed to the built-in components included with `dapr init`. You can configure Dapr to use pluggable components that leverage the building block APIs, but are registered differently from the [built-in Dapr components](https://github.com/dapr/components-contrib).
<img src="/images/concepts-building-blocks.png" width=400>
## Pluggable components vs. built-in components
Dapr provides two approaches for registering and creating components:
- The built-in components included in the runtime and found in the [components-contrib repository](https://github.com/dapr/components-contrib).
- Pluggable components, which are deployed and registered independently.
While both registration options leverage Dapr's building block APIs, each has a different implementation process.
| Component details | [Built-in Component](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md) | Pluggable Components |
| ---------------------------- | :--------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language** | Can only be written in Go | [Can be written in any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions) |
| **Where it runs** | As part of the Dapr runtime executable | As a distinct process or container in a pod. Runs separately from Dapr itself. |
| **Registers with Dapr** | Included in the Dapr codebase | Registers with Dapr via Unix Domain Sockets (using gRPC) |
| **Distribution** | Distributed with the Dapr release. New features added to a component are aligned with Dapr releases | Distributed independently from Dapr itself. New features can be added as needed and follow the component's own release cycle. |
| **How component is activated** | Dapr starts and runs the component (automatic) | User starts the component (manual) |
## Why create a pluggable component?
Pluggable components prove useful in scenarios where:
- You require a private component.
- You want to keep your component separate from the Dapr release process.
- You are not as familiar with Go, or implementing your component in Go is not ideal.
## Features
### Implement a pluggable component
In order to implement a pluggable component, you need to implement a gRPC service in the component. Implementing the gRPC service requires three steps:
1. Find the proto definition file
1. Create service scaffolding
1. Define the service
Learn more about [how to develop and implement a pluggable component]({{< ref develop-pluggable.md >}})
### Leverage multiple building blocks for a component
In addition to implementing multiple gRPC services from the same component (for example, `StateStore`, `QueriableStateStore`, `TransactionalStateStore`, etc.), a pluggable component can also expose implementations for other component interfaces. This means that a single pluggable component can simultaneously function as a state store, pub/sub, and input or output binding. In other words, you can implement multiple component interfaces in a pluggable component and expose them as gRPC services.
While exposing multiple component interfaces on the same pluggable component lowers the operational burden of deploying multiple components, it makes implementing and debugging your component harder. If in doubt, stick to a "separation of concerns" by merging multiple component interfaces into the same pluggable component only when necessary.
## Operationalize a pluggable component
Built-in components and pluggable components share one thing in common: both need a [component specification]({{< ref "components-concept.md#component-specification" >}}). Built-in components do not require any extra steps to be used: Dapr is ready to use them automatically.
In contrast, pluggable components require additional steps before they can communicate with Dapr. You need to first run the component and facilitate Dapr-component communication to kick off the registration process.
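For instance, the component specification for a pluggable state store might look like the following sketch, where the `state.memstore` type suffix is assumed to match the component's socket name:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: prod-mystore
spec:
  # for a pluggable component, the type suffix matches the socket file name
  type: state.memstore
  version: v1
```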
## Next steps
- [Implement a pluggable component]({{< ref develop-pluggable.md >}})
- [Pluggable component registration]({{< ref "pluggable-components-registration" >}})
[state.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/state.proto
[pubsub.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/pubsub.proto
[bindings.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/bindings.proto

View File

@ -84,7 +84,7 @@ To start, create a new Azure AD application, which will also be used as Service
Prerequisites:
- [Azure Subscription](https://azure.microsoft.com/free/)
- Azure Subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- OpenSSL (included by default on all Linux and macOS systems, as well as on WSL)

View File

@ -6,6 +6,6 @@ description: "Publish APIs for Dapr services and components through Azure API Ma
weight: 2000
---
Azure API Management (APIM) is a way to create consistent and modern API gateways for back-end services, including as those built with Dapr. Dapr support can be enabled in self-hosted API Management gateways to allow them to forward requests to Dapr services, send messages to Dapr Pub/Sub topics, or trigger Dapr output bindings. For more information, read the guide on [API Management Dapr Integration policies](https://docs.microsoft.com/azure/api-management/api-management-dapr-policies) and try out the [Dapr & Azure API Management Integration Demo](https://github.com/dapr/samples/tree/master/dapr-apim-integration).
Azure API Management (APIM) is a way to create consistent and modern API gateways for back-end services, including those built with Dapr. Dapr support can be enabled in self-hosted API Management gateways to allow them to forward requests to Dapr services, send messages to Dapr Pub/Sub topics, or trigger Dapr output bindings. For more information, read the guide on [API Management Dapr Integration policies](https://docs.microsoft.com/azure/api-management/api-management-dapr-policies) and try out the [Dapr & Azure API Management Integration Demo](https://github.com/dapr/samples/tree/master/dapr-apim-integration).
{{< button text="Learn more" link="https://docs.microsoft.com/azure/api-management/api-management-dapr-policies" >}}

View File

@ -7,7 +7,7 @@ weight: 4000
---
# Prerequisites
- [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
- Azure subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) and the ***aks-preview*** extension.
- [Azure Kubernetes Service (AKS) cluster](https://docs.microsoft.com/azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli)
@ -106,4 +106,4 @@ dapr-sidecar-injector-9555889bc-rpjwl 1/1 Running 0 1h
dapr-sidecar-injector-9555889bc-rqjgt 1/1 Running 0 1h
```
For further information such as configuration options and targeting specific versions of Dapr, see the official [AKS Dapr Extension Docs](https://docs.microsoft.com/azure/aks/dapr).
For more information about configuration options and targeting specific Dapr versions, see the official [AKS Dapr Extension Docs](https://docs.microsoft.com/azure/aks/dapr).

View File

@ -2,6 +2,6 @@
type: docs
title: "Integrations"
linkTitle: "Integrations"
weight: 10
weight: 60
description: "Dapr integrations with other technologies"
---

View File

@ -214,17 +214,17 @@ func main() {
}
```
This creates a gRPC server for your app on port 4000.
This creates a gRPC server for your app on port 50001.
4. Run your app
To run locally, use the Dapr CLI:
```
dapr run --app-id goapp --app-port 4000 --app-protocol grpc go run main.go
dapr run --app-id goapp --app-port 50001 --app-protocol grpc go run main.go
```
On Kubernetes, set the required `dapr.io/app-protocol: "grpc"` and `dapr.io/app-port: "4000` annotations in your pod spec template as mentioned above.
On Kubernetes, set the required `dapr.io/app-protocol: "grpc"` and `dapr.io/app-port: "50001"` annotations in your pod spec template as mentioned above.
## Other languages

View File

@ -3,14 +3,21 @@ type: docs
weight: 5000
title: "Use the Dapr CLI in a GitHub Actions workflow"
linkTitle: "GitHub Actions"
description: "Learn how to add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments."
description: "Add the Dapr CLI to your GitHub Actions to deploy and manage Dapr in your environments."
---
Dapr can be integrated with GitHub Actions via the [Dapr tool installer](https://github.com/marketplace/actions/dapr-tool-installer) available in the GitHub Marketplace. This installer adds the Dapr CLI to your workflow, allowing you to deploy, manage, and upgrade Dapr across your environments.
## Overview
Copy and paste the following installer snippet into your application's YAML file to get started:
The `dapr/setup-dapr` action will install the specified version of the Dapr CLI on macOS, Linux and Windows runners. Once installed, you can run any [Dapr CLI command]({{< ref cli >}}) to manage your Dapr environments.
```yaml
- name: Dapr tool installer
uses: dapr/setup-dapr@v1
```
The [`dapr/setup-dapr` action](https://github.com/dapr/setup-dapr) will install the specified version of the Dapr CLI on macOS, Linux, and Windows runners. Once installed, you can run any [Dapr CLI command]({{< ref cli >}}) to manage your Dapr environments.
Refer to the [`action.yml` metadata file](https://github.com/dapr/setup-dapr/blob/main/action.yml) for details about all the inputs.
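For example, assuming the action's `version` input, a specific CLI version can be pinned (the version number below is illustrative):
```yaml
- name: Install Dapr CLI
  uses: dapr/setup-dapr@v1
  with:
    version: '1.10.0'
```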
## Example
@ -34,4 +41,8 @@ The `dapr/setup-dapr` action will install the specified version of the Dapr CLI
dapr status --kubernetes
working-directory: ./twitter-sentiment-processor/demos/demo3
```
## Next steps
Learn more about [GitHub Actions](https://docs.github.com/en/actions).

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Local development"
linkTitle: "Local development"
weight: 40
description: "Capabilities for developing Dapr applications locally"
---

View File

@ -2,6 +2,6 @@
type: docs
title: "IDE support"
linkTitle: "IDE support"
weight: 30
weight: 200
description: "Support for common Integrated Development Environments (IDEs)"
---

View File

@ -33,7 +33,7 @@ To create a dedicated components folder with the default `statestore`, `pubsub`,
1. Open your application directory in Visual Studio Code
2. Open the Command Palette with `Ctrl+Shift+P`
3. Select `Dapr: Scaffold Dapr Components`
4. Run your application with `dapr run --components-path ./components -- ...`
4. Run your application with `dapr run --resources-path ./components -- ...`
### View running Dapr applications

View File

@ -188,6 +188,7 @@ Available Commands:
stop Stop Dapr instances and their associated apps. Supported platforms: Self-hosted
uninstall Uninstall Dapr runtime. Supported platforms: Kubernetes and self-hosted
upgrade Upgrades a Dapr control plane installation in a cluster. Supported platforms: Kubernetes
version Print the Dapr runtime and CLI version
Flags:
-h, --help help for dapr

View File

@ -27,7 +27,5 @@ Hit the ground running with our Dapr quickstarts, complete with code samples aim
| [State Management]({{< ref statemanagement-quickstart.md >}}) | Store a service's data as key/value pairs in supported state stores. |
| [Bindings]({{< ref bindings-quickstart.md >}}) | Work with external systems using input bindings to respond to events and output bindings to call operations. |
| [Secrets Management]({{< ref secrets-quickstart.md >}}) | Securely fetch secrets. |
| Actors | Coming soon. |
| Observability | Coming soon. |
| Configuration | Coming soon. |
| Distributed Lock | Coming soon. |
| [Configuration]({{< ref configuration-quickstart.md >}}) | Get configuration items and subscribe for configuration updates. |
| [Resiliency]({{< ref resiliency >}}) | Define and apply fault-tolerance policies to your Dapr API requests. |

View File

@ -46,7 +46,7 @@ Run the [PostgreSQL instance](https://www.postgresql.org/) locally in a Docker c
In a terminal window, from the root of the Quickstarts clone directory, navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following command to set up the container:
@ -73,7 +73,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS
In a new terminal window, navigate to the SDK directory.
```bash
cd quickstarts/bindings/python/sdk/batch
cd bindings/python/sdk/batch
```
Install the dependencies:
@ -85,7 +85,7 @@ pip3 install -r requirements.txt
Run the `batch-sdk` service alongside a Dapr sidecar.
```bash
dapr run --app-id batch-sdk --app-port 50051 --components-path ../../../components -- python3 app.py
dapr run --app-id batch-sdk --app-port 50051 --resources-path ../../../components -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@ -137,7 +137,7 @@ Your output binding's `print` statement output:
In a new terminal, verify the same data has been inserted into the database. Navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following to start the interactive Postgres CLI:
@ -253,7 +253,7 @@ Run the [PostgreSQL instance](https://www.postgresql.org/) locally in a Docker c
In a terminal window, from the root of the Quickstarts clone directory, navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following command to set up the container:
@ -280,7 +280,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS
In a new terminal window, navigate to the SDK directory.
```bash
cd quickstarts/bindings/javascript/sdk/batch
cd bindings/javascript/sdk/batch
```
Install the dependencies:
@ -292,7 +292,7 @@ npm install
Run the `batch-sdk` service alongside a Dapr sidecar.
```bash
dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --components-path ../../../components -- node index.js
dapr run --app-id batch-sdk --app-port 5002 --dapr-http-port 3500 --resources-path ../../../components -- node index.js
```
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
@ -339,7 +339,7 @@ Your output binding's `print` statement output:
In a new terminal, verify the same data has been inserted into the database. Navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following to start the interactive Postgres CLI:
@ -455,7 +455,7 @@ Run the [PostgreSQL instance](https://www.postgresql.org/) locally in a Docker c
In a terminal window, from the root of the Quickstarts clone directory, navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following command to set up the container:
@ -482,7 +482,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS
In a new terminal window, navigate to the SDK directory.
```bash
cd quickstarts/bindings/csharp/sdk/batch
cd bindings/csharp/sdk/batch
```
Install the dependencies:
@ -495,7 +495,7 @@ dotnet build batch.csproj
Run the `batch-sdk` service alongside a Dapr sidecar.
```bash
dapr run --app-id batch-sdk --app-port 7002 --components-path ../../../components -- dotnet run
dapr run --app-id batch-sdk --app-port 7002 --resources-path ../../../components -- dotnet run
```
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
@ -543,7 +543,7 @@ Your output binding's `print` statement output:
In a new terminal, verify the same data has been inserted into the database. Navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following to start the interactive Postgres CLI:
@ -662,7 +662,7 @@ Run the [PostgreSQL instance](https://www.postgresql.org/) locally in a Docker c
In a terminal window, from the root of the Quickstarts clone directory, navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following command to set up the container:
@ -689,7 +689,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS
In a new terminal window, navigate to the SDK directory.
```bash
cd quickstarts/bindings/java/sdk/batch
cd bindings/java/sdk/batch
```
Install the dependencies:
@ -701,7 +701,7 @@ mvn clean install
Run the `batch-sdk` service alongside a Dapr sidecar.
```bash
dapr run --app-id batch-sdk --app-port 8080 --components-path ../../../components -- java -jar target/BatchProcessingService-0.0.1-SNAPSHOT.jar
dapr run --app-id batch-sdk --app-port 8080 --resources-path ../../../components -- java -jar target/BatchProcessingService-0.0.1-SNAPSHOT.jar
```
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
@ -753,7 +753,7 @@ Your output binding's `print` statement output:
In a new terminal, verify the same data has been inserted into the database. Navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following to start the interactive Postgres CLI:
@ -869,7 +869,7 @@ Run the [PostgreSQL instance](https://www.postgresql.org/) locally in a Docker c
In a terminal window, from the root of the Quickstarts clone directory, navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following command to set up the container:
@ -896,7 +896,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS
In a new terminal window, navigate to the SDK directory.
```bash
cd quickstarts/bindings/go/sdk/batch
cd bindings/go/sdk/batch
```
Install the dependencies:
@ -908,7 +908,7 @@ go build .
Run the `batch-sdk` service alongside a Dapr sidecar.
```bash
dapr run --app-id batch-sdk --app-port 6002 --dapr-http-port 3502 --dapr-grpc-port 60002 --components-path ../../../components -- go run .
dapr run --app-id batch-sdk --app-port 6002 --dapr-http-port 3502 --dapr-grpc-port 60002 --resources-path ../../../components -- go run .
```
The code inside the `process_batch` function is executed every 10 seconds (defined in [`binding-cron.yaml`]({{< ref "#componentsbinding-cronyaml-component-file" >}}) in the `components` directory). The binding trigger looks for a route called via HTTP POST in your application by the Dapr sidecar.
@ -965,7 +965,7 @@ Your output binding's `print` statement output:
In a new terminal, verify the same data has been inserted into the database. Navigate to the `bindings/db` directory.
```bash
cd quickstarts/bindings/db
cd bindings/db
```
Run the following to start the interactive Postgres CLI:

View File

@ -0,0 +1,639 @@
---
type: docs
title: "Quickstart: Configuration"
linkTitle: Configuration
weight: 76
description: Get started with Dapr's Configuration building block
---
Let's take a look at Dapr's [Configuration building block]({{< ref configuration-api-overview.md >}}). A configuration item is often dynamic in nature and tightly coupled to the needs of the application that consumes it. Configuration items are key/value pairs containing configuration data, such as:
- App ids
- Partition keys
- Database names, etc
In this quickstart, you'll run an `order-processor` microservice that utilizes the Configuration API. The service:
1. Gets configuration items from the configuration store.
1. Subscribes for configuration updates.
<img src="/images/configuration-quickstart/configuration-quickstart-flow.png" width=1000 alt="Diagram that demonstrates the flow of the configuration API quickstart with key/value pairs used.">
Select your preferred language-specific Dapr SDK before proceeding with the Quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" "Go" >}}
<!-- Python -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
```bash
git clone https://github.com/dapr/quickstarts.git
```
Once cloned, open a new terminal and run the following command to set values for configuration items `orderId1` and `orderId2`.
```bash
docker exec dapr_redis redis-cli MSET orderId1 "101" orderId2 "102"
```
### Step 2: Run the `order-processor` service
From the root of the Quickstarts clone directory, navigate to the `order-processor` directory.
```bash
cd configuration/python/sdk/order-processor
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
The expected output:
```
== APP == Configuration for orderId1 : value: "101"
== APP ==
== APP == Configuration for orderId2 : value: "102"
== APP ==
== APP == App unsubscribed from config changes
```
### (Optional) Step 3: Update configuration item values
Once the app has unsubscribed, try updating the configuration item values. Change the `orderId1` and `orderId2` values using the following command:
```bash
docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
```
Run the `order-processor` service again:
```bash
dapr run --app-id order-processor --components-path ../../../components/ --app-port 6001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
The app will return the updated configuration values:
```
== APP == Configuration for orderId1 : value: "103"
== APP ==
== APP == Configuration for orderId2 : value: "104"
== APP ==
```
### The `order-processor` service
The `order-processor` service includes code for:
- Getting the configuration items from the config store
- Subscribing to configuration updates (which you made in the CLI earlier)
- Unsubscribing from configuration updates and exiting the app after 20 seconds of inactivity.
Get configuration items:
```python
# Get config items from the config store
for config_item in CONFIGURATION_ITEMS:
config = client.get_configuration(store_name=DAPR_CONFIGURATION_STORE, keys=[config_item], config_metadata={})
print(f"Configuration for {config_item} : {config.items[config_item]}", flush=True)
```
Subscribe to configuration updates:
```python
# Subscribe for configuration changes
configuration = await client.subscribe_configuration(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS)
```
Unsubscribe from configuration updates and exit the application:
```python
# Unsubscribe from configuration updates
unsubscribed = True
for config_item in CONFIGURATION_ITEMS:
unsub_item = client.unsubscribe_configuration(DAPR_CONFIGURATION_STORE, config_item)
#...
if unsubscribed == True:
print("App unsubscribed from config changes", flush=True)
```
{{% /codetab %}}
<!-- JavaScript -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest Node.js installed](https://nodejs.org/download/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
```bash
git clone https://github.com/dapr/quickstarts.git
```
Once cloned, open a new terminal and run the following command to set values for configuration items `orderId1` and `orderId2`.
```bash
docker exec dapr_redis redis-cli MSET orderId1 "101" orderId2 "102"
```
### Step 2: Run the `order-processor` service
From the root of the Quickstarts clone directory, navigate to the `order-processor` directory.
```bash
cd configuration/javascript/sdk/order-processor
```
Install the dependencies:
```bash
npm install
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
The expected output:
```
== APP == Configuration for orderId1: {"key":"orderId1","value":"101","version":"","metadata":{}}
== APP == Configuration for orderId2: {"key":"orderId2","value":"102","version":"","metadata":{}}
== APP == App unsubscribed to config changes
```
### (Optional) Step 3: Update configuration item values
Once the app has unsubscribed, try updating the configuration item values. Change the `orderId1` and `orderId2` values using the following command:
```bash
docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
```
Run the `order-processor` service again:
```bash
dapr run --app-id order-processor --components-path ../../../components/ --app-protocol grpc --dapr-grpc-port 3500 -- node index.js
```
The app will return the updated configuration values:
```
== APP == Configuration for orderId1: {"key":"orderId1","value":"103","version":"","metadata":{}}
== APP == Configuration for orderId2: {"key":"orderId2","value":"104","version":"","metadata":{}}
```
### The `order-processor` service
The `order-processor` service includes code for:
- Getting the configuration items from the config store
- Subscribing to configuration updates (which you made in the CLI earlier)
- Unsubscribing from configuration updates and exiting the app after 20 seconds of inactivity.
Get configuration items:
```javascript
// Get config items from the config store
//...
const config = await client.configuration.get(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
Object.keys(config.items).forEach((key) => {
console.log("Configuration for " + key + ":", JSON.stringify(config.items[key]));
});
```
Subscribe to configuration updates:
```javascript
// Subscribe to config updates
try {
  const stream = await client.configuration.subscribeWithKeys(
    DAPR_CONFIGURATION_STORE,
    CONFIGURATION_ITEMS,
    (config) => {
      console.log("Configuration update", JSON.stringify(config.items));
    }
  );
```
Unsubscribe from configuration updates and exit the application:
```javascript
// Unsubscribe from config updates and exit app after 20 seconds
setTimeout(() => {
  stream.stop();
  console.log("App unsubscribed to config changes");
  process.exit(0);
},
```
{{% /codetab %}}
<!-- .NET -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
```bash
git clone https://github.com/dapr/quickstarts.git
```
Once cloned, open a new terminal and run the following command to set values for configuration items `orderId1` and `orderId2`.
```bash
docker exec dapr_redis redis-cli MSET orderId1 "101" orderId2 "102"
```
### Step 2: Run the `order-processor` service
From the root of the Quickstarts clone directory, navigate to the `order-processor` directory.
```bash
cd configuration/csharp/sdk/order-processor
```
Restore the NuGet packages and build the project:
```bash
dotnet restore
dotnet build
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
```
The expected output:
```
== APP == Configuration for orderId1: {"Value":"101","Version":"","Metadata":{}}
== APP == Configuration for orderId2: {"Value":"102","Version":"","Metadata":{}}
== APP == App unsubscribed from config changes
```
### (Optional) Step 3: Update configuration item values
Once the app has unsubscribed, try updating the configuration item values. Change the `orderId1` and `orderId2` values using the following command:
```bash
docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
```
Run the `order-processor` service again:
```bash
dapr run --app-id order-processor-http --components-path ../../../components/ --app-port 7001 -- dotnet run --project .
```
The app will return the updated configuration values:
```
== APP == Configuration for orderId1: {"Value":"103","Version":"","Metadata":{}}
== APP == Configuration for orderId2: {"Value":"104","Version":"","Metadata":{}}
```
### The `order-processor` service
The `order-processor` service includes code for:
- Getting the configuration items from the config store
- Subscribing to configuration updates (which you made in the CLI earlier)
- Unsubscribing from configuration updates and exiting the app after 20 seconds of inactivity.
Get configuration items:
```csharp
// Get config from configuration store
GetConfigurationResponse config = await client.GetConfiguration(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
foreach (var item in config.Items)
{
    var cfg = System.Text.Json.JsonSerializer.Serialize(item.Value);
    Console.WriteLine("Configuration for " + item.Key + ": " + cfg);
}
```
Subscribe to configuration updates:
```csharp
// Subscribe to config updates
SubscribeConfigurationResponse subscribe = await client.SubscribeConfiguration(DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS);
```
Unsubscribe from configuration updates and exit the application:
```csharp
// Unsubscribe from config updates and exit the app
try
{
    client.UnsubscribeConfiguration(DAPR_CONFIGURATION_STORE, subscriptionId);
    Console.WriteLine("App unsubscribed from config changes");
    Environment.Exit(0);
}
```
{{% /codetab %}}
<!-- Java -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
```bash
git clone https://github.com/dapr/quickstarts.git
```
Once cloned, open a new terminal and run the following command to set values for configuration items `orderId1` and `orderId2`.
```bash
docker exec dapr_redis redis-cli MSET orderId1 "101" orderId2 "102"
```
### Step 2: Run the `order-processor` service
From the root of the Quickstarts clone directory, navigate to the `order-processor` directory.
```bash
cd configuration/java/sdk/order-processor
```
Install the dependencies:
```bash
mvn clean install
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The expected output:
```
== APP == Configuration for orderId1: {'value':'101'}
== APP == Configuration for orderId2: {'value':'102'}
== APP == App unsubscribed to config changes
```
### (Optional) Step 3: Update configuration item values
Once the app has unsubscribed, try updating the configuration item values. Change the `orderId1` and `orderId2` values using the following command:
```bash
docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
```
Run the `order-processor` service again:
```bash
dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The app will return the updated configuration values:
```
== APP == Configuration for orderId1: {'value':'103'}
== APP == Configuration for orderId2: {'value':'104'}
```
### The `order-processor` service
The `order-processor` service includes code for:
- Getting the configuration items from the config store
- Subscribing to configuration updates (which you made in the CLI earlier)
- Unsubscribing from configuration updates and exiting the app after 20 seconds of inactivity.
Get configuration items:
```java
// Get config items from the config store
try (DaprPreviewClient client = (new DaprClientBuilder()).buildPreviewClient()) {
  for (String configurationItem : CONFIGURATION_ITEMS) {
    ConfigurationItem item = client.getConfiguration(DAPR_CONFIGURATON_STORE, configurationItem).block();
    System.out.println("Configuration for " + configurationItem + ": {'value':'" + item.getValue() + "'}");
  }
```
Subscribe to configuration updates:
```java
// Subscribe for config changes
Flux<SubscribeConfigurationResponse> subscription = client.subscribeConfiguration(DAPR_CONFIGURATON_STORE,
    CONFIGURATION_ITEMS.toArray(String[]::new));
```
Unsubscribe from configuration updates and exit the application:
```java
// Unsubscribe from config changes
UnsubscribeConfigurationResponse unsubscribe = client
    .unsubscribeConfiguration(subscriptionId, DAPR_CONFIGURATON_STORE).block();
if (unsubscribe.getIsUnsubscribed()) {
  System.out.println("App unsubscribed to config changes");
}
```
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/configuration).
```bash
git clone https://github.com/dapr/quickstarts.git
```
Once cloned, open a new terminal and run the following command to set values for configuration items `orderId1` and `orderId2`.
```bash
docker exec dapr_redis redis-cli MSET orderId1 "101" orderId2 "102"
```
### Step 2: Run the `order-processor` service
From the root of the Quickstarts clone directory, navigate to the `order-processor` directory.
```bash
cd configuration/go/sdk/order-processor
```
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
```
The expected output:
```
== APP == Configuration for orderId1: {"Value":"101","Version":"","Metadata":null}
== APP == Configuration for orderId2: {"Value":"102","Version":"","Metadata":null}
== APP == dapr configuration subscribe finished.
== APP == App unsubscribed to config changes
```
### (Optional) Step 3: Update configuration item values
Once the app has unsubscribed, try updating the configuration item values. Change the `orderId1` and `orderId2` values using the following command:
```bash
docker exec dapr_redis redis-cli MSET orderId1 "103" orderId2 "104"
```
Run the `order-processor` service again:
```bash
dapr run --app-id order-processor --app-port 6001 --components-path ../../../components -- go run .
```
The app will return the updated configuration values:
```
== APP == Configuration for orderId1: {"Value":"103","Version":"","Metadata":null}
== APP == Configuration for orderId2: {"Value":"104","Version":"","Metadata":null}
```
### The `order-processor` service
The `order-processor` service includes code for:
- Getting the configuration items from the config store
- Subscribing to configuration updates (which you made in the CLI earlier)
- Unsubscribing from configuration updates and exiting the app after 20 seconds of inactivity.
Get configuration items:
```go
// Get config items from config store
for _, item := range CONFIGURATION_ITEMS {
    config, err := client.GetConfigurationItem(ctx, DAPR_CONFIGURATION_STORE, item)
    //...
    c, _ := json.Marshal(config)
    fmt.Println("Configuration for " + item + ": " + string(c))
}
```
Subscribe to configuration updates:
```go
// Subscribe for config changes
err = client.SubscribeConfigurationItems(ctx, DAPR_CONFIGURATION_STORE, CONFIGURATION_ITEMS, func(id string, config map[string]*dapr.ConfigurationItem) {
    // First invocation when app subscribes to config changes only returns subscription id
    if len(config) == 0 {
        fmt.Println("App subscribed to config changes with subscription id: " + id)
        subscriptionId = id
        return
    }
})
```
Unsubscribe from configuration updates and exit the application:
```go
// Unsubscribe from config updates and exit app after 20 seconds
select {
case <-ctx.Done():
    err = client.UnsubscribeConfigurationItems(context.Background(), DAPR_CONFIGURATION_STORE, subscriptionId)
    //...
    {
        fmt.Println("App unsubscribed to config changes")
    }
```
{{% /codetab %}}
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
## Next steps
- Use Dapr Configuration with HTTP instead of an SDK; a minimal sketch follows this list.
- [Python](https://github.com/dapr/quickstarts/tree/master/configuration/python/http)
- [JavaScript](https://github.com/dapr/quickstarts/tree/master/configuration/javascript/http)
- [.NET](https://github.com/dapr/quickstarts/tree/master/configuration/csharp/http)
- [Java](https://github.com/dapr/quickstarts/tree/master/configuration/java/http)
- [Go](https://github.com/dapr/quickstarts/tree/master/configuration/go/http)
- Learn more about the [Configuration building block]({{< ref configuration-api-overview >}})
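If you want a quick feel for the HTTP variant before opening those samples, here is a hedged Python sketch, assuming the sidecar listens on HTTP port 3500 and the store is named `configstore` (the configuration endpoint is still alpha in this release, so the path may change):
```python
import requests

DAPR_HTTP_PORT = 3500  # assumed sidecar HTTP port
url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0-alpha1/configuration/configstore"

# Repeated `key` query parameters select the configuration items to read
resp = requests.get(url, params={"key": ["orderId1", "orderId2"]})
resp.raise_for_status()
print(resp.json())
```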
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -56,7 +56,7 @@ pip3 install -r requirements.txt
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ --app-port 5001 -- python3 app.py
dapr run --app-id order-processor --resources-path ../../../components/ --app-port 5001 -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@ -105,7 +105,7 @@ pip3 install -r requirements.txt
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --components-path ../../../components/ -- python3 app.py
dapr run --app-id checkout --resources-path ../../../components/ -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@ -235,7 +235,7 @@ Verify you have the following files included in the service directory:
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-port 5001 --app-id order-processing --app-protocol http --dapr-http-port 3501 --components-path ../../../components -- npm run start
dapr run --app-port 5001 --app-id order-processing --app-protocol http --dapr-http-port 3501 --resources-path ../../../components -- npm run start
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
@ -267,7 +267,7 @@ Verify you have the following files included in the service directory:
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --components-path ../../../components -- npm run start
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --resources-path ../../../components -- npm run start
```
In the `checkout` publisher service, we're publishing the orderId message to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
@ -389,7 +389,7 @@ dotnet build
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components --app-port 7002 -- dotnet run
dapr run --app-id order-processor --resources-path ../../../components --app-port 7002 -- dotnet run
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
@ -423,7 +423,7 @@ dotnet build
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --components-path ../../../components -- dotnet run
dapr run --app-id checkout --resources-path ../../../components -- dotnet run
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
@ -544,7 +544,7 @@ mvn clean install
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-port 8080 --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
dapr run --app-port 8080 --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
@ -582,7 +582,7 @@ mvn clean install
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --components-path ../../../components -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
dapr run --app-id checkout --resources-path ../../../components -- java -jar target/CheckoutService-0.0.1-SNAPSHOT.jar
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:
@ -706,7 +706,7 @@ go build .
Run the `order-processor` subscriber service alongside a Dapr sidecar.
```bash
dapr run --app-port 6002 --app-id order-processor-sdk --app-protocol http --dapr-http-port 3501 --components-path ../../../components -- go run .
dapr run --app-port 6002 --app-id order-processor-sdk --app-protocol http --dapr-http-port 3501 --resources-path ../../../components -- go run .
```
In the `order-processor` subscriber, we're subscribing to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. This enables your app code to talk to the Redis component instance through the Dapr sidecar.
@ -736,7 +736,7 @@ go build .
Run the `checkout` publisher service alongside a Dapr sidecar.
```bash
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --components-path ../../../components -- go run .
dapr run --app-id checkout --app-protocol http --dapr-http-port 3500 --resources-path ../../../components -- go run .
```
In the `checkout` publisher, we're publishing the orderId message to the Redis instance called `orderpubsub` [(as defined in the `pubsub.yaml` component)]({{< ref "#pubsubyaml-component-file" >}}) and topic `orders`. As soon as the service starts, it publishes in a loop:

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Resiliency Quickstarts"
linkTitle: "Resiliency"
weight: 100
description: "Get started with Dapr's resiliency component"
---

View File

@ -0,0 +1,880 @@
---
type: docs
title: "Quickstart: Service-to-component resiliency"
linkTitle: "Resiliency: Service-to-component"
weight: 110
description: "Get started with Dapr's resiliency capabilities via the state management API"
---
{{% alert title="Note" color="primary" %}}
Resiliency is currently a preview feature.
{{% /alert %}}
Observe Dapr resiliency capabilities by simulating a system failure. In this Quickstart, you will:
- Execute a microservice application with resiliency enabled that continuously persists and retrieves state via Dapr's state management API.
- Trigger resiliency policies by simulating a system failure.
- Resolve the failure and watch the microservice application resume.
<img src="/images/resiliency-quickstart-svc-component.png" width="1000" alt="Diagram showing the resiliency applied to Dapr APIs">
Select your preferred language-specific Dapr SDK before proceeding with the Quickstart.
{{< tabs "Python" "JavaScript" ".NET" "Java" "Go" >}}
<!-- Python -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Python 3.7+ installed](https://www.python.org/downloads/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a terminal window, navigate to the `order-processor` directory.
```bash
cd ../state_management/python/sdk/order-processor
```
Install the dependencies:
```bash
pip3 install -r requirements.txt
```
### Step 2: Run the application with resiliency enabled
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. With resiliency enabled, the `order-processor` sidecar loads the resiliency spec located in the components directory. The resiliency spec is:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout

spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1

    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5

  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
```
```bash
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- python3 app.py
```
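The `config.yaml` referenced by `--config` is what turns the preview feature on. Below is a hedged sketch of such a configuration, assuming Dapr's standard preview-feature mechanism (the metadata name is illustrative; check the file in the cloned repo for the exact contents):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: resiliencyConfig  # illustrative name
spec:
  features:
    - name: Resiliency
      enabled: true
```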
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to and from the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
```bash
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
```
### Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing `dapr init` on your development machine. Once the instance is stopped, write and read operations from the `order-processor` service begin to fail.
Since the `resiliency.yaml` spec defines `statestore` as a component target, retry and circuit breaker policies are applied to all failed requests:
```yaml
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
```
In a new terminal window, run the following command to stop Redis:
```bash
docker stop dapr_redis
```
Once Redis is stopped, the requests begin to fail and the retry policy titled `retryForever` is applied. The output below shows the logs from the `order-processor` service:
```bash
INFO[0006] Error processing operation component[statestore] output. Retrying...
```
As per the `retryForever` policy, each failed request is retried indefinitely, at 5-second intervals.
```yaml
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
```
Once 5 consecutive retries have failed, the circuit breaker policy, `simpleCB`, is tripped and the breaker opens, halting all requests:
```bash
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
```
```yaml
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
```
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request still fails, the circuit trips back to the open state.
```bash
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
```
This half-open/open behavior will continue for as long as the Redis container is stopped.
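To make those state transitions concrete, here is an illustrative Python sketch (not Dapr source code) of the closed/open/half-open cycle described above; the trip threshold mirrors the policy, while the sleep intervals are shortened so the demo runs quickly:
```python
import time

TRIP_THRESHOLD = 5     # simpleCB: consecutiveFailures >= 5
RETRY_INTERVAL = 0.05  # stands in for retryForever's 5s constant interval
CB_TIMEOUT = 0.05      # stands in for simpleCB's 5s open -> half-open timeout

def call_statestore(redis_up):
    """Stand-in for a Dapr state store operation."""
    if not redis_up:
        raise ConnectionError("statestore unreachable")

def run(redis_recovers_after=8):
    state, failures, attempt = "closed", 0, 0
    while True:
        attempt += 1
        redis_up = attempt > redis_recovers_after  # the fault is eventually resolved
        if state == "open":
            time.sleep(CB_TIMEOUT)
            state = "half-open"  # allow a single trial request through
        try:
            call_statestore(redis_up)
            print(f"attempt {attempt}: success; circuit {state} -> closed")
            return
        except ConnectionError:
            failures += 1
            if state == "half-open":
                state = "open"   # the trial request failed: trip back open
            elif failures >= TRIP_THRESHOLD:
                state = "open"   # too many consecutive failures: open the circuit
            print(f"attempt {attempt}: failed; circuit {state}")
            time.sleep(RETRY_INTERVAL)

run()
```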
### Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
```bash
docker start dapr_redis
```
```bash
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
```
{{% /codetab %}}
<!-- JavaScript -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest Node.js installed](https://nodejs.org/download/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a terminal window, navigate to the `order-processor` directory.
```bash
cd ../state_management/javascript/sdk/order-processor
```
Install the dependencies:
```bash
npm install
```
### Step 2: Run the application with resiliency enabled
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. With resiliency enabled, the `order-processor` sidecar loads the resiliency spec located in the components directory. The resiliency spec is:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout

spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1

    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5

  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
```
```bash
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- npm start
```
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to and from the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
```bash
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
```
### Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing `dapr init` on your development machine. Once the instance is stopped, write and read operations from the `order-processor` service begin to fail.
Since the `resiliency.yaml` spec defines `statestore` as a component target, retry and circuit breaker policies are applied to all failed requests:
```yaml
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
```
In a new terminal window, run the following command to stop Redis:
```bash
docker stop dapr_redis
```
Once Redis is stopped, the requests begin to fail and the retry policy titled `retryForever` is applied. The output below shows the logs from the `order-processor` service:
```bash
INFO[0006] Error processing operation component[statestore] output. Retrying...
```
As per the `retryForever` policy, each failed request is retried indefinitely, at 5-second intervals.
```yaml
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
```
Once 5 consecutive retries have failed, the circuit breaker policy, `simpleCB`, is tripped and the breaker opens, halting all requests:
```bash
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
```
```yaml
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
```
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request still fails, the circuit trips back to the open state.
```bash
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
```
This half-open/open behavior will continue for as long as the Redis container is stopped.
### Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
```bash
docker start dapr_redis
```
```bash
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
```
{{% /codetab %}}
<!-- .NET -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [.NET SDK or .NET 6 SDK installed](https://dotnet.microsoft.com/download).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a terminal window, navigate to the `order-processor` directory.
```bash
cd ../state_management/csharp/sdk/order-processor
```
Restore the NuGet packages and build the project:
```bash
dotnet restore
dotnet build
```
### Step 2: Run the application with resiliency enabled
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. With resiliency enabled, the `order-processor` sidecar loads the resiliency spec located in the components directory. The resiliency spec is:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout

spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1

    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5

  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
```
```bash
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- dotnet run
```
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to and from the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
```bash
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
```
### Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing `dapr init` on your development machine. Once the instance is stopped, write and read operations from the `order-processor` service begin to fail.
Since the `resiliency.yaml` spec defines `statestore` as a component target, retry and circuit breaker policies are applied to all failed requests:
```yaml
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
```
In a new terminal window, run the following command to stop Redis:
```bash
docker stop dapr_redis
```
Once Redis is stopped, the requests begin to fail and the retry policy titled `retryForever` is applied. The output below shows the logs from the `order-processor` service:
```bash
INFO[0006] Error processing operation component[statestore] output. Retrying...
```
As per the `retryForever` policy, each failed request is retried indefinitely, at 5-second intervals.
```yaml
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
```
Once 5 consecutive retries have failed, the circuit breaker policy, `simpleCB`, is tripped and the breaker opens, halting all requests:
```bash
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
```
```yaml
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
```
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request still fails, the circuit trips back to the open state.
```bash
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
```
This half-open/open behavior will continue for as long as the Redis container is stopped.
### Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
```bash
docker start dapr_redis
```
```bash
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
```
{{% /codetab %}}
<!-- Java -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- Java JDK 11 (or greater):
- [Oracle JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html#JDK11), or
- OpenJDK
- [Apache Maven](https://maven.apache.org/install.html), version 3.x.
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a terminal window, navigate to the `order-processor` directory.
```bash
cd ../state_management/java/sdk/order-processor
```
Install the dependencies:
```bash
mvn clean install
```
### Step 2: Run the application with resiliency enabled
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. With resiliency enabled, the `order-processor` sidecar loads the resiliency spec located in the components directory. The resiliency spec is:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout

spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1

    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5

  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
```
```bash
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components/ -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to and from the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
```bash
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
```
### Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing `dapr init` on your development machine. Once the instance is stopped, write and read operations from the `order-processor` service begin to fail.
Since the `resiliency.yaml` spec defines `statestore` as a component target, retry and circuit breaker policies are applied to all failed requests:
```yaml
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
```
In a new terminal window, run the following command to stop Redis:
```bash
docker stop dapr_redis
```
Once Redis is stopped, the requests begin to fail and the retry policy titled `retryForever` is applied. The output below shows the logs from the `order-processor` service:
```bash
INFO[0006] Error processing operation component[statestore] output. Retrying...
```
As per the `retryForever` policy, each failed request is retried indefinitely, at 5-second intervals.
```yaml
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
```
Once 5 consecutive retries have failed, the circuit breaker policy, `simpleCB`, is tripped and the breaker opens, halting all requests:
```bash
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
```
```yaml
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
```
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request still fails, the circuit trips back to the open state.
```bash
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
```
This half-open/open behavior will continue for as long as the Redis container is stopped.
### Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
```bash
docker start dapr_redis
```
```bash
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
```
{{% /codetab %}}
<!-- Go -->
{{% codetab %}}
### Pre-requisites
For this example, you will need:
- [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started).
- [Latest version of Go](https://go.dev/dl/).
<!-- IGNORE_LINKS -->
- [Docker Desktop](https://www.docker.com/products/docker-desktop)
<!-- END_IGNORE -->
### Step 1: Set up the environment
Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/resiliency).
```bash
git clone https://github.com/dapr/quickstarts.git
```
In a terminal window, navigate to the `order-processor` directory.
```bash
cd ../state_management/go/sdk/order-processor
```
Build the app:
```bash
go build .
```
### Step 2: Run the application with resiliency enabled
Run the `order-processor` service alongside a Dapr sidecar. In the `dapr run` command below, the `--config` parameter applies a Dapr configuration that enables the resiliency feature. With resiliency enabled, the `order-processor` sidecar loads the resiliency spec located in the components directory. The resiliency spec is:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
scopes:
  - checkout

spec:
  policies:
    retries:
      retryForever:
        policy: constant
        maxInterval: 5s
        maxRetries: -1

    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 5s
        trip: consecutiveFailures >= 5

  targets:
    components:
      statestore:
        outbound:
          retry: retryForever
          circuitBreaker: simpleCB
```
```bash
dapr run --app-id order-processor --config ../config.yaml --components-path ../../../components -- go run .
```
Once the application has started, the `order-processor` service writes and reads `orderId` key/value pairs to and from the `statestore` Redis instance [defined in the `statestore.yaml` component]({{< ref "statemanagement-quickstart.md#statestoreyaml-component-file" >}}).
```bash
== APP == Saving Order: { orderId: '1' }
== APP == Getting Order: { orderId: '1' }
== APP == Saving Order: { orderId: '2' }
== APP == Getting Order: { orderId: '2' }
== APP == Saving Order: { orderId: '3' }
== APP == Getting Order: { orderId: '3' }
== APP == Saving Order: { orderId: '4' }
== APP == Getting Order: { orderId: '4' }
```
### Step 3: Introduce a fault
Simulate a fault by stopping the Redis container instance that was initialized when executing `dapr init` on your development machine. Once the instance is stopped, write and read operations from the `order-processor` service begin to fail.
Since the `resiliency.yaml` spec defines `statestore` as a component target, retry and circuit breaker policies are applied to all failed requests:
```yaml
targets:
  components:
    statestore:
      outbound:
        retry: retryForever
        circuitBreaker: simpleCB
```
In a new terminal window, run the following command to stop Redis:
```bash
docker stop dapr_redis
```
Once Redis is stopped, the requests begin to fail and the retry policy titled `retryForever` is applied. The output below shows the logs from the `order-processor` service:
```bash
INFO[0006] Error processing operation component[statestore] output. Retrying...
```
As per the `retryForever` policy, each failed request is retried indefinitely, at 5-second intervals.
```yaml
retryForever:
  policy: constant
  maxInterval: 5s
  maxRetries: -1
```
Once 5 consecutive retries have failed, the circuit breaker policy, `simpleCB`, is tripped and the breaker opens, halting all requests:
```bash
INFO[0026] Circuit breaker "simpleCB-statestore" changed state from closed to open
```
```yaml
circuitBreakers:
  simpleCB:
    maxRequests: 1
    timeout: 5s
    trip: consecutiveFailures >= 5
```
After 5 seconds have passed, the circuit breaker switches to a half-open state, allowing one request through to verify whether the fault has been resolved. If that request still fails, the circuit trips back to the open state.
```bash
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0031] Circuit breaker "simpleCB-statestore" changed state from half-open to open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from open to half-open
INFO[0036] Circuit breaker "simpleCB-statestore" changed state from half-open to closed
```
This half-open/open behavior will continue for as long as the Redis container is stopped.
### Step 4: Remove the fault
Once you restart the Redis container on your machine, the application will recover seamlessly, picking up where it left off.
```bash
docker start dapr_redis
```
```bash
INFO[0036] Recovered processing operation component[statestore] output.
== APP == Saving Order: { orderId: '5' }
== APP == Getting Order: { orderId: '5' }
== APP == Saving Order: { orderId: '6' }
== APP == Getting Order: { orderId: '6' }
== APP == Saving Order: { orderId: '7' }
== APP == Getting Order: { orderId: '7' }
== APP == Saving Order: { orderId: '8' }
== APP == Getting Order: { orderId: '8' }
== APP == Saving Order: { orderId: '9' }
== APP == Getting Order: { orderId: '9' }
```
{{% /codetab %}}
{{< /tabs >}}
## Tell us what you think!
We're continuously working to improve our Quickstart examples and value your feedback. Did you find this quickstart helpful? Do you have suggestions for improvement?
Join the discussion in our [discord channel](https://discord.com/channels/778680217417809931/953427615916638238).
## Next steps
Learn more about [the resiliency feature]({{< ref resiliency-overview.md >}}) and how it works with Dapr's building block APIs.
{{< button text="Explore Dapr tutorials >>" page="getting-started/tutorials/_index.md" >}}

View File

@ -55,7 +55,7 @@ pip3 install -r requirements.txt
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- python3 app.py
dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@ -106,7 +106,7 @@ In the YAML file:
**`secrets.json` file**
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/python/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/javascript/sdk/order-processor/secrets.json):
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/python/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/python/sdk/order-processor/secrets.json):
```json
{
@ -164,7 +164,7 @@ npm install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- npm start
dapr run --app-id order-processor --resources-path ../../../components/ -- npm start
```
#### Behind the scenes
@ -278,7 +278,7 @@ dotnet build
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- dotnet run
dapr run --app-id order-processor --resources-path ../../../components/ -- dotnet run
```
#### Behind the scenes
@ -328,7 +328,7 @@ In the YAML file:
**`secrets.json` file**
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/csharp/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/javascript/sdk/order-processor/secrets.json):
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/csharp/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/csharp/sdk/order-processor/secrets.json):
```json
{
@ -389,7 +389,7 @@ mvn clean install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
dapr run --app-id order-processor --resources-path ../../../components/ -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
#### Behind the scenes
@ -436,7 +436,7 @@ In the YAML file:
**`secrets.json` file**
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/python/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/javascript/sdk/order-processor/secrets.json):
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/java/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/java/sdk/order-processor/secrets.json):
```json
{
@ -494,7 +494,7 @@ go build .
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- go run .
dapr run --app-id order-processor --resources-path ../../../components/ -- go run .
```
#### Behind the scenes
@ -543,7 +543,7 @@ In the YAML file:
**`secrets.json` file**
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/python/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/javascript/sdk/order-processor/secrets.json):
`SECRET_NAME` is defined in the `secrets.json` file, located in [secrets_management/go/sdk/order-processor](https://github.com/dapr/quickstarts/tree/master/secrets_management/go/sdk/order-processor/secrets.json):
```json
{

View File

@ -51,7 +51,7 @@ pip3 install -r requirements.txt
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- python3 app.py
dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py
```
> **Note**: Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`.
@ -172,7 +172,7 @@ Verify you have the following files included in the service directory:
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- npm run start
dapr run --app-id order-processor --resources-path ../../../components/ -- npm run start
```
The `order-processor` service writes, reads, and deletes an `orderId` key/value pair to the `statestore` instance [defined in the `statestore.yaml` component]({{< ref "#statestoreyaml-component-file" >}}). As soon as the service starts, it performs a loop.
@ -300,7 +300,7 @@ dotnet build
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components/ -- dotnet run
dapr run --app-id order-processor --resources-path ../../../components/ -- dotnet run
```
The `order-processor` service writes, reads, and deletes an `orderId` key/value pair to the `statestore` instance [defined in the `statestore.yaml` component]({{< ref "#statestoreyaml-component-file" >}}). As soon as the service starts, it performs a loop.
@ -419,7 +419,7 @@ mvn clean install
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
dapr run --app-id order-processor --resources-path ../../../components -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar
```
The `order-processor` service writes, reads, and deletes an `orderId` key/value pair to the `statestore` instance [defined in the `statestore.yaml` component]({{< ref "#statestoreyaml-component-file" >}}). As soon as the service starts, it performs a loop.
@ -538,7 +538,7 @@ go build .
Run the `order-processor` service alongside a Dapr sidecar.
```bash
dapr run --app-id order-processor --components-path ../../../components -- go run .
dapr run --app-id order-processor --resources-path ../../../components -- go run .
```
The `order-processor` service writes, reads, and deletes an `orderId` key/value pair to the `statestore` instance [defined in the `statestore.yaml` component]({{< ref "#statestoreyaml-component-file" >}}). As soon as the service starts, it performs a loop.

View File

@ -46,6 +46,7 @@ helm install redis bitnami/redis --set image.tag=6.2
```
For Dapr's Pub/sub functionality, you'll need at least Redis version 5. For state store, you can use a lower version.
Note that adding `--set architecture=standalone` to the `install` command creates a single replica Redis setup, which can save memory and resources if you are working in a local environment.
Run `kubectl get pods` to see the Redis containers now running in your cluster:
@ -64,9 +65,7 @@ For Kubernetes:
{{% /codetab %}}
{{% codetab %}}
<!-- IGNORE_LINKS -->
Verify you have an [Azure subscription](https://azure.microsoft.com/free/).
<!-- END_IGNORE -->
Verify you have an Azure subscription.
1. Open and log into the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow.
1. Fill out the necessary information.
@ -318,9 +317,9 @@ When you run `dapr init`, Dapr creates a default redis `pubsub.yaml` on your loc
For new component files:
1. Create a new `components` directory in your app folder containing the YAML files.
1. Provide the path to the `dapr run` command with the flag `--components-path`
1. Provide the path to the `dapr run` command with the flag `--resources-path`
If you initialized Dapr in [slim mode]({{< ref self-hosted-no-docker.md >}}) (without Docker), you need to manually create the default directory, or always specify a components directory using `--components-path`.
If you initialized Dapr in [slim mode]({{< ref self-hosted-no-docker.md >}}) (without Docker), you need to manually create the default directory, or always specify a components directory using `--resources-path`.
{{% /codetab %}}

View File

@ -65,7 +65,7 @@ In the above file definition:
Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`:
```bash
dapr run --app-id myapp --dapr-http-port 3500 --components-path ./my-components
dapr run --app-id myapp --dapr-http-port 3500 --resources-path ./my-components
```
{{% alert title="Tip" color="primary" %}}

View File

@ -44,7 +44,7 @@ spec:
### Special metadata values
Metadata values can contain a `{uuid}` tag that is replaced with a randomly generated UUID when the Dapr sidecar starts up. A new UUID is generated on every start up. It can be used, for example, to have a pod on Kubernetes with multiple application instances consuming a [shared MQTT subscription]({{< ref "setup-mqtt.md" >}}). Below is an example of using the `{uuid}` tag.
Metadata values can contain a `{uuid}` tag that is replaced with a randomly generated UUID when the Dapr sidecar starts up. A new UUID is generated on every start up. It can be used, for example, to have a pod on Kubernetes with multiple application instances consuming a [shared MQTT subscription]({{< ref "setup-mqtt3.md" >}}). Below is an example of using the `{uuid}` tag.
```yaml
apiVersion: dapr.io/v1alpha1
@ -52,7 +52,7 @@ kind: Component
metadata:
name: messagebus
spec:
type: pubsub.mqtt
type: pubsub.mqtt3
version: v1
metadata:
- name: consumerID

View File

@ -2,7 +2,7 @@
type: docs
title: "How-To: Scope components to one or more applications"
linkTitle: "Scope access to components"
weight: 300
weight: 400
description: "Limit component access to particular Dapr instances"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "How-To: Reference secrets in components"
linkTitle: "Reference secrets in components"
weight: 400
weight: 500
description: "How to securly reference secrets from a component definition"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "Updating components"
linkTitle: "Updating components"
weight: 250
weight: 300
description: "Updating deployed components used by applications"
---

View File

@ -1,12 +1,9 @@
---
type: docs
title: "Middleware"
linkTitle: "Middleware"
weight: 50
title: "Configure middleware components"
linkTitle: "Configure middleware"
weight: 2000
description: "Customize processing pipelines by adding middleware components"
aliases:
- /developing-applications/middleware/middleware-overview/
- /concepts/middleware-concept/
---
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. There are two places that you can use a middleware pipeline:
@ -14,7 +11,7 @@ Dapr allows custom processing pipelines to be defined by chaining a series of mi
1) Building block APIs - HTTP middleware components are executed when invoking any Dapr HTTP APIs.
2) Service-to-Service invocation - HTTP middleware components are applied to service-to-service invocation calls.
## Configuring API middleware pipelines
## Configure API middleware pipelines
When launched, a Dapr sidecar constructs a middleware processing pipeline for incoming HTTP calls. By default, the pipeline consists of [tracing middleware]({{< ref tracing-overview.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, secrets, configuration, distributed lock, and others.
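As a sketch, a Configuration resource that appends one middleware component to the HTTP pipeline might look like the following (the `uppercase` handler name is illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: pipeline
spec:
  httpPipeline:
    handlers:
      # Handlers are invoked in the order they are listed here
      - name: uppercase
        type: middleware.http.uppercase
```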
@ -45,7 +42,7 @@ As with other components, middleware components can be found in the [supported M
{{< button page="supported-middleware" text="See all middleware components">}}
## Configuring app middleware pipelines
## Configure app middleware pipelines
You can also use any middleware components when making service-to-service invocation calls. For example, for token validation in a zero-trust environment, a request transformation for a specific app endpoint, or to apply OAuth policies.
@ -68,35 +65,9 @@ spec:
type: middleware.http.uppercase
```
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server, so your HTTP middleware needs to be written as a FastHTTP handler. Your middleware must implement a middleware interface, which defines a **GetHandler** method that returns a function wrapping a **fasthttp.RequestHandler**, plus an **error**:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
var err error
return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
// inbound logic
h(ctx) // call the downstream handler
// outbound logic
}
}, err
}
```
## Related links
- [Learn how to author middleware components]({{< ref develop-middleware.md >}})
- [Component schema]({{< ref component-schema.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})
- [API middleware sample](https://github.com/dapr/samples/tree/master/middleware-oauth-google)

View File

@ -1,8 +1,8 @@
---
type: docs
title: "How-To: Register a pluggable component"
linkTitle: "How To: Register a pluggable component"
weight: 4500
linkTitle: "Register a pluggable component"
weight: 1000
description: "Learn how to register a pluggable component"
---
@ -10,7 +10,7 @@ description: "Learn how to register a pluggable component"
## Component registration process
Pluggable, [gRPC-based](https://grpc.io/) components are typically run as containers or processes that need to communicate with the Dapr runtime via [Unix Domain Sockets][uds]. They are automatically discovered and registered in the runtime with the following steps:
[Pluggable, gRPC-based components]({{< ref pluggable-components-overview >}}) are typically run as containers or processes that need to communicate with the Dapr runtime via [Unix Domain Sockets][uds]. They are automatically discovered and registered in the runtime with the following steps:
1. The component listens on a [Unix Domain Socket][uds] placed on the shared volume.
2. The Dapr runtime lists all [Unix Domain Sockets][uds] in the shared volume.
@ -174,11 +174,8 @@ spec:
- name: component
volumeMounts: # required, the sockets volume mount
- name: dapr-unix-domain-socket
mountPath: /dapr-unix-domain-sockets
mountPath: /tmp/dapr-components-sockets
image: YOUR_IMAGE_GOES_HERE:YOUR_IMAGE_VERSION
env:
- name: DAPR_COMPONENTS_SOCKETS_FOLDER # Tells the component where the sockets should be created.
value: /dapr-unix-domain-sockets
```
Before applying the deployment, let's add one more configuration: the component spec.

View File

@ -1,134 +0,0 @@
---
type: docs
title: "Pluggable components overview"
linkTitle: "Overview"
weight: 4400
description: "Overview of pluggable component anatomy and supported component types"
---
Pluggable components are components that are not included as part of the runtime, as opposed to the built-in components that are. You can configure Dapr to use pluggable components that leverage the building block APIs, but these are registered differently from the [built-in Dapr components](https://github.com/dapr/components-contrib). For example, you can configure a pluggable component for scenarios where you require a private component.
<img src="/images/concepts-building-blocks.png" width=400>
## Pluggable components vs. built-in components
Dapr provides two approaches for registering and creating components:
- The built-in components included in the runtime and found in the [components-contrib repository](https://github.com/dapr/components-contrib).
- Pluggable components which are deployed and registered independently.
While both registration options leverage Dapr's building block APIs, each has a different implementation process.
| Component details | [Built-in Component](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md) | Pluggable Components |
| ---------------------------- | :--------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language** | Can only be written in Go | [Can be written in any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions) |
| **Where it runs** | As part of the Dapr runtime executable | As a distinct process or container in a pod. Runs separate from Dapr itself. |
| **Registers with Dapr** | Included in the Dapr codebase | Registers with Dapr via Unix Domain Sockets (using gRPC) |
| **Distribution** | Distributed with the Dapr release. New features added to a component are aligned with Dapr releases. | Distributed independently from Dapr itself. New features can be added as needed, following the component's own release cycle. |
| **How component is activated** | Dapr starts and runs the component (automatic) | User starts the component (manual) |
## When to create a pluggable component
- You need a private component.
- You want to keep your component separate from the Dapr release process.
- You are not as familiar with Go, or implementing your component in Go is not ideal.
## Implementing a pluggable component
In order to implement a pluggable component you need to implement a gRPC service in the component. Implementing the gRPC service requires three steps:
1. **Find the proto definition file.** Proto definitions are provided for each supported service interface (state store, pub/sub, bindings).
Currently, the following component APIs are supported:
- State stores
- Pub/sub
- Bindings
| Component | Type | gRPC definition | Built-in Reference Implementation | Docs |
| :---------: | :--------: | :--------------: | :----------------------------------------------------------------------------: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| State Store | `state` | [state.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/state/redis) | [concept]({{<ref "state-management-overview">}}), [howto]({{<ref "howto-get-save-state">}}), [api spec]({{<ref "state_api">}}) |
| Pub/sub | `pubsub` | [pubsub.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/pubsub/redis) | [concept]({{<ref "pubsub-overview">}}), [howto]({{<ref "howto-publish-subscribe">}}), [api spec]({{<ref "pubsub_api">}}) |
| Bindings | `bindings` | [bindings.proto] | [Kafka](https://github.com/dapr/components-contrib/tree/master/bindings/kafka) | [concept]({{<ref "bindings-overview">}}), [input howto]({{<ref "howto-triggers">}}), [output howto]({{<ref "howto-bindings">}}), [api spec]({{<ref "bindings_api">}}) |
Here's a snippet of the gRPC service definition for pluggable component state stores ([state.proto]).
```protobuf
// StateStore service provides a gRPC interface for state store components.
service StateStore {
// Initializes the state store component with the given metadata.
rpc Init(InitRequest) returns (InitResponse) {}
// Returns a list of implemented state store features.
rpc Features(FeaturesRequest) returns (FeaturesResponse) {}
// Ping the state store. Used for liveness purposes.
rpc Ping(PingRequest) returns (PingResponse) {}
// Deletes the specified key from the state store.
rpc Delete(DeleteRequest) returns (DeleteResponse) {}
// Get data from the given key.
rpc Get(GetRequest) returns (GetResponse) {}
// Sets the value of the specified key.
rpc Set(SetRequest) returns (SetResponse) {}
// Deletes many keys at once.
rpc BulkDelete(BulkDeleteRequest) returns (BulkDeleteResponse) {}
// Retrieves many keys at once.
rpc BulkGet(BulkGetRequest) returns (BulkGetResponse) {}
// Set the value of many keys at once.
rpc BulkSet(BulkSetRequest) returns (BulkSetResponse) {}
}
```
The interface for the `StateStore` service exposes 9 methods:
- 2 methods for initialization and component capability advertisement (Init and Features)
- 1 method for health or liveness checks (Ping)
- 3 methods for CRUD (Get, Set, Delete)
- 3 methods for bulk CRUD operations (BulkGet, BulkSet, BulkDelete)
2. **Create service scaffolding.** Use [protocol buffers and gRPC tools](https://grpc.io) to create the necessary scaffolding for the service. You may want to get acquainted with [the gRPC concepts documentation](https://grpc.io/docs/what-is-grpc/core-concepts/).
The tools can generate code targeting [any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions). This code serves as the base for your server and it provides functionality to handle client calls along with infrastructure to decode incoming requests, execute service methods, and encode service responses.
The generated code is not complete: it is missing a concrete implementation for the methods your target service defines, i.e., the core of your pluggable component. This is further explored in the next step. Additionally, you have to provide code that handles the Dapr-specific Unix Domain Socket integration, as well as code that integrates with your downstream services. A sketch of this scaffolding step is shown after this list.
3. **Define the service.** Provide a concrete implementation of the desired service.
As a first step, [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview) and [gRPC](https://grpc.io/docs/what-is-grpc/introduction/) tools are used to create the server code for this service. Next, define concrete implementations for these 9 methods.
Each component has a gRPC service definition for its core functionality which is the same as the core component interface. For example:
- **State stores**
A pluggable state store **must** provide an implementation of the `StateStore` service interface. In addition to this core functionality, some components might also expose functionality under other **optional** services. For example, you can add extra functionality by defining the implementation for a `QueriableStateStore` service and a `TransactionalStateStore` service.
- **Pub/sub**
Pluggable pub/sub components only have a single core service interface defined ([pubsub.proto]). They have no optional service interfaces.
- **Bindings**
Pluggable input and output bindings have a single core service definition on [bindings.proto]. They have no optional service interfaces.
Following the State Store example from step 1, after generating its service scaffolding code using gRPC and protocol buffers tools (step 2),
the next step is to define concrete implementations for the 9 methods defined under `service StateStore`, along with code to initialize and communicate with your dependencies.
This concrete implementation and auxiliary code are the core of your pluggable component: they define how your component behaves when handling gRPC requests from Dapr.
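As a minimal sketch of the scaffolding step above, the Go stubs for the state store service could be generated roughly as follows; the local proto path and output locations are assumptions that depend on your setup:

```sh
# Assumes protoc and the Go plugins (protoc-gen-go, protoc-gen-go-grpc)
# are installed, and that state.proto has been downloaded locally.
protoc --go_out=. --go-grpc_out=. \
  dapr/proto/components/v1/state.proto
```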
### Leveraging multiple building blocks for a component
In addition to implementing multiple gRPC services from the same component (for example `StateStore`, `QueriableStateStore`, `TransactionalStateStore` etc.), a pluggable component can also expose implementations for other component interfaces. This means that a single pluggable component can function as a state store, pub/sub, and input or output binding, all at the same time. In other words, you can implement multiple component interfaces in a pluggable component and expose them as gRPC services.
While exposing multiple component interfaces on the same pluggable component lowers the operational burden of deploying multiple components, it makes implementing and debugging your component harder. If in doubt, stick to a "separation of concerns" by merging multiple component interfaces into the same pluggable component only when necessary.
## Operationalizing a pluggable component
Built-in components and pluggable components share one thing in common: both need a [component specification]({{<ref "components-concept.md#component-specification">}}). Built-in components do not require any extra steps to be used: Dapr is ready to use them automatically.
In contrast, pluggable components require additional steps before they can communicate with Dapr. You need to first run the component and facilitate Dapr-component communication to kick off the registration process.
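As an illustrative sketch, a component specification for a hypothetical pluggable state store might look like the following, where the suffix after `state.` is assumed to match the name of the Unix Domain Socket the component creates:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-state-store
spec:
  # The type suffix is assumed to match the component's socket name
  type: state.my-pluggable-state-store
  version: v1
```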
## Next steps
- [Pluggable component registration]({{<ref "pluggable-components-registration">}})
[state.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/state.proto
[pubsub.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/pubsub.proto
[bindings.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/bindings.proto

View File

@ -3,7 +3,7 @@ type: docs
title: "Bindings components"
linkTitle: "Bindings"
description: "Guidance on setting up Dapr bindings components"
weight: 4000
weight: 900
---
Dapr integrates with external resources to allow apps to both be triggered by external events and interact with the resources. Each binding component has a name, and this name is used when interacting with the resource.
@ -62,7 +62,7 @@ Once you have created the component's YAML file, follow these instructions to ap
{{< tabs "Self-Hosted" "Kubernetes" >}}
{{% codetab %}}
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--resources-path`.
{{% /codetab %}}
{{% codetab %}}

View File

@ -3,7 +3,7 @@ type: docs
title: "Pub/Sub brokers"
linkTitle: "Pub/sub brokers"
description: "Guidance on setting up different message brokers for Dapr Pub/Sub"
weight: 2000
weight: 700
aliases:
- "/operations/components/setup-pubsub/setup-pubsub-overview/"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "HowTo: Configure Pub/Sub components with multiple namespaces"
linkTitle: "Multiple namespaces"
weight: 20000
weight: 10000
description: "Use Dapr Pub/Sub with multiple namespaces"
---

View File

@ -3,7 +3,7 @@ type: docs
title: "Secret store components"
linkTitle: "Secret stores"
description: "Guidance on setting up different secret store components"
weight: 3000
weight: 800
aliases:
- "/operations/components/setup-state-store/secret-stores-overview/"
---
@ -65,7 +65,7 @@ Once you have created the component's YAML file, follow these instructions to ap
{{< tabs "Self-Hosted" "Kubernetes" >}}
{{% codetab %}}
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--resources-path`.
{{% /codetab %}}
{{% codetab %}}

View File

@ -3,7 +3,7 @@ type: docs
title: "State stores components"
linkTitle: "State stores"
description: "Guidance on setting up different state stores for Dapr state management"
weight: 1000
weight: 600
aliases:
- "/operations/components/setup-state-store/setup-state-store-overview/"
---

View File

@ -48,6 +48,7 @@ The following configuration settings can be applied to Dapr application sidecars
- [Tracing](#tracing)
- [Metrics](#metrics)
- [Logging](#logging)
- [Middleware](#middleware)
- [Scope secret store access](#scope-secret-store-access)
- [Access Control allow lists for building block APIs](#access-control-allow-lists-for-building-block-apis)
@ -85,7 +86,7 @@ The following table lists the properties for tracing:
`samplingRate` is used to enable or disable tracing. To disable sampling,
set `samplingRate : "0"` in the configuration. The valid range of `samplingRate` is between 0 and 1 inclusive; the sampling rate determines whether a trace span is sampled based on this value. `samplingRate : "1"` samples all traces. By default, the sampling rate is 0.0001, or 1 in 10,000 traces.
See [Observability distributed tracing]({{< ref "tracing-overview.md" >}}) for more information
See [Observability distributed tracing]({{< ref "tracing-overview.md" >}}) for more information.
#### Metrics
@ -104,7 +105,29 @@ The following table lists the properties for metrics:
|--------------|--------|-------------|
| `enabled` | boolean | Whether metrics should be enabled.
See [metrics documentation]({{< ref "metrics-overview.md" >}}) for more information
See [metrics documentation]({{< ref "metrics-overview.md" >}}) for more information.
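As a sketch, metrics can be toggled in the `Configuration` spec like so (the resource name is illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  metric:
    enabled: true
```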
#### Logging
The logging section can be used to configure how logging works in the Dapr Runtime.
The `logging` section under the `Configuration` spec contains the following properties:
```yml
logging:
apiLogging:
enabled: false
omitHealthChecks: false
```
The following table lists the properties for logging:
| Property | Type | Description |
|--------------|--------|-------------|
| `apiLogging.enabled` | boolean | The default value for the `--enable-api-logging` flag for `daprd` (and the corresponding `dapr.io/enable-api-logging` annotation): the value set in the Configuration spec is used as default unless a `true` or `false` value is passed to each Dapr Runtime. Default: `false`.
| `apiLogging.omitHealthChecks` | boolean | If `true`, calls to health check endpoints (e.g. `/v1.0/healthz`) are not logged when API logging is enabled. This is useful if those calls are adding a lot of noise in your logs. Default: `false`
See [logging documentation]({{< ref "logs.md" >}}) for more information.
#### Middleware
@ -130,8 +153,8 @@ The following table lists the properties for HTTP handlers:
| Property | Type | Description |
|----------|--------|-------------|
| name | string | Name of the middleware component
| type | string | Type of middleware component
| `name` | string | Name of the middleware component
| `type` | string | Type of middleware component
See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information

View File

@ -64,9 +64,11 @@ There are some scenarios where it's necessary to install Dapr from a private Hel
- having a custom Dapr deployment
- pulling Helm charts from trusted registries that are managed and maintained by your organization
```bash
export DAPR_HELM_REPO_URL="https://helm.custom-domain.com/dapr/dapr"
export DAPR_HELM_REPO_USERNAME="username_xxx"
export DAPR_HELM_REPO_PASSWORD="passwd_xxx"
```
Setting the above parameters will allow `dapr init -k` to install Dapr images from the configured Helm repository.
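For example, with the variables above exported, initialization proceeds as usual:

```bash
dapr init -k
```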

View File

@ -14,9 +14,12 @@ To address this issue the Dapr sidecar has an endpoint to `Shutdown` the sidecar
When running a basic [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/), you will need to call the `/shutdown` endpoint for the sidecar to stop gracefully, so that the job is considered `Completed`.
When a job is finish without calling `Shutdown` your job will be in a `NotReady` state with only the `daprd` container running endlessly.
When a job is finished without calling `Shutdown`, your job will be in a `NotReady` state with only the `daprd` container running endlessly.
Be sure and use the *POST* HTTP verb when calling the shutdown API.
Stopping the Dapr sidecar will cause its readiness and liveness probes to fail in your container, because the sidecar has been shut down.
To prevent Kubernetes from trying to restart your job, set your job's `restartPolicy` to `Never`.
Be sure to use the *POST* HTTP verb when calling the shutdown HTTP API.
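For example, assuming the default Dapr HTTP port of 3500, the call from inside the job might look like:

```bash
curl -X POST http://localhost:3500/v1.0/shutdown
```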
```yaml
apiVersion: batch/v1
@ -37,7 +40,7 @@ spec:
restartPolicy: Never
```
You can also call the `Shutdown` from any of the Dapr SDK
You can also call `Shutdown` from any of the Dapr SDKs:
```go
package main

View File

@ -6,36 +6,49 @@ weight: 30000
description: "How to deploy and run Dapr in self-hosted mode without Docker installed on the local machine"
---
This article provides guidance on running Dapr in self-hosted mode without Docker.
## Prerequisites
- [Dapr CLI]({{< ref "install-dapr-selfhost.md#installing-dapr-cli" >}})
- [Install the Dapr CLI]({{< ref "install-dapr-selfhost.md#installing-dapr-cli" >}})
## Initialize Dapr without containers
The Dapr CLI provides an option to initialize Dapr using slim init, without the default creation of a development environment which has a dependency on Docker. To initialize Dapr with slim init, after installing the Dapr CLI use the following command:
The Dapr CLI provides an option to initialize Dapr using slim init, without the default creation of a development environment with a dependency on Docker. To initialize Dapr with slim init, after installing the Dapr CLI, use the following command:
```bash
dapr init --slim
```
In this mode two different binaries are installed `daprd` and `placement`. The `placement` binary is needed to enable [actors]({{< ref "actors-overview.md" >}}) in a Dapr self-hosted installation.
Two different binaries are installed:
- `daprd`
- `placement`
In this mode no default components such as Redis are installed for state management or pub/sub. This means, that aside from [Service Invocation]({{< ref "service-invocation-overview.md" >}}), no other building block functionality is available on install out of the box. Users are free to setup their own environment and custom components. Furthermore, actor based service invocation is possible if a state store is configured as explained in the following sections.
The `placement` binary is needed to enable [actors]({{< ref "actors-overview.md" >}}) in a Dapr self-hosted installation.
## Service invocation
See [this sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) for an example on how to perform service invocation in this mode.
In slim init mode, no default components (such as Redis) are installed for state management or pub/sub. This means that, aside from [service invocation]({{< ref "service-invocation-overview.md" >}}), no other building block functionality is available "out-of-the-box" on install. Instead, you can set up your own environment and custom components.
## Enabling state management or pub/sub
Actor-based service invocation is possible if a state store is configured, as explained in the following sections.
See configuring Redis in self-hosted mode [without docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.
## Perform service invocation
See [the _Hello Dapr slim_ sample](https://github.com/dapr/samples/tree/master/hello-dapr-slim) for an example on how to perform service invocation in slim init mode.
## Enabling actors
## Enable state management or pub/sub
The placement service must be run locally to enable actor placement. Also, a [transactional state store that supports ETags]({{< ref "supported-state-stores.md" >}}) must be enabled to use actors, for example, [Redis configured in self-hosted mode](https://redis.io/topics/quickstart).
See documentation around [configuring Redis in self-hosted mode without Docker](https://redis.io/topics/quickstart) to enable a local state store or pub/sub broker for messaging.
By default for Linux/MacOS the `placement` binary is installed in `/$HOME/.dapr/bin` or for Windows at `%USERPROFILE%\.dapr\bin`.
## Enable actors
To enable actor placement:
- Run the placement service locally.
- Enable a [transactional state store that supports ETags]({{< ref "supported-state-stores.md" >}}) to use actors. For example, [Redis configured in self-hosted mode](https://redis.io/topics/quickstart).
By default, the `placement` binary is installed in:
- For Linux/MacOS: `/$HOME/.dapr/bin`
- For Windows: `%USERPROFILE%\.dapr\bin`
{{< tabs "Linux/MacOS" "Windows">}}
{{% codetab %}}
```bash
$ $HOME/.dapr/bin/placement
@ -51,16 +64,48 @@ INFO[0001] leader is established. instance=Nicoletaz-L10.
```
From here on you can follow the sample example created for the [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors), [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor) or [dotnet-sdk]({{< ref "dotnet-actors-howto.md" >}}) for running an application with Actors enabled.
{{% /codetab %}}
Update the state store configuration files to have the Redis host and password match the setup that you have. Additionally to enable it as a actor state store have the metadata piece added similar to the [sample Java Redis component](https://github.com/dapr/java-sdk/blob/master/examples/components/state/redis.yaml) definition.
{{% codetab %}}
When running standalone placement on Windows, specify port 6050:
```bash
%USERPROFILE%/.dapr/bin/placement.exe -port 6050
time="2022-10-17T14:56:55.4055836-05:00" level=info msg="starting Dapr Placement Service -- version 1.9.0 -- commit fdce5f1f1b76012291c888113169aee845f25ef8" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:55.4066226-05:00" level=info msg="log level set to: info" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:55.4067306-05:00" level=info msg="metrics server started on :9090/" instance=LAPTOP-OMK50S19 scope=dapr.metrics type=log ver=1.9.0
time="2022-10-17T14:56:55.4077529-05:00" level=info msg="Raft server is starting on 127.0.0.1:8201..." instance=LAPTOP-OMK50S19 scope=dapr.placement.raft type=log ver=1.9.0
time="2022-10-17T14:56:55.4077529-05:00" level=info msg="placement service started on port 6050" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:55.4082772-05:00" level=info msg="Healthz server is listening on :8080" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:56.8232286-05:00" level=info msg="cluster leadership acquired" instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
time="2022-10-17T14:56:56.8232286-05:00" level=info msg="leader is established." instance=LAPTOP-OMK50S19 scope=dapr.placement type=log ver=1.9.0
```
{{% /codetab %}}
{{< /tabs >}}
Now, to run an application with actors enabled, you can follow the sample example created for:
- [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors)
- [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor)
- [dotnet-sdk]({{< ref "dotnet-actors-howto.md" >}})
Update the state store configuration files to match the Redis host and password with your setup.
Enable it as an actor state store by adding a metadata entry similar to the [sample Java Redis component](https://github.com/dapr/java-sdk/blob/master/examples/components/state/redis.yaml) definition.
```yaml
- name: actorStateStore
value: "true"
```
## Clean up
## Cleanup
When finished, follow [Uninstall Dapr in a self-hosted environment]({{< ref self-hosted-uninstall >}}) to remove the binaries.
Follow the uninstall [instructions]({{< ref "install-dapr-selfhost.md#uninstall-dapr-in-a-self-hosted-mode" >}}) to remove the binaries.
## Next steps
- Run Dapr with [Podman]({{< ref self-hosted-with-podman.md >}}), using the default [Docker]({{< ref install-dapr-selfhost.md >}}), or in an [airgap environment]({{< ref self-hosted-airgap.md >}})
- [Upgrade Dapr in self-hosted mode]({{< ref self-hosted-upgrade >}})

View File

@ -123,10 +123,10 @@ services:
"--app-id", "nodeapp",
"--app-port", "3000",
"--placement-host-address", "placement:50006", # Dapr's placement service can be reach via the docker DNS entry
"--components-path", "./components"
"--resources-path", "./components"
]
volumes:
- "./components/:/components" # Mount our components folder for the runtime to use. The mounted location must match the --components-path argument.
- "./components/:/components" # Mount our components folder for the runtime to use. The mounted location must match the --resources-path argument.
depends_on:
- nodeapp
network_mode: "service:nodeapp" # Attach the nodeapp-dapr service to the nodeapp network namespace

View File

@ -32,25 +32,25 @@ description: "How to install Fluentd, Elastic Search, and Kibana to search logs
By default, the chart creates 3 replicas which must be on different nodes. If your cluster has fewer than 3 nodes, specify a smaller number of replicas. For example, this sets the number of replicas to 1:
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set replicas=1
helm install elasticsearch elastic/elasticsearch --version 7.17.3 -n dapr-monitoring --set replicas=1
```
Otherwise:
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring
helm install elasticsearch elastic/elasticsearch --version 7.17.3 -n dapr-monitoring
```
If you are using minikube or simply want to disable persistent volumes for development purposes, you can do so by using the following command:
```bash
helm install elasticsearch elastic/elasticsearch -n dapr-monitoring --set persistence.enabled=false,replicas=1
helm install elasticsearch elastic/elasticsearch --version 7.17.3 -n dapr-monitoring --set persistence.enabled=false,replicas=1
```
4. Install Kibana
```bash
helm install kibana elastic/kibana -n dapr-monitoring
helm install kibana elastic/kibana --version 7.17.3 -n dapr-monitoring
```
5. Ensure that Elastic Search and Kibana are running in your Kubernetes cluster

View File

@ -58,7 +58,7 @@ When using the Dapr CLI to run an application, pass the `--log-as-json` option t
```sh
dapr run \
--app-id orderprocessing \
--components-path ./components/ \
--resources-path ./components/ \
--log-as-json \
-- python3 OrderProcessingService.py
```
@ -109,7 +109,9 @@ spec:
## API Logging
API logging enables you to see the API calls your application makes to the Dapr sidecar, to debug issues or monitor the behavior of your application. You can combine both Dapr API logging with Dapr log events. See [configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}}) and [configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}}) for more information.
API logging enables you to see the API calls your application makes to the Dapr sidecar, to debug issues or monitor the behavior of your application. You can combine both Dapr API logging with Dapr log events.
See [configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}}) and [configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}}) for more information.
## Log collectors

View File

@ -42,6 +42,7 @@ The `grafana-actor-dashboard.json` template shows Dapr Sidecar status, actor inv
```bash
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```
1. Install the chart:
@ -176,4 +177,4 @@ First you need to connect Prometheus as a data source to Grafana.
<div class="embed-responsive embed-responsive-16by9">
<iframe width="560" height="315" src="https://www.youtube.com/embed/8W-iBDNvCUM?start=2577" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</div>
</div>

View File

@ -9,8 +9,6 @@ description: "Configure Dapr retries, timeouts, and circuit breakers"
Resiliency is currently a preview feature. Before you can utilize a resiliency spec, you must first [enable the resiliency preview feature]({{< ref support-preview-features >}}).
{{% /alert %}}
Distributed applications are commonly comprised of many microservices, with dozens, even hundreds, of instances for any given application. With so many microservices, the likelihood of a system failure increases. For example, an instance can fail or be unresponsive due to hardware, an overwhelming number of requests, application restarts/scale outs, or several other reasons. These events can cause a network call between services to fail. Designing and implementing your application with fault tolerance, the ability to detect, mitigate, and respond to failures, allows your application to recover to a functioning state and become self healing.
Dapr provides a capability for defining and applying fault tolerance resiliency policies via a [resiliency spec]({{< ref "resiliency-overview.md#complete-example-policy" >}}). Resiliency specs are saved in the same location as component specs and are applied when the Dapr sidecar starts. The sidecar determines how to apply resiliency policies to your Dapr API calls. In self-hosted mode, the resiliency spec must be named `resiliency.yaml`. In Kubernetes, Dapr finds the named resiliency specs used by your application. Within the resiliency spec, you can define policies for popular resiliency patterns, such as:
- [Timeouts]({{< ref "policies.md#timeouts" >}})
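As an illustrative sketch, assuming the `v1alpha1` Resiliency schema (policy names here are hypothetical), a resiliency spec combining these patterns might look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    timeouts:
      # Named policies that targets can reference
      general: 5s
    retries:
      retryPolicy:
        policy: constant
        duration: 5s
        maxRetries: 10
    circuitBreakers:
      simpleCB:
        maxRequests: 1
        timeout: 30s
        trip: consecutiveFailures > 8
```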

View File

@ -17,7 +17,6 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| ---------- |-------------|---------|---------------|-----------------|
| **`--image-registry`** flag in Dapr CLI| In self hosted mode you can set this flag to specify any private registry to pull the container images required to install Dapr| N/A | [CLI init command reference]({{<ref "dapr-init.md#self-hosted-environment" >}}) | v1.7 |
| **Resiliency** | Allows configuring fine-grained policies for retries, timeouts, and circuit breaking. | `Resiliency` | [Configure Resiliency Policies]({{<ref "resiliency-overview">}}) | v1.7|
| **Service invocation without default `content-type`** | When enabled removes the default service invocation content-type header value `application/json` when no content-type is provided. This will become the default behavior in release v1.9.0. This requires you to explicitly set content-type headers where required for your apps. | `ServiceInvocation.NoDefaultContentType` | [Service Invocation]({{<ref "service_invocation_api.md#request-contents" >}}) | v1.7 |
| **App Middleware** | Allow middleware components to be executed when making service-to-service calls | N/A | [App Middleware]({{<ref "middleware.md#app-middleware" >}}) | v1.9 |
| **App health checks** | Allows configuring app health checks | `AppHealthCheck` | [App health checks]({{<ref "app-health.md" >}}) | v1.9 |
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{<ref "components-concept#pluggable-components" >}})| v1.9 |

View File

@ -34,13 +34,20 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status |
|--------------------|:--------:|:--------|---------|---------|---------|
| October 13th 2022 | 1.9.0</br> | 1.9.0 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.1 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported (current) |
| December 2nd 2022 | 1.9.5</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported (current) |
| November 17th 2022 | 1.9.4</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
| November 4th 2022 | 1.9.3</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
| November 1st 2022 | 1.9.2</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.1 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
| October 26th 2022 | 1.9.1</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.1 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
| October 13th 2022 | 1.9.0</br> | 1.9.1 | Java 1.7.0 </br>Go 1.6.0 </br>PHP 1.1.0 </br>Python 1.8.3 </br>.NET 1.9.0 </br>JS 2.4.2 | 0.11.0 | Supported |
| October 26th 2022 | 1.8.6</br> | 1.8.1 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| October 13th 2022 | 1.8.5</br> | 1.8.1 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| August 10th 2022 | 1.8.4</br> | 1.8.1 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| July 29th 2022 | 1.8.3</br> | 1.8.0 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| July 21st 2022 | 1.8.2</br> | 1.8.0 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| July 20th 2022 | 1.8.1</br> | 1.8.0 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| July 7th 2022 | 1.8.0</br> | 1.8.0 | Java 1.6.0 </br>Go 1.5.0 </br>PHP 1.1.0 </br>Python 1.7.0 </br>.NET 1.8.0 </br>JS 2.3.0 | 0.11.0 | Supported |
| October 26th 2022 | 1.7.5</br> | 1.7.0 | Java 1.5.0 </br>Go 1.4.0 </br>PHP 1.1.0 </br>Python 1.6.0 </br>.NET 1.7.0 </br>JS 2.2.1 | 0.10.0 | Supported |
| May 31st 2022 | 1.7.4</br> | 1.7.0 | Java 1.5.0 </br>Go 1.4.0 </br>PHP 1.1.0 </br>Python 1.6.0 </br>.NET 1.7.0 </br>JS 2.2.1 | 0.10.0 | Supported |
| May 17th 2022 | 1.7.3</br> | 1.7.0 | Java 1.5.0 </br>Go 1.4.0 </br>PHP 1.1.0 </br>Python 1.6.0 </br>.NET 1.7.0 </br>JS 2.2.1 | 0.10.0 | Supported |
| Apr 22nd 2022      | 1.7.2</br> | 1.7.0 | Java 1.5.0 </br>Go 1.4.0 </br>PHP 1.1.0 </br>Python 1.6.0 </br>.NET 1.7.0 </br>JS 2.1.0 | 0.10.0 | Supported |
@ -70,19 +77,18 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| Current Runtime version | Must upgrade through | Target Runtime version |
|--------------------------|-----------------------|------------------------- |
| 1.4.0 to 1.4.2 | N/A | 1.4.4 |
| | 1.4.4 | 1.5.2 |
| | 1.5.2 | 1.6.0 |
| | 1.6.0 | 1.6.2 |
| | 1.6.0 | 1.7.4 |
| 1.5.0 to 1.5.2 | N/A | 1.6.0 |
| | 1.6.0 | 1.6.2 |
| | 1.6.0 | 1.7.4 |
| 1.6.0 | N/A | 1.6.2 |
| 1.6.0 | N/A | 1.7.4 |
| 1.7.0 to 1.7.4 | N/A | 1.8.0 |
| 1.8.0 | N/A | 1.8.5 |
| 1.9.0 | N/A | 1.9.0 |
| | 1.6.2 | 1.7.5 |
| | 1.7.5 | 1.8.6 |
| | 1.8.6 | 1.9.5 |
| 1.6.0 to 1.6.2 | N/A | 1.7.5 |
| | 1.7.5 | 1.8.6 |
| | 1.8.6 | 1.9.5 |
| 1.7.0 to 1.7.5 | N/A | 1.8.6 |
| | 1.8.6 | 1.9.5 |
| 1.8.0 to 1.8.6 | N/A | 1.9.5 |
| 1.9.0 | N/A | 1.9.5 |
## Breaking changes and deprecations

View File

@ -77,3 +77,37 @@ time="2022-03-16T18:32:02.917629403Z" level=info msg="HTTP API Called" method="P
time="2022-03-16T18:32:03.137830112Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
time="2022-03-16T18:32:03.359097916Z" level=info msg="HTTP API Called" method="POST /v1.0/invoke/{id}/method/{method:*}" app_id=invoke-caller instance=invokecaller-f4f949886-cbnmt scope=dapr.runtime.http-info type=log useragent=Go-http-client/1.1 ver=edge
```
## API logging configuration
Using the [Dapr Configuration spec]({{< ref "configuration-overview.md" >}}#sidecar-configuration), you can configure the default behavior of API logging in Dapr runtimes.
### Enable API logging by default
Using the Dapr Configuration spec, you can set the default value for the `--enable-api-logging` flag (and the corresponding annotation when running on Kubernetes) with the `logging.apiLogging.enabled` option. This value applies to all Dapr runtimes that reference the Configuration document or resource in which it's defined.
- If `logging.apiLogging.enabled` is set to `false`, the default value, API logging is disabled for Dapr runtimes unless `--enable-api-logging` is set to `true` (or the `dapr.io/enable-api-logging: true` annotation is added).
- When `logging.apiLogging.enabled` is `true`, Dapr runtimes have API logging enabled by default, and it can be disabled by setting
`--enable-api-logging=false` or with the `dapr.io/enable-api-logging: false` annotation.
For example:
```yaml
logging:
apiLogging:
enabled: true
```
### Omit health checks from API logging
When API logging is enabled, all calls to the Dapr API server are logged, including those to health check endpoints (e.g. `/v1.0/healthz`). Depending on your environment, this may generate multiple log lines per minute and could create unwanted noise.
Using the Dapr Configuration spec, you can configure Dapr not to log calls to health check endpoints when API logging is enabled by setting `logging.apiLogging.omitHealthChecks: true`. The default value is `false`, which means that health check calls are logged in the API logs.
For example:
```yaml
logging:
apiLogging:
omitHealthChecks: true
```

View File

@ -50,45 +50,54 @@ spec:
imagePullPolicy: Always
```
If your pod spec template is annotated correctly and you still don't see the sidecar injected, make sure Dapr was deployed to the cluster before your deployment or pod were deployed.
There are some known cases where this might not work properly:
If this is the case, restarting the pods will fix the issue.
- If your pod spec template is annotated correctly, and you still don't see the sidecar injected, make sure Dapr was deployed to the cluster before your deployment or pod were deployed.
If you are deploying Dapr on a private GKE cluster, sidecar injection does not work without extra steps. See [Setup a Google Kubernetes Engine cluster]({{< ref setup-gke.md >}}).
If this is the case, restarting the pods will fix the issue.
In order to further diagnose any issue, check the logs of the Dapr sidecar injector:
- If you are deploying Dapr on a private GKE cluster, sidecar injection does not work without extra steps. See [Setup a Google Kubernetes Engine cluster]({{< ref setup-gke.md >}}).
```bash
kubectl logs -l app=dapr-sidecar-injector -n dapr-system
```
In order to further diagnose any issue, check the logs of the Dapr sidecar injector:
*Note: If you installed Dapr to a different namespace, replace dapr-system above with the desired namespace*
```bash
kubectl logs -l app=dapr-sidecar-injector -n dapr-system
```
If you are deploying Dapr on Amazon EKS and using an overlay network such as Calico, you will need to set `hostNetwork` parameter to true, this is a limitation of EKS with such CNIs.
*Note: If you installed Dapr to a different namespace, replace dapr-system above with the desired namespace*
You can set this parameter using Helm `values.yaml` file:
- If you are deploying Dapr on Amazon EKS and using an overlay network such as Calico, you will need to set the `hostNetwork` parameter to `true`; this is a limitation of EKS with such CNIs.
```
helm upgrade --install dapr dapr/dapr \
You can set this parameter using the Helm `values.yaml` file:
```
helm upgrade --install dapr dapr/dapr \
--namespace dapr-system \
--create-namespace \
--values values.yaml
```
```
`values.yaml`
```yaml
dapr_sidecar_injector:
hostNetwork: true
```
`values.yaml`
```yaml
dapr_sidecar_injector:
hostNetwork: true
```
or using command line:
or using the command line:
```
helm upgrade --install dapr dapr/dapr \
```
helm upgrade --install dapr dapr/dapr \
--namespace dapr-system \
--create-namespace \
--set dapr_sidecar_injector.hostNetwork=true
```
```
- Make sure the kube API server can reach the following webhook services:
- [Sidecar Mutating Webhook Injector Service](https://github.com/dapr/dapr/blob/44235fe8e8799589bb393a3124d2564db2dd6885/charts/dapr/charts/dapr_sidecar_injector/templates/dapr_sidecar_injector_deployment.yaml#L157) at port __4000__ that is served from the sidecar injector.
- [CRD Conversion Webhook Service](https://github.com/dapr/dapr/blob/44235fe8e8799589bb393a3124d2564db2dd6885/charts/dapr/charts/dapr_operator/templates/dapr_operator_service.yaml#L28) at port __19443__ that is served from the operator.
Check with your cluster administrators to set up allow-ingress
rules for the above ports, __4000__ and __19443__, in the cluster from the kube API servers.
## My pod is in CrashLoopBackoff or another failed state due to the daprd sidecar
@ -225,7 +234,7 @@ export DAPR_HOST_IP=127.0.0.1
This is usually due to one of the following issues:
- You may have defined the `NAMESPACE` environment variable locally or deployed your components into a different namespace in Kubernetes. Check which namespace your app and the components are deployed to. Read [scoping components to one or more applications]({{< ref "component-scopes.md" >}}) for more information.
- You may have not provided a `--components-path` with the Dapr `run` commands or not placed your components into the default components folder for your OS. Read [define a component]({{< ref "get-started-component.md" >}}) for more information.
- You may have not provided a `--resources-path` with the Dapr `run` commands or not placed your components into the default components folder for your OS. Read [define a component]({{< ref "get-started-component.md" >}}) for more information.
- You may have a syntax issue in component YAML file. Check your component YAML with the component [YAML samples]({{< ref "components.md" >}}).
## Service invocation is failing and my Dapr service is missing an appId (macOS)

View File

@ -21,7 +21,7 @@ POST http://localhost:<daprPort>/v1.0-alpha1/lock/<storename>
Parameter | Description
--------- | -----------
`daprPort` | The Dapr port
`storename` | The `metadata.name` field component file. Refer to the [component schema] ({{< ref component-schema.md>}})
`storename` | The `metadata.name` field in the component file. Refer to the [component schema]({{< ref component-schema.md >}})
#### Query Parameters
@ -95,7 +95,7 @@ POST http://localhost:<daprPort>/v1.0-alpha1/unlock/<storename>
Parameter | Description
--------- | -----------
`daprPort` | The Dapr port
`storename` | The `metadata.name` field component file. Refer to the [component schema] ({{< ref component-schema.md>}})
`storename` | The `metadata.name` field in the component file. Refer to the [component schema]({{< ref component-schema.md >}})
#### Query Parameters

View File

@ -16,7 +16,8 @@ This table is meant to help users understand the equivalent options for running
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--app-ssl` | `--app-ssl` | | `dapr.io/app-ssl` | Sets the URI scheme of the app to https and attempts an SSL connection |
| `--components-path` | `--components-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` |
| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration CRD to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |

View File

@ -42,6 +42,7 @@ Available Commands:
stop Stop Dapr instances and their associated apps. Supported platforms: Self-hosted
uninstall Uninstall Dapr runtime. Supported platforms: Kubernetes and self-hosted
upgrade Upgrades a Dapr control plane installation in a cluster. Supported platforms: Kubernetes
version Print the Dapr runtime and CLI version
Flags:
-h, --help help for dapr
@ -73,8 +74,8 @@ You can learn more about each Dapr command from the links below.
- [`dapr stop`]({{< ref dapr-stop.md >}})
- [`dapr uninstall`]({{< ref dapr-uninstall.md >}})
- [`dapr upgrade`]({{< ref dapr-upgrade.md >}})
- [`dapr-version`]({{< ref dapr-version.md >}})
- [`dapr version`]({{< ref dapr-version.md >}})
### Environment Variables
Some Dapr flags can be set via environment variables (e.g. `DAPR_NETWORK` for the `--network` flag of the `dapr init` command). Note that specifying the flag on the command line overrides any set environment variable.
Some Dapr flags can be set via environment variables (e.g. `DAPR_NETWORK` for the `--network` flag of the `dapr init` command). Note that specifying the flag on the command line overrides any set environment variable.

View File

@ -28,7 +28,8 @@ dapr run [flags] [command]
| `--app-port`, `-p` | `APP_PORT` | | The port your application is listening on |
| `--app-protocol`, `-P` | | `http` | The protocol Dapr uses to talk to the application. Valid values are: `http` or `grpc` |
| `--app-ssl` | | `false` | Enable https when Dapr invokes the application |
| `--components-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | The path for components directory |
| `--components-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | **Deprecated** in favor of `--resources-path` |
| `--resources-path`, `-d` | | Linux/Mac: `$HOME/.dapr/components` <br/>Windows: `%USERPROFILE%\.dapr\components` | The path for components directory |
| `--config`, `-c` | | Linux/Mac: `$HOME/.dapr/config.yaml` <br/>Windows: `%USERPROFILE%\.dapr\config.yaml` | Dapr configuration file |
| `--dapr-grpc-port` | `DAPR_GRPC_PORT` | `50001` | The gRPC port for Dapr to listen on |
| `--dapr-http-port` | `DAPR_HTTP_PORT` | `3500` | The HTTP port for Dapr to listen on |

View File

@ -0,0 +1,250 @@
---
type: docs
title: "Cloudflare Queues bindings spec"
linkTitle: "Cloudflare Queues"
description: "Detailed documentation on the Cloudflare Queues component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/cloudflare-queues/"
- "/operations/components/setup-bindings/supported-bindings/cfqueues/"
---
## Component format
This output binding for Dapr allows interacting with [Cloudflare Queues](https://developers.cloudflare.com/queues/) to **publish** new messages. It is currently not possible to consume messages from a Queue using Dapr.
To set up a Cloudflare Queues binding, create a component of type `bindings.cloudflare.queues`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.cloudflare.queues
version: v1
# Increase the initTimeout if Dapr is managing the Worker for you
initTimeout: "120s"
metadata:
# Name of the existing Cloudflare Queue (required)
- name: queueName
value: ""
# Name of the Worker (required)
- name: workerName
value: ""
# PEM-encoded private Ed25519 key (required)
- name: key
value: |
-----BEGIN PRIVATE KEY-----
MC4CAQ...
-----END PRIVATE KEY-----
# Cloudflare account ID (required to have Dapr manage the Worker)
- name: cfAccountID
value: ""
# API token for Cloudflare (required to have Dapr manage the Worker)
- name: cfAPIToken
value: ""
# URL of the Worker (required if the Worker has been pre-created outside of Dapr)
- name: workerUrl
value: ""
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-------|--------|---------|
| `queueName` | Y | Output | Name of the existing Cloudflare Queue | `"mydaprqueue"`
| `key` | Y | Output | Ed25519 private key, PEM-encoded | *See example above*
| `cfAccountID` | Y/N | Output | Cloudflare account ID. Required to have Dapr manage the Worker. | `"456789abcdef8b5588f3d134f74acdef"`
| `cfAPIToken` | Y/N | Output | API token for Cloudflare. Required to have Dapr manage the Worker. | `"secret-key"`
| `workerUrl` | Y/N | Output | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | `"https://mydaprqueue.mydomain.workers.dev"`
> When you configure Dapr to create your Worker for you, you may need to set a longer value for the `initTimeout` property of the component, to allow enough time for the Worker script to be deployed. For example: `initTimeout: "120s"`
## Binding support
This component supports **output binding** with the following operations:
- `publish` (alias: `create`): Publish a message to the Queue.
The data passed to the binding is used as-is for the body of the message published to the Queue.
This operation does not accept any metadata property.
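For example, here is a sketch of invoking the binding through the Dapr HTTP API, assuming a component named `mycfqueue` and the default Dapr port of 3500:

```sh
curl -X POST http://localhost:3500/v1.0/bindings/mycfqueue \
  -H "Content-Type: application/json" \
  -d '{"operation": "publish", "data": "Hello world"}'
```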
## Create a Cloudflare Queue
To use this component, you must have a Cloudflare Queue created in your Cloudflare account.
You can create a new Queue in one of two ways:
<!-- IGNORE_LINKS -->
- Using the [Cloudflare dashboard](https://dash.cloudflare.com/)
- Using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/):
```sh
# Authenticate if needed with `npx wrangler login` first
npx wrangler queues create <NAME>
# For example: `npx wrangler queues create myqueue`
```
<!-- END_IGNORE -->
## Configuring the Worker
Because Cloudflare Queues can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Queue.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on [workerd](https://github.com/cloudflare/workerd).
{{% alert title="Important" color="warning" %}}
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Queues bindings, and do not use the same Worker script for different Cloudflare components in Dapr (for example, the Workers KV state store and the Queues binding).
{{% /alert %}}
{{< tabs "Let Dapr manage the Worker" "Manually provision the Worker script" >}}
{{% codetab %}}
<!-- Let Dapr manage the Worker -->
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
<!-- IGNORE_LINKS -->
- **`workerName`**: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the "workers.dev" domain configured for your Cloudflare account is `mydomain.workers.dev` and you set `workerName` to `mydaprqueue`, the Worker that Dapr deploys will be available at `https://mydaprqueue.mydomain.workers.dev`.
- **`cfAccountID`**: ID of your Cloudflare account. You can find this in your browser's URL bar after logging into the [Cloudflare dashboard](https://dash.cloudflare.com/), with the ID being the hex string right after `dash.cloudflare.com`. For example, if the URL is `https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef`, the value for `cfAccountID` is `456789abcdef8b5588f3d134f74acdef`.
- **`cfAPIToken`**: API token with permission to create and edit Workers. You can create it from the ["API Tokens" page](https://dash.cloudflare.com/profile/api-tokens) in the "My Profile" section in the Cloudflare dashboard:
1. Click on **"Create token"**.
1. Select the **"Edit Cloudflare Workers"** template.
1. Follow the on-screen instructions to generate a new API token.
<!-- END_IGNORE -->
When Dapr is configured to manage the Worker for you, the Dapr runtime checks at startup that the Worker exists and is up to date. If the Worker doesn't exist, or if it's using an outdated version, Dapr creates or upgrades it for you automatically.
{{% /codetab %}}
{{% codetab %}}
<!-- Manually provision the Worker script -->
If you'd rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
1. Create a new folder where you'll place the source code of the Worker, for example: `daprworker`.
2. If you haven't already, authenticate with Wrangler (the Cloudflare Workers CLI) using: `npx wrangler login`.
3. Inside the newly-created folder, create a new `wrangler.toml` file with the contents below, filling in the missing information as appropriate:
```toml
# Name of your Worker, for example "mydaprqueue"
name = ""
# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"
[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprqueue".
TOKEN_AUDIENCE = ""
# Set the next two values to the name of your Queue, for example "myqueue".
# Note that they will both be set to the same value.
[[queues.producers]]
queue = ""
binding = ""
```
> Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the **public** part of the key when deploying a Worker!
4. Copy the (pre-compiled and minified) code of the Worker in the `worker.js` file. You can do that with this command:
```sh
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-{{% dapr-latest-version short="true" %}}"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
```
5. Deploy the Worker using Wrangler:
```sh
npx wrangler publish
```
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
- **`workerName`**: Name of the Worker script. This is the value you set in the `name` property in the `wrangler.toml` file.
- **`workerUrl`**: URL of the deployed Worker. The `npx wrangler` command will show the full URL to you, for example `https://mydaprqueue.mydomain.workers.dev`.
{{% /codetab %}}
{{< /tabs >}}
## Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Cloudflare Queue). These include industry-standard measures such as:
- All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
- All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
- The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
{{< tabs "Generate with OpenSSL" "Generate with the step CLI" >}}
{{% codetab %}}
<!-- Generate with OpenSSL -->
> Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you're using an older version of OpenSSL.
> Note for Mac users: on macOS, the "openssl" binary that is shipped by Apple is actually based on LibreSSL, which as of this writing doesn't support Ed25519 keys. If you're using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using `brew install openssl@3`, then replace `openssl` in the commands below with `$(brew --prefix)/opt/openssl@3/bin/openssl`.
You can generate a new Ed25519 key pair with OpenSSL using:
```sh
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
```
> On macOS, using openssl@3 from Homebrew:
>
> ```sh
> $(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
> $(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem
> ```
{{% /codetab %}}
{{% codetab %}}
<!-- Generate with the step CLI -->
If you don't have the step CLI already, install it following the [official instructions](https://smallstep.com/docs/step-cli/installation).
Next, you can generate a new Ed25519 key pair with the step CLI using:
```sh
step crypto keypair \
public.pem private.pem \
--kty OKP --curve Ed25519 \
--insecure --no-password
```
{{% /codetab %}}
{{< /tabs >}}
Regardless of how you generated your key pair, with the instructions above you'll have two files:
- `private.pem` contains the private part of the key; use the contents of this file for the **`key`** property of the component's metadata.
- `public.pem` contains the public part of the key, which you'll need only if you're deploying a Worker manually (as per the instructions in the previous section).
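If you're provisioning the Worker manually, the `PUBLIC_KEY` variable in `wrangler.toml` expects the public key on a single line, with newlines replaced by `\n`. A small shell one-liner can produce that value; the `awk` approach below is one possible way, not a Dapr requirement:
```sh
# Print public.pem on one line, replacing each newline with a literal "\n"
awk 'NF {printf "%s\\n", $0}' public.pem
```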
{{% alert title="Warning" color="warning" %}}
Protect the private part of your key and treat it as a secret value!
{{% /alert %}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- Documentation for [Cloudflare Queues](https://developers.cloudflare.com/queues/)

View File

@ -51,7 +51,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| clientSecret | Y | Output | The commercetools client secret for the project | `"client secret"` |
| scopes | Y | Output | The commercetools scopes for the project | `"manage_project:project-key"` |
For more information see [commercetools - Creating an API Client](https://docs.commercetools.com/tutorials/getting-started#creating-an-api-client) and [commercetools - Regions](https://docs.commercetools.com/api/general-concepts#regions).
For more information see [commercetools - Creating an API Client](https://docs.commercetools.com/getting-started/create-api-client#create-an-api-client) and [commercetools - Regions](https://docs.commercetools.com/api/general-concepts#regions).
## Binding support
@ -67,4 +67,4 @@ This component supports **output binding** with the following operations:
- [How-To: Trigger application with input binding]({{< ref howto-triggers.md >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- [Sample app](https://github.com/dapr/samples/tree/master/commercetools-graphql-sample) that leverages the commercetools binding with sample GraphQL query

View File

@ -44,7 +44,7 @@ spec:
- name: accessKey
value: "[AccessKey]"
- name: topicEndpoint
value: "[TopicEndpoint]
value: "[TopicEndpoint]"
```
{{% alert title="Warning" color="warning" %}}
@ -102,7 +102,7 @@ _Make sure to also to add quotes around the `[HandshakePort]` in your Event Grid
```bash
# Using random port 9000 as an example
ngrok http -host-header=localhost 9000
ngrok http --host-header=localhost 9000
```
- Configure the ngrok's HTTPS endpoint and custom port to input binding metadata

View File

@ -1,15 +1,16 @@
---
type: docs
title: "MQTT binding spec"
linkTitle: "MQTT"
description: "Detailed documentation on the MQTT binding component"
title: "MQTT3 binding spec"
linkTitle: "MQTT3"
description: "Detailed documentation on the MQTT3 binding component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/mqtt3/"
- "/operations/components/setup-bindings/supported-bindings/mqtt/"
---
## Component format
To setup MQTT binding create a component of type `bindings.mqtt`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
To setup an MQTT3 binding, create a component of type `bindings.mqtt3`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -17,7 +18,7 @@ kind: Component
metadata:
name: <NAME>
spec:
type: bindings.mqtt
type: bindings.mqtt3
version: v1
metadata:
- name: url
@ -63,7 +64,7 @@ kind: Component
metadata:
name: mqtt-binding
spec:
type: bindings.mqtt
type: bindings.mqtt3
version: v1
metadata:
- name: url
@ -103,7 +104,7 @@ metadata:
name: mqtt-binding
namespace: default
spec:
type: bindings.mqtt
type: bindings.mqtt3
version: v1
metadata:
- name: consumerID

View File

@ -66,10 +66,14 @@ The above example uses secrets as plain strings. It is recommended to use a secr
This component supports **output binding** with the following operations:
- `create`
- `get`
- `delete`
### create
You can store a record in Redis using the `create` operation. This sets a key to hold a value. If the key already exists, the value is overwritten.
### Request
#### Request
```json
{
@ -84,10 +88,58 @@ You can store a record in Redis using the `create` operation. This sets a key to
}
```
### Response
#### Response
An HTTP 204 (No Content) and empty body is returned if successful.
### get
You can get a record in Redis using the `get` operation. This gets a key that was previously set.
#### Request
```json
{
"operation": "get",
"metadata": {
"key": "key1"
},
"data": {
}
}
```
#### Response
```json
{
"data": {
"Hello": "World",
"Lorem": "Ipsum"
}
}
```
### delete
You can delete a record in Redis using the `delete` operation. Returns success whether the key exists or not.
#### Request
```json
{
"operation": "delete",
"metadata": {
"key": "key1"
}
}
```
#### Response
An HTTP 204 (No Content) and empty body is returned if successful.
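As a sketch of how these operations are invoked through the Dapr HTTP API (the component name `redis-binding` and the default port 3500 are assumptions):
```sh
# Retrieve the value previously stored under "key1" via a hypothetical
# binding component named "redis-binding"
curl -X POST http://localhost:3500/v1.0/bindings/redis-binding \
  -H "Content-Type: application/json" \
  -d '{"operation": "get", "metadata": {"key": "key1"}}'
```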
## Create a Redis instance
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later.

View File

@ -32,7 +32,7 @@ spec:
# - name: decodeBase64
# value: "false"
# - name: endpoint
# value: "http://127.0.0.1:10000"
# value: "http://127.0.0.1:10001"
```
{{% alert title="Warning" color="warning" %}}
@ -48,7 +48,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `queueName` | Y | Input/Output | The name of the Azure Storage queue | `"myqueue"` |
| `ttlInSeconds` | N | Output | Parameter to set the default message time to live. If this parameter is omitted, messages will expire after 10 minutes. See [also](#specifying-a-ttl-per-message) | `"60"` |
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to Blob Storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10000"` or `"https://accountName.queue.example.com"` |
| `endpoint` | N | Input/Output | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10001"` or `"https://accountName.queue.example.com"` |
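For local development against the Azurite emulator mentioned above, one way to start only its queue service is shown below; this is standard Azurite usage, not a Dapr requirement:
```sh
# Run only Azurite's queue service, which listens on port 10001
docker run -p 10001:10001 mcr.microsoft.com/azure-storage/azurite \
  azurite-queue --queueHost 0.0.0.0
```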
### Azure Active Directory (Azure AD) authentication

View File

@ -49,7 +49,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisPassword | Y | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `""`, `"127.0.0.1:6379"`

View File

@ -84,9 +84,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that you have created the ACL rule correctly. | `""`, `"default"`
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Minimum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinalMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `""`, `"127.0.0.1:6379"`
| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `"mymaster"`
| redeliverInterval | N | The interval between checking for pending messages to redeliver. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
| processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`

View File

@ -14,10 +14,10 @@ extension.
The Wasm [HTTP middleware]({{< ref middleware.md >}}) allows you to rewrite a
request URI with custom logic compiled to a Wasm binary. In other words, you
can extend Dapr using external files that are not pre-compiled into the `daprd`
binary. Dapr embeds [wazero][https://wazero.io] to accomplish this without CGO.
binary. Dapr embeds [wazero](https://wazero.io) to accomplish this without CGO.
Wasm modules are loaded from a filesystem path. On Kubernetes, see [mounting
volumes to the Dapr sidecar]({{> kubernetes-volume-mounts.md >}}) to configure
volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}}) to configure
a filesystem mount that can contain Wasm modules.
## Component format

View File

@ -34,6 +34,8 @@ spec:
secretKeyRef:
name: kafka-secrets
key: saslPasswordSecret
- name: saslMechanism
value: "SHA-512"
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
@ -55,6 +57,7 @@ spec:
| authType | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, or `oidc` | `"password"`, `"none"`
| saslUsername | N | The SASL username used for authentication. Only required if `authType` is set to `"password"`. | `"adminuser"`
| saslPassword | N | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authType` is set to `"password"`. | `""`, `"KeFg23!"`
| saslMechanism | N | The SASL Authentication Mechanism you wish to use. Only required if `authType` is set to `"password"`. Defaults to `PLAINTEXT` | `"SHA-512", "SHA-256", "PLAINTEXT"`
| initialOffset | N | The initial offset to use if no offset was previously committed. Should be "newest" or "oldest". Defaults to "newest". | `"oldest"`
| maxMessageBytes | N | The maximum size in bytes allowed for a single Kafka message. Defaults to 1024. | `2048`
| consumeRetryInterval | N | The interval between retries when attempting to consume topics. Treats numbers without suffix as milliseconds. Defaults to 100ms. | `200ms` |
@ -111,8 +114,7 @@ spec:
#### SASL Password
Setting `authType` to `password` enables [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication using the **PLAIN** mechanism. This requires setting
the `saslUsername` and `saslPassword` fields.
Setting `authType` to `password` enables [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication. This requires setting the `saslUsername` and `saslPassword` fields.
```yaml
apiVersion: dapr.io/v1alpha1
@ -137,6 +139,8 @@ spec:
secretKeyRef:
name: kafka-secrets
key: saslPasswordSecret
- name: saslMechanism
value: "SHA-512"
- name: maxMessageBytes # Optional.
value: 1024
- name: consumeRetryInterval # Optional.
@ -333,7 +337,7 @@ To run without Docker, see the getting started guide [here](https://kafka.apache
{{% /codetab %}}
{{% codetab %}}
To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](https://strimzi.io/docs/operators/latest/quickstart.html#ref-install-prerequisites-str).
To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](https://strimzi.io/quickstarts/).
{{% /codetab %}}
{{< /tabs >}}

View File

@ -105,6 +105,14 @@ spec:
value: "myeventhubstoragecontainer"
```
## Sending multiple messages
Azure Event Hubs natively supports sending multiple messages in a single operation. To set the metadata for bulk operations, set the query parameters on the HTTP request or the gRPC metadata as documented [here]({{< ref pubsub_api >}}).
| Metadata | Default |
|----------|---------|
| `metadata.maxBulkPubBytes` | `1000000` |
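As a sketch (the pub/sub component name `eventhubs-pubsub`, the topic, and the payloads are assumptions), a bulk publish over HTTP with a custom byte limit could look like:
```sh
# Bulk publish via the alpha HTTP endpoint, capping the batch at ~500 KB
curl -X POST "http://localhost:3500/v1.0-alpha1/publish/bulk/eventhubs-pubsub/mytopic?metadata.maxBulkPubBytes=500000" \
  -H "Content-Type: application/json" \
  -d '[
        {"entryId": "1", "event": "first message", "contentType": "text/plain"},
        {"entryId": "2", "event": "second message", "contentType": "text/plain"}
      ]'
```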
## Create an Azure Event Hub
Follow the instructions [here](https://docs.microsoft.com/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.

View File

@ -7,6 +7,10 @@ aliases:
- "/operations/components/setup-pubsub/supported-pubsub/setup-hazelcast/"
---
{{% alert title="Deprecation notice" color="warning" %}}
The Hazelcast PubSub component has been deprecated due to its inherent lack of support for the "at least once" delivery guarantee, and will be removed in a future Dapr release.
{{% /alert %}}
## Component format
To setup Hazelcast pubsub, create a component of type `pubsub.hazelcast`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.

View File

@ -46,8 +46,6 @@ spec:
value: 1
- name: startTime # In Unix format
value: 1630349391
- name: deliverAll
value: false
- name: flowControl
value: false
- name: ackWait
@ -64,10 +62,16 @@ spec:
value: false
- name: rateLimit
value: 1024
- name: hearbeat
- name: heartbeat
value: 15s
- name: ackPolicy
value: explicit
- name: deliverPolicy
value: all
- name: domain
value: hub
- name: apiPrefix
value: PREFIX
```
## Spec metadata fields
@ -86,7 +90,6 @@ spec:
| queueGroupName | N | Queue group name | `"my-queue"` |
| startSequence | N | [Start Sequence] | `1` |
| startTime | N | [Start Time] in Unix format | `1630349391` |
| deliverAll | N | Set deliver all as [Replay Policy] | `true` |
| flowControl | N | [Flow Control] | `true` |
| ackWait | N | [Ack Wait] | `10s` |
| maxDeliver | N | [Max Deliver] | `15` |
@ -95,8 +98,11 @@ spec:
| replicas | N | [Replicas] | `3` |
| memoryStorage | N | [Memory Storage] | `false` |
| rateLimit | N | [Rate Limit] | `1024` |
| hearbeat | N | [Hearbeat] | `10s` |
| heartbeat | N | [Heartbeat] | `10s` |
| ackPolicy | N | [Ack Policy] | `explicit` |
| deliverPolicy | N | One of: all, last, new, sequence, time | `all` |
| domain | N | [JetStream Leafnodes] | `HUB` |
| apiPrefix | N | [JetStream Leafnodes] | `PREFIX` |
## Create a NATS server
@ -160,7 +166,8 @@ nats -s localhost:4222 stream add myStream --subjects mySubject
[Replicas]: https://docs.nats.io/jetstream/concepts/consumers#replicas
[Memory Storage]: https://docs.nats.io/jetstream/concepts/consumers#memorystorage
[Rate Limit]: https://docs.nats.io/jetstream/concepts/consumers#ratelimit
[Hearbeat]: https://docs.nats.io/jetstream/concepts/consumers#hearbeat
[Heartbeat]: https://docs.nats.io/jetstream/concepts/consumers#heartbeat
[Ack Policy]: https://docs.nats.io/nats-concepts/jetstream/consumers#ackpolicy
[JetStream Leafnodes]: https://docs.nats.io/running-a-nats-service/configuration/leafnodes/jetstream_leafnodes
[Decentralized JWT Authentication/Authorization]: https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/jwt
[NATS token based authentication]: https://docs.nats.io/running-a-nats-service/configuration/securing_nats/auth_intro/tokens

View File

@ -1,15 +1,16 @@
---
type: docs
title: "MQTT"
linkTitle: "MQTT"
description: "Detailed documentation on the MQTT pubsub component"
title: "MQTT3"
linkTitle: "MQTT3"
description: "Detailed documentation on the MQTT3 pubsub component"
aliases:
- "/operations/components/setup-pubsub/supported-pubsub/setup-mqtt3/"
- "/operations/components/setup-pubsub/supported-pubsub/setup-mqtt/"
---
## Component format
To setup MQTT pubsub create a component of type `pubsub.mqtt`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration
To setup an MQTT3 pubsub, create a component of type `pubsub.mqtt3`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -17,7 +18,7 @@ kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt
type: pubsub.mqtt3
version: v1
metadata:
- name: url
@ -49,8 +50,6 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"`, which means that no retries will be attempted. `"-1"` can be specified to indicate that messages should be retried indefinitely until they are successfully processed or the application is shutdown. The component will wait 5 seconds between retries. | `"3"`
### Communication using TLS
To configure communication using TLS, ensure that the MQTT broker (e.g. mosquitto) is configured to support certificates and provide the `caCert`, `clientCert`, `clientKey` metadata in the component configuration. For example:
@ -61,7 +60,7 @@ kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt
type: pubsub.mqtt3
version: v1
metadata:
- name: url
@ -98,7 +97,7 @@ kind: Component
metadata:
name: mqtt-pubsub
spec:
type: pubsub.mqtt
type: pubsub.mqtt3
version: v1
metadata:
- name: consumerID
@ -121,7 +120,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
Note that in this case the value of the consumer ID is random every time Dapr restarts, so we set `cleanSession` to `true` as well.
## Create a MQTT broker
## Create a MQTT3 broker
{{< tabs "Self-Hosted" "Kubernetes">}}
@ -136,7 +135,7 @@ You can then interact with the server using the client port: `mqtt://localhost:1
{{% /codetab %}}
{{% codetab %}}
You can run a MQTT broker in kubernetes using following yaml:
You can run an MQTT3 broker in Kubernetes using the following YAML:
```yaml
apiVersion: apps/v1

View File

@ -40,12 +40,6 @@ spec:
value: parallel
- name: publisherConfirm
value: false
- name: backOffPolicy
value: exponential
- name: backOffInitialInterval
value: 100
- name: backOffMaxRetries
value: 16
- name: enableDeadLetter # Optional enable dead Letter or not
value: true
- name: maxLen # Optional max message count in a queue
@ -75,24 +69,22 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| publisherConfirm | N | If enabled, client waits for [publisher confirms](https://www.rabbitmq.com/confirms.html#publisher-confirms) after publishing a message. Defaults to `"false"` | `"true"`, `"false"`
| reconnectWait | N | How long to wait (in seconds) before reconnecting if a connection failure occurs | `"0"`
| concurrencyMode | N | `parallel` is the default, and allows processing multiple messages in parallel (limited by the `app-max-concurrency` annotation, if configured). Set to `single` to disable parallel processing. In most situations there's no reason to change this. | `parallel`, `single`
| backOffPolicy | N | Retry policy, `"constant"` is a backoff policy that always returns the same backoff delay. `"exponential"` is a backoff policy that increases the backoff period for each retry attempt using a randomization function that grows exponentially. Defaults to `"constant"`. | `constant`、`exponential` |
| backOffDuration | N | The fixed interval only takes effect when the policy is constant. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"5s"`. | `"5s"`、`"5000"` |
| backOffInitialInterval | N | The backoff initial interval on retry. Only takes effect when the policy is exponential. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"500"` | `"50"` |
| backOffMaxInterval | N | The backoff initial interval on retry. Only takes effect when the policy is exponential. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"60s"` | `"60000"` |
| backOffMaxRetries | N | The maximum number of retries to process the message before returning an error. Defaults to `"0"` which means the component will not retry processing the message. `"-1"` will retry indefinitely until the message is processed or the application is shutdown. Any positive number is treated as the maximum retry count. | `"3"` |
| backOffRandomizationFactor | N | Randomization factor, between 1 and 0, including 0 but not 1. Randomized interval = RetryInterval * (1 ± backOffRandomizationFactor). Defaults to `"0.5"`. | `"0.5"` |
| backOffMultiplier | N | Backoff multiplier for the policy. Increments the interval by multiplying it with the multiplier. Defaults to `"1.5"` | `"1.5"` |
| backOffMaxElapsedTime | N | After MaxElapsedTime the ExponentialBackOff returns Stop. There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that will be processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Defaults to `"15m"` | `"15m"` |
| enableDeadLetter | N | Enable forwarding Messages that cannot be handled to a dead-letter topic. Defaults to `"false"` | `"true"`, `"false"` |
| maxLen | N | The maximum number of messages of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1000"` |
| maxLenBytes | N | Maximum length in bytes of a queue and its dead letter queue (if dead letter enabled). If both `maxLen` and `maxLenBytes` are set then both will apply; whichever limit is hit first will be enforced. Defaults to no limit. | `"1048576"` |
| exchangeKind | N | Exchange kind of the rabbitmq exchange. Defaults to `"fanout"`. | `"fanout"`,`"topic"` |
### Backoff policy introduction
### Enabling message delivery retries
Backoff retry strategy can instruct the dapr sidecar how to resend the message. By default, the retry strategy is turned off, which means that the sidecar will send a message to the service once. When the service returns a result, the message will be marked as consumption regardless of whether it is correct or not. The above is based on the condition of `autoAck` and `requeueInFailure` is setting to false(if `requeueInFailure` is set to true, the message will get a second chance).
The RabbitMQ pub/sub component has no built-in support for retry strategies. This means that the sidecar sends a message to the service only once. When the service returns a result, the message will be marked as consumed regardless of whether it was processed correctly or not. Note that this is common among all Dapr PubSub components and not just RabbitMQ.
Dapr can try redelivering a message a second time, when `autoAck` is set to `false` and `requeueInFailure` is set to `true`.
But in some cases, you may want dapr to retry pushing message with an (exponential or constant) backoff strategy until the message is processed normally or the number of retries is exhausted. This maybe useful when your service breaks down abnormally but the sidecar is not stopped together. Adding backoff policy will retry the message pushing during the service downtime, instead of marking these message as consumed.
To make Dapr use more sophisticated retry policies, you can apply a [retry resiliency policy]({{< ref "policies.md#retries" >}}) to the RabbitMQ pub/sub component.
There is a crucial difference between the two ways to retry messages:
1. When using `autoAck = false` and `requeueInFailure = true`, RabbitMQ is the one responsible for re-delivering messages and _any_ subscriber can get the redelivered message. If you have more than one instance of your consumer, then it's possible that another consumer will get it. This is usually the better approach because if there's a transient failure, it's more likely that a different worker will be in a better position to successfully process the message.
2. Using Resiliency makes the same Dapr sidecar retry redelivering the messages. So it will be the same Dapr sidecar and the same app receiving the same message.
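As a sketch of the resiliency approach (the component name `rabbitmq-pubsub`, the policy name, and the retry values are all hypothetical; see the resiliency docs for the full schema), you could save a retry policy alongside your other resources:
```sh
# Write a minimal resiliency spec targeting a hypothetical
# "rabbitmq-pubsub" component; values are illustrative only
cat > components/resiliency.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    retries:
      pubsubRetry:
        policy: constant
        duration: 5s
        maxRetries: 10
  targets:
    components:
      rabbitmq-pubsub:
        inbound:
          retry: pubsubRetry
EOF
```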
## Create a RabbitMQ server

View File

@ -0,0 +1,113 @@
---
type: docs
title: "Solace-AMQP"
linkTitle: "Solace-AMQP"
description: "Detailed documentation on the Solace-AMQP pubsub component"
aliases:
- "/operations/components/setup-pubsub/supported-pubsub/setup-solace-amqp/"
---
## Component format
To setup Solace-AMQP pub/sub, create a component of type `pubsub.solace.amqp`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pub/sub configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: solace
spec:
type: pubsub.solace.amqp
version: v1
metadata:
- name: url
value: 'amqp://localhost:5672'
- name: username
value: 'default'
- name: password
value: 'default'
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| url | Y | Address of the AMQP broker. Can be `secretKeyRef` to use a secret reference. <br> Use the **`amqp://`** URI scheme for non-TLS communication. <br> Use the **`amqps://`** URI scheme for TLS communication. | `"amqp://host.domain[:port]"`
| username | Y | The username to connect to the broker. Only required if anonymous is not specified or set to `false`. | `default`
| password | Y | The password to connect to the broker. Only required if anonymous is not specified or set to `false`. | `default`
| anonymous | N | To connect to the broker without credential validation. Only works if enabled on the broker. A username and password would not be required if this is set to `true`. | `true`
| caCert | Required for using TLS | Certificate Authority (CA) certificate in PEM format for verifying server TLS certificates. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientCert | Required for using TLS | TLS client certificate in PEM format. Must be used with `clientKey`. | `"-----BEGIN CERTIFICATE-----\n<base64-encoded DER>\n-----END CERTIFICATE-----"`
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
### Communication using TLS
To configure communication using TLS:
1. Ensure that the Solace broker is configured to support certificates.
1. Provide the `caCert`, `clientCert`, and `clientKey` metadata in the component configuration.
For example:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: solace
spec:
type: pubsub.solace.amqp
version: v1
metadata:
- name: url
value: "amqps://host.domain[:port]"
- name: username
value: 'default'
- name: password
value: 'default'
- name: caCert
value: ${{ myLoadedCACert }}
- name: clientCert
value: ${{ myLoadedClientCert }}
- name: clientKey
secretKeyRef:
name: mySolaceClientKey
key: mySolaceClientKey
auth:
secretStore: <SECRET_STORE_NAME>
```
> While the `caCert` and `clientCert` values may not be secrets, they can be referenced from a Dapr secret store as well for convenience.
### Publishing/subscribing to topics and queues
By default, messages are published and subscribed over topics. If you would like your destination to be a queue, prefix the topic with `queue:` and the Solace AMQP component will connect to a queue.
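For example (a sketch: the component name `solace` matches the sample above, while the queue name and payload are assumptions), publishing to a queue rather than a topic:
```sh
# The "queue:" prefix makes the component target a queue named "myqueue"
curl -X POST http://localhost:3500/v1.0/publish/solace/queue:myqueue \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'
```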
## Create a Solace broker
{{< tabs "Self-Hosted" "SaaS">}}
{{% codetab %}}
You can run a Solace broker [locally using Docker](https://hub.docker.com/r/solace/solace-pubsub-standard):
```bash
docker run -d -p 8080:8080 -p 55554:55555 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 -p 9000:9000 -p 2222:2222 --shm-size=2g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --name=solace solace/solace-pubsub-standard
```
You can then interact with the server using the client port: `amqp://localhost:5672`
{{% /codetab %}}
{{% codetab %}}
You can also sign up for a free SaaS broker on [Solace Cloud](https://console.solace.cloud/login/new-account?product=event-streaming).
{{% /codetab %}}
{{< /tabs >}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components
- [Pub/sub building block]({{< ref pubsub >}})

View File

@ -2,7 +2,7 @@
type: docs
title: "AWS Secrets Manager"
linkTitle: "AWS Secrets Manager"
description: Detailed information on the decret store component
description: Detailed information on the secret store component
aliases:
- "/operations/components/setup-secret-store/supported-secret-stores/aws-secret-manager/"
---

View File

@ -53,7 +53,7 @@ Additionally, you must provide the authentication fields as explained in the [Au
### Prerequisites
- [Azure Subscription](https://azure.microsoft.com/free/)
- Azure Subscription
- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli)
- [jq](https://stedolan.github.io/jq/download/)
- The scripts below are optimized for a bash or zsh shell

View File

@ -40,7 +40,6 @@ spec:
|--------------------|:--------:|-------------------------------------------------------------------------|--------------------------|
| secretsFile | Y | The path to the file where secrets are stored | `"path/to/file.json"` |
| nestedSeparator | N | Used by the store when flattening the JSON hierarchy to a map. Defaults to `":"` | `":"`
| multiValued | N | Allows one level of multi-valued key/value pairs before flattening JSON hierarchy. Defaults to `"false"` | `"true"` |
| multiValued | N | `"true"` sets the `multipleKeyValuesPerSecret` behavior. Allows one level of multi-valued key/value pairs before flattening JSON hierarchy. Defaults to `"false"` | `"true"` |
## Setup JSON file to hold the secrets

View File

@ -54,17 +54,17 @@ The above example uses secrets as plain strings. It is recommended to use a loca
| Field | Required | Details | Example |
|--------------------|:--------:|--------------------------------|---------------------|
| vaultAddr | N | The address of the Vault server. Defaults to `"https://127.0.0.1:8200"` | `"https://127.0.0.1:8200"` |
| caCert | N | Certificate Authority use only one of the options. The encoded cacerts to use | `"cacerts"` |
| caPath | N | Certificate Authority use only one of the options. The path to a CA cert file | `"path/to/cacert/file"` |
| caPem | N | Certificate Authority use only one of the options. The encoded cacert pem to use | `"encodedpem"` |
| caPem | N | The inlined contents of the CA certificate to use, in PEM format. If defined, takes precedence over `caPath` and `caCert`. | See below |
| caPath | N | The path to a folder holding the CA certificate file to use, in PEM format. If the folder contains multiple files, only the first file found will be used. If defined, takes precedence over `caCert`. | `"path/to/cacert/holding/folder"` |
| caCert | N | The path to the CA certificate to use, in PEM format. | `"path/to/cacert.pem"` |
| skipVerify | N | Skip TLS verification. Defaults to `"false"` | `"true"`, `"false"` |
| tlsServerName | N | TLS config server name | `"tls-server"` |
| tlsServerName | N | The name of the server requested during TLS handshake in order to support virtual hosting. This value is also used to verify the TLS certificate presented by Vault server. | `"tls-server"` |
| vaultTokenMountPath | Y | Path to file containing token | `"path/to/file"` |
| vaultToken | Y | [Token](https://learn.hashicorp.com/tutorials/vault/tokens) for authentication within Vault. | `"tokenValue"` |
| vaultKVPrefix | N | The prefix in vault. Defaults to `"dapr"` | `"dapr"`, `"myprefix"` |
| vaultKVUsePrefix | N | If false, vaultKVPrefix is forced to be empty. If the value is not given or set to true, vaultKVPrefix is used when accessing the vault. Setting it to false is needed to be able to use the BulkGetSecret method of the store. | `"true"`, `"false"` |
| enginePath | N | The [engine](https://www.vaultproject.io/api-docs/secret/kv/kv-v2) path in vault. Defaults to `"secret"` | `"kv"`, `"any"` |
| vaultValueType | N | Vault value type. `map` means to parse the value into `map[string]string`, `text` means to use the value as a string. 'map' sets the `multipleKeyValuesPerSecret` behavior. `text' makes Vault behave as a secret store with name/value semantics. Defaults to `"map"` | `"map"`, `"text"` |
| vaultValueType | N | Vault value type. `map` means to parse the value into `map[string]string`, `text` means to use the value as a string. 'map' sets the `multipleKeyValuesPerSecret` behavior. `text` makes Vault behave as a secret store with name/value semantics. Defaults to `"map"` | `"map"`, `"text"` |
## Setup Hashicorp Vault instance
@ -109,9 +109,37 @@ $ curl http://localhost:3501/v1.0/secrets/my-hashicorp-vault/mysecret
}
```
Notice that the name of the secret (`mysecret`) is not repeated in the result.
## TLS Server verification
The fields `skipVerify`, `tlsServerName`, `caCert`, `caPath`, and `caPem` control if and how Dapr verifies the vault server's certificate while connecting using TLS/HTTPS.
### Inline CA PEM `caPem`
The `caPem` field value should be the contents of the PEM CA certificate you want to use. Given PEM certificates are made of multiple lines, defining that value might seem challenging at first. YAML allows for a few ways of [defining multiline values](https://yaml-multiline.info/).
Below is one way to define a `caPem` field.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: vault
spec:
type: secretstores.hashicorp.vault
version: v1
metadata:
- name: vaultAddr
value: https://127.0.0.1:8200
- name: caPem
value: |-
-----BEGIN CERTIFICATE-----
<< the rest of your PEM file contents here, indented appropriately >>
-----END CERTIFICATE-----
```
## Related links
- [Secrets building block]({{< ref secrets >}})
- [How-To: Retrieve a secret]({{< ref "howto-secrets.md" >}})

View File

@ -133,7 +133,7 @@ kubectl apply -f azureblob.yaml
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--resources-path`.
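For example (the app ID is a placeholder):
```sh
# Assumes the component YAML lives in ./components under the working directory
dapr run --app-id myapp --resources-path ./components
```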
This state store creates a blob file in the container and puts raw state inside it.

View File

@ -168,62 +168,6 @@ az cosmosdb sql role assignment create \
--role-definition-id "$ROLE_ID"
```
### Creating the stored procedures for Dapr
When using Cosmos DB as a state store for Dapr, we need to create two stored procedures in your collection. When you configure the state store using a "master key", Dapr creates those for you, automatically. However, when your state store authenticates with Cosmos DB using Azure AD, because of limitations in the platform we are not able to do it automatically.
If you are using Azure AD to authenticate your Cosmos DB state store and have not created the stored procedures (or if you are using an outdated version of them), your Dapr sidecar will fail to start and you will see an error similar to this in your logs:
```text
Dapr requires stored procedures created in Cosmos DB before it can be used as state store. Those stored procedures are currently not existing or are using a different version than expected. When you authenticate using Azure AD we cannot automatically create them for you: please start this state store with a Cosmos DB master key just once so we can create the stored procedures for you; otherwise, you can check our docs to learn how to create them yourself: https://aka.ms/dapr/cosmosdb-aad
```
To fix this issue, you have two options:
1. Configure your component to authenticate with the "master key" just once, to have Dapr automatically initialize the stored procedures for you. While you need to use a "master key" the first time you launch your application, you should be able to remove that and use Azure AD credentials (including Managed Identities) after.
2. Alternatively, you can follow the steps below to create the stored procedures manually. These steps must be performed before you can start your application the first time.
To create the stored procedures manually, you can use the commands below.
First, download the code of the stored procedures for the version of Dapr that you're using. This will create two `.js` files in your working directory:
```sh
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-{{% dapr-latest-version short="true" %}}"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/state/azure/cosmosdb/storedprocedures/__daprver__.js"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/state/azure/cosmosdb/storedprocedures/__dapr_v2__.js"
```
> You won't need to update the code for the stored procedures every time you update Dapr. Although the code for the stored procedures doesn't change often, sometimes we may make updates to that: when that happens, if you're using Azure AD authentication your Dapr sidecar will fail to launch until you update the stored procedures, re-running the commands above.
Then, using the Azure CLI create the stored procedures in Cosmos DB, for your account, database, and collection (or container):
```sh
# Name of the Resource Group that contains your Cosmos DB
RESOURCE_GROUP="..."
# Name of your Cosmos DB account
ACCOUNT_NAME="..."
# Name of your database in the Cosmos DB account
DATABASE_NAME="..."
# Name of the container (collection) in your database
CONTAINER_NAME="..."
az cosmosdb sql stored-procedure create \
--resource-group "$RESOURCE_GROUP" \
--account-name "$ACCOUNT_NAME" \
--database-name "$DATABASE_NAME" \
--container-name "$CONTAINER_NAME" \
--name "__daprver__" \
--body @__daprver__.js
az cosmosdb sql stored-procedure create \
--resource-group "$RESOURCE_GROUP" \
--account-name "$ACCOUNT_NAME" \
--database-name "$DATABASE_NAME" \
--container-name "$CONTAINER_NAME" \
--name "__dapr_v2__" \
--body @__dapr_v2__.js
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})

View File

@ -0,0 +1,250 @@
---
type: docs
title: "Cloudflare Workers KV"
linkTitle: "Cloudflare Workers KV"
description: Detailed information on the Cloudflare Workers KV state store component
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-cloudflare-workerskv/"
---
## Create a Dapr component
To setup a [Cloudflare Workers KV](https://developers.cloudflare.com/workers/learning/how-kv-works/) state store, create a component of type `state.cloudflare.workerskv`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.cloudflare.workerskv
version: v1
# Increase the initTimeout if Dapr is managing the Worker for you
initTimeout: "120s"
metadata:
# ID of the Workers KV namespace (required)
- name: kvNamespaceID
value: ""
# Name of the Worker (required)
- name: workerName
value: ""
# PEM-encoded private Ed25519 key (required)
- name: key
value: |
-----BEGIN PRIVATE KEY-----
MC4CAQ...
-----END PRIVATE KEY-----
# Cloudflare account ID (required to have Dapr manage the Worker)
- name: cfAccountID
value: ""
# API token for Cloudflare (required to have Dapr manage the Worker)
- name: cfAPIToken
value: ""
# URL of the Worker (required if the Worker has been pre-created outside of Dapr)
- name: workerUrl
value: ""
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `kvNamespaceID` | Y | ID of the pre-created Workers KV namespace | `"123456789abcdef8b5588f3d134f74ac"`
| `workerName` | Y | Name of the Worker to connect to | `"mydaprkv"`
| `key` | Y | Ed25519 private key, PEM-encoded | *See example above*
| `cfAccountID` | Y/N | Cloudflare account ID. Required to have Dapr manage the worker. | `"456789abcdef8b5588f3d134f74acdef"`
| `cfAPIToken` | Y/N | API token for Cloudflare. Required to have Dapr manage the Worker. | `"secret-key"`
| `workerUrl` | Y/N | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | `"https://mydaprkv.mydomain.workers.dev"`
> When you configure Dapr to create your Worker for you, you may need to set a longer value for the `initTimeout` property of the component, to allow enough time for the Worker script to be deployed. For example: `initTimeout: "120s"`
## Create a Workers KV namespace
To use this component, you must have a Workers KV namespace created in your Cloudflare account.
You can create a new Workers KV namespace in one of two ways:
<!-- IGNORE_LINKS -->
- Using the [Cloudflare dashboard](https://dash.cloudflare.com/)
Make note of the "ID" of the Workers KV namespace that you can see in the dashboard. This is a hex string (for example `123456789abcdef8b5588f3d134f74ac`)not the name you used when you created it!
- Using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/):
```sh
# Authenticate if needed with `npx wrangler login` first
npx wrangler kv:namespace create <NAME>
```
The output contains the ID of the namespace, for example:
```text
{ binding = "<NAME>", id = "123456789abcdef8b5588f3d134f74ac" }
```
<!-- END_IGNORE -->
## Configuring the Worker
Because Cloudflare Workers KV namespaces can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Workers KV storage.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on [workerd](https://github.com/cloudflare/workerd).
{{% alert title="Important" color="warning" %}}
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Workers KV state store components, and do not use the same Worker script for different Cloudflare components in Dapr (e.g. the Workers KV state store and the Queues binding).
{{% /alert %}}
{{< tabs "Let Dapr manage the Worker" "Manually provision the Worker script" >}}
{{% codetab %}}
<!-- Let Dapr manage the Worker -->
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
<!-- IGNORE_LINKS -->
- **`workerName`**: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the "workers.dev" domain configured for your Cloudflare account is `mydomain.workers.dev` and you set `workerName` to `mydaprkv`, the Worker that Dapr deploys will be available at `https://mydaprkv.mydomain.workers.dev`.
- **`cfAccountID`**: ID of your Cloudflare account. You can find this in your browser's URL bar after logging into the [Cloudflare dashboard](https://dash.cloudflare.com/), with the ID being the hex string right after `dash.cloudflare.com`. For example, if the URL is `https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef`, the value for `cfAccountID` is `456789abcdef8b5588f3d134f74acdef`.
- **`cfAPIToken`**: API token with permission to create and edit Workers and Workers KV namespaces. You can create it from the ["API Tokens" page](https://dash.cloudflare.com/profile/api-tokens) in the "My Profile" section in the Cloudflare dashboard:
1. Click on **"Create token"**.
1. Select the **"Edit Cloudflare Workers"** template.
1. Follow the on-screen instructions to generate a new API token.
<!-- END_IGNORE -->
When Dapr is configured to manage the Worker for you, the Dapr runtime checks at startup that the Worker exists and is up to date. If the Worker doesn't exist, or if it's using an outdated version, Dapr creates or upgrades it for you automatically.
{{% /codetab %}}
{{% codetab %}}
<!-- Manually provision the Worker script -->
If you'd rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
1. Create a new folder where you'll place the source code of the Worker, for example: `daprworker`.
2. If you haven't already, authenticate with Wrangler (the Cloudflare Workers CLI) using: `npx wrangler login`.
3. Inside the newly-created folder, create a new `wrangler.toml` file with the contents below, filling in the missing information as appropriate:
```toml
# Name of your Worker, for example "mydaprkv"
name = ""
# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"
[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprkv".
TOKEN_AUDIENCE = ""
[[kv_namespaces]]
# Set the next two values to the ID (not name) of your KV namespace, for example "123456789abcdef8b5588f3d134f74ac".
# Note that they will both be set to the same value.
binding = ""
id = ""
```
> Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the **public** part of the key when deploying a Worker!
4. Copy the (pre-compiled and minified) code of the Worker in the `worker.js` file. You can do that with this command:
```sh
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-{{% dapr-latest-version short="true" %}}"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
```
5. Deploy the Worker using Wrangler:
```sh
npx wrangler publish
```
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
- **`workerName`**: Name of the Worker script. This is the value you set in the `name` property in the `wrangler.toml` file.
- **`workerUrl`**: URL of the deployed Worker. The `npx wrangler` command will show the full URL to you, for example `https://mydaprkv.mydomain.workers.dev`.
{{% /codetab %}}
{{< /tabs >}}
## Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Worker KV namespace). These include industry-standard measures such as:
- All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
- All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
- The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
{{< tabs "Generate with OpenSSL" "Generate with the step CLI" >}}
{{% codetab %}}
<!-- Generate with OpenSSL -->
> Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you're using an older version of OpenSSL.
> Note for Mac users: on macOS, the "openssl" binary that is shipped by Apple is actually based on LibreSSL, which as of this writing doesn't support Ed25519 keys. If you're using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using `brew install openssl@3`, then replace `openssl` in the commands below with `$(brew --prefix)/opt/openssl@3/bin/openssl`.
You can generate a new Ed25519 key pair with OpenSSL using:
```sh
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
```
> On macOS, using openssl@3 from Homebrew:
>
> ```sh
> $(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
> $(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem
> ```
{{% /codetab %}}
{{% codetab %}}
<!-- Generate with the step CLI -->
If you don't have the step CLI already, install it following the [official instructions](https://smallstep.com/docs/step-cli/installation).
Next, you can generate a new Ed25519 key pair with the step CLI using:
```sh
step crypto keypair \
public.pem private.pem \
--kty OKP --curve Ed25519 \
--insecure --no-password
```
{{% /codetab %}}
{{< /tabs >}}
Regardless of how you generated your key pair, with the instructions above you'll have two files:
- `private.pem` contains the private part of the key; use the contents of this file for the **`key`** property of the component's metadata.
- `public.pem` contains the public part of the key, which you'll need only if you're deploying a Worker manually (as per the instructions in the previous section).
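The `PUBLIC_KEY` value in `wrangler.toml` expects the PEM on a single line with newlines replaced by `\n`. As a sketch, one way to produce that from `public.pem` (any equivalent tool works):
```sh
# Print public.pem as one line, replacing each newline with a literal "\n"
awk 'NF {sub(/\r/, ""); printf "%s\\n", $0}' public.pem
```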
{{% alert title="Warning" color="warning" %}}
Protect the private part of your key and treat it as a secret value!
{{% /alert %}}
## Additional notes
- Note that Cloudflare Workers KV doesn't guarantee strong data consistency. Although changes are usually visible immediately for requests made to the same Cloudflare datacenter, it can take up to one minute for changes to be replicated across all Cloudflare regions.
- This state store supports TTLs with Dapr, but the minimum value for the TTL is 1 minute.
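As a sketch, a request that stores a value with a two-minute TTL uses the standard `ttlInSeconds` metadata; the store name and key below are placeholders:
```sh
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "mykey",
          "value": "myvalue",
          "metadata": {
            "ttlInSeconds": "120"
          }
        }
      ]'
```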
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})
- Documentation for [Cloudflare Workers KV](https://developers.cloudflare.com/workers/learning/how-kv-works/)

View File

@ -21,7 +21,7 @@ spec:
version: v1
metadata:
- name: table
value: "mytable"
value: "Contracts"
- name: accessKey
value: "AKIAIOSFODNN7EXAMPLE" # Optional
- name: secretKey
@ -34,6 +34,8 @@ spec:
value: "myTOKEN" # Optional
- name: ttlAttributeName
value: "expiresAt" # Optional
- name: partitionKey
value: "ContractID" # Optional
```
{{% alert title="Warning" color="warning" %}}
@ -42,19 +44,20 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Primary Key
In order to use DynamoDB as a Dapr state store, the table must have a primary key named `key`.
In order to use DynamoDB as a Dapr state store, the table must have a primary key named `key`. See the section [Partition Keys]({{< ref "setup-dynamodb.md#partition-keys" >}}) for an option to change this behavior.
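As a sketch, such a table could be created with the AWS CLI; the table name and billing mode below are illustrative, not required by Dapr:
```sh
aws dynamodb create-table \
  --table-name mytable \
  --attribute-definitions AttributeName=key,AttributeType=S \
  --key-schema AttributeName=key,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```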
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| table | Y | name of the DynamoDB table to use | `"mytable"`
| table | Y | name of the DynamoDB table to use | `"Contracts"`
| accessKey | N | ID of the AWS account with appropriate permissions to DynamoDB. Can be `secretKeyRef` to use a secret reference | `"AKIAIOSFODNN7EXAMPLE"`
| secretKey | N | Secret for the AWS user. Can be `secretKeyRef` to use a secret reference | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"`
| region | N | The AWS region to connect to. See this page for valid regions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html. Ensure that DynamoDB is available in that region. | `"us-east-1"`
| endpoint | N | AWS endpoint for the component to use. Only used for local development. The `endpoint` is unnecessary when running against production AWS. | `"http://localhost:4566"`
| sessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"`
| ttlAttributeName | N | The table attribute name which should be used for TTL. | `"expiresAt"`
| partitionKey | N | The table primary key or partition key attribute name. This field is used to replace the default primary key attribute name `"key"`. See the section [Partition Keys]({{< ref "setup-dynamodb.md#partition-keys" >}}). | `"ContractID"`
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
@ -70,6 +73,87 @@ In order to use the DynamoDB TTL feature, you must enable TTL on your table and define the attribute name.
The attribute name must be defined in the `ttlAttributeName` field.
See official [AWS docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html).
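For example, TTL could be enabled with the AWS CLI; the table and attribute names below simply reuse this page's examples:
```sh
aws dynamodb update-time-to-live \
  --table-name Contracts \
  --time-to-live-specification "Enabled=true, AttributeName=expiresAt"
```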
## Partition Keys
By default, the DynamoDB state store component uses the table attribute name `key` as the primary/partition key in the DynamoDB table.
This can be overridden by specifying a metadata field in the component configuration with a key of `partitionKey` and a value of the desired attribute name.
To learn more about DynamoDB primary/partition keys, read the [AWS DynamoDB Developer Guide](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey).
The following `statestore.yaml` file shows how to configure the DynamoDB state store component to use the partition key attribute name of `ContractID`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.aws.dynamodb
version: v1
metadata:
- name: table
value: "Contracts"
- name: partitionKey
value: "ContractID"
```
The above component specification assumes the following DynamoDB Table Layout:
```console
{
    "Table": {
        "AttributeDefinitions": [
            {
                "AttributeName": "ContractID",
                "AttributeType": "S"
            }
        ],
        "TableName": "Contracts",
        "KeySchema": [
            {
                "AttributeName": "ContractID",
                "KeyType": "HASH"
            }
        ]
    }
}
```
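Mirroring the earlier create-table sketch for the default `key` schema, a table with this layout could be created via:
```sh
aws dynamodb create-table \
  --table-name Contracts \
  --attribute-definitions AttributeName=ContractID,AttributeType=S \
  --key-schema AttributeName=ContractID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```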
The following operation passes `"A12345"` as the value for `key`, and based on the component specification provided above, the Dapr runtime will replace the `key` attribute name
with `ContractID` as the Partition/Primary Key sent to DynamoDB:
```shell
$ dapr run --app-id contractsprocessing --app-port ...
$ curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
-d '[
{
"key": "A12345",
"value": "Dapr Contract"
}
]'
```
The following AWS CLI command displays the stored record from the DynamoDB `Contracts` table:
```shell
$ aws dynamodb get-item \
--table-name Contracts \
--key '{"ContractID":{"S":"contractsprocessing||A12345"}}'
{
"Item": {
"value": {
"S": "Dapr Contract"
},
"etag": {
"S": "....."
},
"ContractID": {
"S": "contractsprocessing||A12345"
}
}
}
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
