Merge branch 'v1.10' of https://github.com/dapr/docs into issue_2798b

This commit is contained in:
Hannah Hunter 2023-01-11 15:26:13 -06:00
commit 4a7145bc08
35 changed files with 847 additions and 210 deletions

View File

@ -1,11 +1,11 @@
---
type: docs
title: "Building blocks"
linkTitle: "Building blocks"
title: "API building blocks"
linkTitle: "API building blocks"
weight: 10
description: "Dapr capabilities that solve common development challenges for distributed applications"
---
Get a high-level [overview of Dapr building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
Get a high-level [overview of Dapr API building blocks]({{< ref building-blocks-concept >}}) in the **Concepts** section.
<img src="/images/buildingblocks-overview.png" alt="Diagram showing the different Dapr API building blocks" width=1000>

View File

@ -2,6 +2,6 @@
type: docs
title: "Debugging Dapr applications and the Dapr control plane"
linkTitle: "Debugging"
weight: 60
weight: 50
description: "Guides on how to debug Dapr applications and the Dapr control plane"
---

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Components"
linkTitle: "Components"
weight: 30
description: "Learn more about developing Dapr's pluggable and middleware components"
---

View File

@ -0,0 +1,44 @@
---
type: docs
title: "How to: Author middleware components"
linkTitle: "Middleware components"
weight: 200
description: "Learn how to develop middleware components"
aliases:
- /developing-applications/middleware/middleware-overview/
- /concepts/middleware-concept/
---
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. In this guide, you'll learn how to create a middleware component. To learn how to configure an existing middleware component, see [Configure middleware components]({{< ref middleware.md >}}).
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns **fasthttp.RequestHandler** and **error**:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
var err error
return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
// inbound logic
h(ctx) // call the downstream handler
// outbound logic
}
}, err
}
```
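As a concrete illustration of inbound logic in this shape, here is a minimal sketch of an uppercase transform, similar in spirit to the `middleware.http.uppercase` component referenced later in these docs. The `Metadata` stand-in and package layout are illustrative, not the actual contrib implementation:

```go
package uppercase

import (
	"strings"

	"github.com/valyala/fasthttp"
)

// Metadata is a stand-in for the middleware metadata type shown in the
// interface above.
type Metadata map[string]string

type uppercaseMiddleware struct{}

// GetHandler returns a wrapper that uppercases the request body before
// invoking the downstream handler.
func (m *uppercaseMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
	return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
		return func(ctx *fasthttp.RequestCtx) {
			// Inbound logic: transform the request body.
			ctx.Request.SetBody([]byte(strings.ToUpper(string(ctx.PostBody()))))
			// Call the downstream handler.
			h(ctx)
		}
	}, nil
}
```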
## Related links
- [Component schema]({{< ref component-schema.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})
- [API middleware sample](https://github.com/dapr/samples/tree/master/middleware-oauth-google)

View File

@ -1,9 +1,9 @@
---
type: docs
title: "Pluggable Components"
linkTitle: "Pluggable Components"
title: "Pluggable components"
linkTitle: "Pluggable components"
description: "Guidance on how to work with pluggable components"
weight: 4000
weight: 100
aliases:
- "/operations/components/pluggable-components/pluggable-components-overview/"
---

View File

@ -0,0 +1,112 @@
---
type: docs
title: "How to: Implement pluggable components"
linkTitle: "Pluggable components"
weight: 1100
description: "Learn how to author and implement pluggable components"
---
In this guide, you'll learn why and how to implement a [pluggable component]({{< ref pluggable-components-overview >}}). To learn how to configure and register a pluggable component, refer to [How to: Register a pluggable component]({{< ref pluggable-components-registration.md >}}).
## Implement a pluggable component
To implement a pluggable component, you implement a gRPC service in the component. Implementing the gRPC service requires three steps:
### Find the proto definition file
Proto definitions are provided for each supported service interface (state store, pub/sub, bindings).
Currently, the following component APIs are supported:
- State stores
- Pub/sub
- Bindings
| Component | Type | gRPC definition | Built-in Reference Implementation | Docs |
| :---------: | :--------: | :--------------: | :----------------------------------------------------------------------------: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| State Store | `state` | [state.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/state/redis) | [concept]({{< ref "state-management-overview" >}}), [howto]({{< ref "howto-get-save-state" >}}), [api spec]({{< ref "state_api" >}}) |
| Pub/sub | `pubsub` | [pubsub.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/pubsub/redis) | [concept]({{< ref "pubsub-overview" >}}), [howto]({{< ref "howto-publish-subscribe" >}}), [api spec]({{< ref "pubsub_api" >}}) |
| Bindings | `bindings` | [bindings.proto] | [Kafka](https://github.com/dapr/components-contrib/tree/master/bindings/kafka) | [concept]({{< ref "bindings-overview" >}}), [input howto]({{< ref "howto-triggers" >}}), [output howto]({{< ref "howto-bindings" >}}), [api spec]({{< ref "bindings_api" >}}) |
Below is a snippet of the gRPC service definition for pluggable component state stores ([state.proto]):
```protobuf
// StateStore service provides a gRPC interface for state store components.
service StateStore {
// Initializes the state store component with the given metadata.
rpc Init(InitRequest) returns (InitResponse) {}
// Returns a list of implemented state store features.
rpc Features(FeaturesRequest) returns (FeaturesResponse) {}
// Ping the state store. Used for liveness purposes.
rpc Ping(PingRequest) returns (PingResponse) {}
// Deletes the specified key from the state store.
rpc Delete(DeleteRequest) returns (DeleteResponse) {}
// Get data from the given key.
rpc Get(GetRequest) returns (GetResponse) {}
// Sets the value of the specified key.
rpc Set(SetRequest) returns (SetResponse) {}
// Deletes many keys at once.
rpc BulkDelete(BulkDeleteRequest) returns (BulkDeleteResponse) {}
// Retrieves many keys at once.
rpc BulkGet(BulkGetRequest) returns (BulkGetResponse) {}
// Set the value of many keys at once.
rpc BulkSet(BulkSetRequest) returns (BulkSetResponse) {}
}
```
The interface for the `StateStore` service exposes a total of 9 methods:
- 2 methods for initialization and component capability advertisement (`Init` and `Features`)
- 1 method for health or liveness checks (`Ping`)
- 3 methods for CRUD (`Get`, `Set`, `Delete`)
- 3 methods for bulk CRUD operations (`BulkGet`, `BulkSet`, `BulkDelete`)
### Create service scaffolding
Use [protocol buffers and gRPC tools](https://grpc.io) to create the necessary scaffolding for the service. Learn more about these tools via [the gRPC concepts documentation](https://grpc.io/docs/what-is-grpc/core-concepts/).
These tools generate code targeting [any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions). This code serves as the base for your server and it provides:
- Functionality to handle client calls
- Infrastructure to:
- Decode incoming requests
- Execute service methods
- Encode service responses
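For example, targeting Go, you can drive the generation with a `go:generate` directive; a minimal sketch, assuming `state.proto` sits in the working directory and the `protoc-gen-go` and `protoc-gen-go-grpc` plugins are installed (include paths may need adjusting for the proto's imports):

```go
// Package main exists only to anchor the code-generation directive;
// running `go generate ./...` invokes protoc to produce the gRPC
// scaffolding for the state store service.
//
//go:generate protoc --go_out=. --go-grpc_out=. state.proto
package main
```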
The generated code is incomplete. It is missing:
- A concrete implementation for the methods your target service defines (the core of your pluggable component).
- Code handling Unix Domain Socket integration, which is Dapr-specific.
- Code handling integration with your downstream services.
Learn more about filling these gaps in the next step.
### Define the service
Provide a concrete implementation of the desired service. Each component has a gRPC service definition for its core functionality, which is the same as the core component interface. For example:
- **State stores**
A pluggable state store **must** provide an implementation of the `StateStore` service interface.
In addition to this core functionality, some components might also expose functionality under other **optional** services. For example, you can add extra functionality by defining the implementation for a `QueriableStateStore` service and a `TransactionalStateStore` service.
- **Pub/sub**
Pluggable pub/sub components only have a single core service interface defined ([pubsub.proto]). They have no optional service interfaces.
- **Bindings**
Pluggable input and output bindings have a single core service definition on [bindings.proto]. They have no optional service interfaces.
After generating the above state store example's service scaffolding code using gRPC and protocol buffers tools, you can define concrete implementations for the 9 methods defined under `service StateStore`, along with code to initialize and communicate with your dependencies.
This concrete implementation and auxiliary code are the **core** of your pluggable component. They define how your component behaves when handling gRPC requests from Dapr.
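As an illustrative sketch in Go: assuming the generated code lives in a local `proto` package and exposes the usual protoc-gen-go output (a `StateStoreServer` interface, an `Unimplemented` embedding, and message fields like `Key` and `Value` — names here are assumptions, not authoritative), an in-memory implementation of a few of the methods could look like this:

```go
package main

import (
	"context"
	"sync"

	proto "example.com/my-component/proto" // generated from state.proto; module path is a placeholder
)

// inMemoryStore is a toy backing store standing in for a real database.
type inMemoryStore struct {
	proto.UnimplementedStateStoreServer
	mu   sync.RWMutex
	data map[string][]byte
}

// Init receives the component metadata; a real component would open
// connections to its downstream service here.
func (s *inMemoryStore) Init(ctx context.Context, req *proto.InitRequest) (*proto.InitResponse, error) {
	s.data = make(map[string][]byte)
	return &proto.InitResponse{}, nil
}

func (s *inMemoryStore) Get(ctx context.Context, req *proto.GetRequest) (*proto.GetResponse, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return &proto.GetResponse{Data: s.data[req.Key]}, nil
}

func (s *inMemoryStore) Set(ctx context.Context, req *proto.SetRequest) (*proto.SetResponse, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[req.Key] = req.Value
	return &proto.SetResponse{}, nil
}
```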
## Next steps
- Get started with developing a .NET pluggable component using this [sample code](https://github.com/dapr/samples/tree/master/pluggable-components-dotnet-template)
- [Review the pluggable components overview]({{< ref pluggable-components-overview.md >}})
- [Learn how to register your pluggable component]({{< ref pluggable-components-registration >}})

View File

@ -0,0 +1,69 @@
---
type: docs
title: "Pluggable components overview"
linkTitle: "Overview"
weight: 1000
description: "Overview of pluggable component anatomy and supported component types"
---
Pluggable components are components that are not included as part of the runtime, as opposed to the built-in components included with `dapr init`. You can configure Dapr to use pluggable components that leverage the building block APIs, but are registered differently from the [built-in Dapr components](https://github.com/dapr/components-contrib).
<img src="/images/concepts-building-blocks.png" width=400>
## Pluggable components vs. built-in components
Dapr provides two approaches for registering and creating components:
- The built-in components included in the runtime and found in the [components-contrib repository](https://github.com/dapr/components-contrib).
- Pluggable components which are deployed and registered independently.
While both registration options leverage Dapr's building block APIs, each has a different implementation process.
| Component details | [Built-in Component](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md) | Pluggable Components |
| ---------------------------- | :--------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language** | Can only be written in Go | [Can be written in any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions) |
| **Where it runs** | As part of the Dapr runtime executable | As a distinct process or container in a pod. Runs separate from Dapr itself. |
| **Registers with Dapr** | Included in the Dapr codebase | Registers with Dapr via Unix Domain Sockets (using gRPC) |
| **Distribution** | Distributed with Dapr releases. New features added to a component align with Dapr releases | Distributed independently from Dapr itself. New features can be added when needed and follow their own release cycle. |
| **How component is activated** | Dapr starts the component (automatic) | User starts component (manual) |
## Why create a pluggable component?
Pluggable components prove useful in scenarios where:
- You require a private component.
- You want to keep your component separate from the Dapr release process.
- You are not as familiar with Go, or implementing your component in Go is not ideal.
## Features
### Implement a pluggable component
To implement a pluggable component, you implement a gRPC service in the component. Implementing the gRPC service requires three steps:
1. Find the proto definition file
1. Create service scaffolding
1. Define the service
Learn more about [how to develop and implement a pluggable component]({{< ref develop-pluggable.md >}}).
### Leverage multiple building blocks for a component
In addition to implementing multiple gRPC services from the same component (for example, `StateStore`, `QueriableStateStore`, and `TransactionalStateStore`), a pluggable component can also expose implementations for other component interfaces. This means that a single pluggable component can simultaneously function as a state store, pub/sub, and input or output binding. In other words, you can implement multiple component interfaces in a pluggable component and expose them as gRPC services.
While exposing multiple component interfaces on the same pluggable component lowers the operational burden of deploying multiple components, it makes implementing and debugging your component harder. If in doubt, stick to a "separation of concerns" by merging multiple component interfaces into the same pluggable component only when necessary.
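In gRPC terms, this amounts to registering several generated services on one server; a sketch, where the registrar names mirror typical protoc output and are illustrative:

```go
package main

import (
	"google.golang.org/grpc"

	proto "example.com/my-component/proto" // generated package; module path is a placeholder
)

// newServer registers every component interface this process implements
// on a single gRPC server, so one pluggable component can expose
// several building blocks at once.
func newServer(store proto.StateStoreServer, ts proto.TransactionalStateStoreServer) *grpc.Server {
	srv := grpc.NewServer()
	proto.RegisterStateStoreServer(srv, store)
	proto.RegisterTransactionalStateStoreServer(srv, ts)
	return srv
}
```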
## Operationalize a pluggable component
Built-in components and pluggable components share one thing in common: both need a [component specification]({{< ref "components-concept.md#component-specification" >}}). Built-in components do not require any extra steps to be used: Dapr is ready to use them automatically.
In contrast, pluggable components require additional steps before they can communicate with Dapr. You need to first run the component and facilitate Dapr-component communication to kick off the registration process.
## Next steps
- [Implement a pluggable component]({{< ref develop-pluggable.md >}})
- [Pluggable component registration]({{< ref "pluggable-components-registration" >}})
[state.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/state.proto
[pubsub.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/pubsub.proto
[bindings.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/bindings.proto

View File

@ -2,6 +2,6 @@
type: docs
title: "Integrations"
linkTitle: "Integrations"
weight: 10
weight: 60
description: "Dapr integrations with other technologies"
---

View File

@ -0,0 +1,7 @@
---
type: docs
title: "Local development"
linkTitle: "Local development"
weight: 40
description: "Capabilities for developing Dapr applications locally"
---

View File

@ -2,6 +2,6 @@
type: docs
title: "IDE support"
linkTitle: "IDE support"
weight: 30
weight: 200
description: "Support for common Integrated Development Environments (IDEs)"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "How-To: Scope components to one or more applications"
linkTitle: "Scope access to components"
weight: 300
weight: 400
description: "Limit component access to particular Dapr instances"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "How-To: Reference secrets in components"
linkTitle: "Reference secrets in components"
weight: 400
weight: 500
description: "How to securly reference secrets from a component definition"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "Updating components"
linkTitle: "Updating components"
weight: 250
weight: 300
description: "Updating deployed components used by applications"
---

View File

@ -1,12 +1,9 @@
---
type: docs
title: "Middleware"
linkTitle: "Middleware"
weight: 50
title: "Configure middleware components"
linkTitle: "Configure middleware"
weight: 2000
description: "Customize processing pipelines by adding middleware components"
aliases:
- /developing-applications/middleware/middleware-overview/
- /concepts/middleware-concept/
---
Dapr allows custom processing pipelines to be defined by chaining a series of middleware components. There are two places that you can use a middleware pipeline:
@ -14,7 +11,7 @@ Dapr allows custom processing pipelines to be defined by chaining a series of mi
1) Building block APIs - HTTP middleware components are executed when invoking any Dapr HTTP APIs.
2) Service-to-Service invocation - HTTP middleware components are applied to service-to-service invocation calls.
## Configuring API middleware pipelines
## Configure API middleware pipelines
When launched, a Dapr sidecar constructs a middleware processing pipeline for incoming HTTP calls. By default, the pipeline consists of [tracing middleware]({{< ref tracing-overview.md >}}) and CORS middleware. Additional middleware, configured by a Dapr [configuration]({{< ref configuration-concept.md >}}), can be added to the pipeline in the order they are defined. The pipeline applies to all Dapr API endpoints, including state, pub/sub, service invocation, bindings, secrets, configuration, distributed lock, and others.
@ -45,7 +42,7 @@ As with other components, middleware components can be found in the [supported M
{{< button page="supported-middleware" text="See all middleware components">}}
## Configuring app middleware pipelines
## Configure app middleware pipelines
You can also use any middleware components when making service-to-service invocation calls. For example, for token validation in a zero-trust environment, a request transformation for a specific app endpoint, or to apply OAuth policies.
@ -68,35 +65,9 @@ spec:
type: middleware.http.uppercase
```
## Writing a custom middleware
Dapr uses [FastHTTP](https://github.com/valyala/fasthttp) to implement its HTTP server. Hence, your HTTP middleware needs to be written as a FastHTTP handler. Your middleware needs to implement a middleware interface, which defines a **GetHandler** method that returns **fasthttp.RequestHandler** and **error**:
```go
type Middleware interface {
GetHandler(metadata Metadata) (func(h fasthttp.RequestHandler) fasthttp.RequestHandler, error)
}
```
Your handler implementation can include any inbound logic, outbound logic, or both:
```go
func (m *customMiddleware) GetHandler(metadata Metadata) (func(fasthttp.RequestHandler) fasthttp.RequestHandler, error) {
var err error
return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
return func(ctx *fasthttp.RequestCtx) {
// inbound logic
h(ctx) // call the downstream handler
// outbound logic
}
}, err
}
```
## Related links
- [Learn how to author middleware components]({{< ref develop-middleware.md >}})
- [Component schema]({{< ref component-schema.md >}})
- [Configuration overview]({{< ref configuration-overview.md >}})
- [API middleware sample](https://github.com/dapr/samples/tree/master/middleware-oauth-google)

View File

@ -1,8 +1,8 @@
---
type: docs
title: "How-To: Register a pluggable component"
linkTitle: "How To: Register a pluggable component"
weight: 4500
linkTitle: "Register a pluggable component"
weight: 1000
description: "Learn how to register a pluggable component"
---
@ -10,7 +10,7 @@ description: "Learn how to register a pluggable component"
## Component registration process
Pluggable, [gRPC-based](https://grpc.io/) components are typically run as containers or processes that need to communicate with the Dapr runtime via [Unix Domain Sockets][uds]. They are automatically discovered and registered in the runtime with the following steps:
[Pluggable, gRPC-based components]({{< ref pluggable-components-overview >}}) are typically run as containers or processes that need to communicate with the Dapr runtime via [Unix Domain Sockets][uds]. They are automatically discovered and registered in the runtime with the following steps:
1. The component listens to a [Unix Domain Socket][uds] placed on the shared volume.
2. The Dapr runtime lists all [Unix Domain Sockets][uds] in the shared volume.
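As a minimal sketch of step 1 in Go — the socket folder below is the conventional default and the commented registrar name is illustrative:

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// Dapr discovers pluggable components by the socket file placed on
	// the shared volume; the file name becomes the component name.
	lis, err := net.Listen("unix", "/tmp/dapr-components-sockets/my-component.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	// proto.RegisterStateStoreServer(srv, &myStateStore{}) // generated registrar (illustrative)
	log.Fatal(srv.Serve(lis))
}
```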

View File

@ -1,134 +0,0 @@
---
type: docs
title: "Pluggable components overview"
linkTitle: "Overview"
weight: 4400
description: "Overview of pluggable component anatomy and supported component types"
---
Pluggable components are components that are not included as part of the runtime, as opposed to built-in ones that are included. You can configure Dapr to use pluggable components that leverage the building block APIs, but these are registered differently from the [built-in Dapr components](https://github.com/dapr/components-contrib). For example, you can configure a pluggable component for scenarios where you require a private component.
<img src="/images/concepts-building-blocks.png" width=400>
## Pluggable components vs. built-in components
Dapr provides two approaches for registering and creating components:
- The built-in components included in the runtime and found in the [components-contrib repository](https://github.com/dapr/components-contrib).
- Pluggable components which are deployed and registered independently.
While both registration options leverage Dapr's building block APIs, each has a different implementation process.
| Component details | [Built-in Component](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md) | Pluggable Components |
| ---------------------------- | :--------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language** | Can only be written in Go | [Can be written in any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions) |
| **Where it runs** | As part of the Dapr runtime executable | As a distinct process or container in a pod. Runs separate from Dapr itself. |
| **Registers with Dapr** | Included in the Dapr codebase | Registers with Dapr via Unix Domain Sockets (using gRPC) |
| **Distribution** | Distributed with Dapr releases. New features added to a component align with Dapr releases | Distributed independently from Dapr itself. New features can be added when needed and follow their own release cycle. |
| **How component is activated** | Dapr starts the component (automatic) | User starts component (manual) |
## When to create a pluggable component
- This is a private component.
- You want to keep your component separate from the Dapr release process.
- You are not as familiar with Go, or implementing your component in Go is not ideal.
## Implementing a pluggable component
To implement a pluggable component, you implement a gRPC service in the component. Implementing the gRPC service requires three steps:
1. **Find the proto definition file.** Proto definitions are provided for each supported service interface (state store, pub/sub, bindings).
Currently, the following component APIs are supported:
- State stores
- Pub/sub
- Bindings
| Component | Type | gRPC definition | Built-in Reference Implementation | Docs |
| :---------: | :--------: | :--------------: | :----------------------------------------------------------------------------: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| State Store | `state` | [state.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/state/redis) | [concept]({{<ref "state-management-overview">}}), [howto]({{<ref "howto-get-save-state">}}), [api spec]({{<ref "state_api">}}) |
| Pub/sub | `pubsub` | [pubsub.proto] | [Redis](https://github.com/dapr/components-contrib/tree/master/pubsub/redis) | [concept]({{<ref "pubsub-overview">}}), [howto]({{<ref "howto-publish-subscribe">}}), [api spec]({{<ref "pubsub_api">}}) |
| Bindings | `bindings` | [bindings.proto] | [Kafka](https://github.com/dapr/components-contrib/tree/master/bindings/kafka) | [concept]({{<ref "bindings-overview">}}), [input howto]({{<ref "howto-triggers">}}), [output howto]({{<ref "howto-bindings">}}), [api spec]({{<ref "bindings_api">}}) |
Here's a snippet of the gRPC service definition for pluggable component state stores ([state.proto]).
```protobuf
// StateStore service provides a gRPC interface for state store components.
service StateStore {
// Initializes the state store component with the given metadata.
rpc Init(InitRequest) returns (InitResponse) {}
// Returns a list of implemented state store features.
rpc Features(FeaturesRequest) returns (FeaturesResponse) {}
// Ping the state store. Used for liveness purposes.
rpc Ping(PingRequest) returns (PingResponse) {}
// Deletes the specified key from the state store.
rpc Delete(DeleteRequest) returns (DeleteResponse) {}
// Get data from the given key.
rpc Get(GetRequest) returns (GetResponse) {}
// Sets the value of the specified key.
rpc Set(SetRequest) returns (SetResponse) {}
// Deletes many keys at once.
rpc BulkDelete(BulkDeleteRequest) returns (BulkDeleteResponse) {}
// Retrieves many keys at once.
rpc BulkGet(BulkGetRequest) returns (BulkGetResponse) {}
// Set the value of many keys at once.
rpc BulkSet(BulkSetRequest) returns (BulkSetResponse) {}
}
```
The interface for the `StateStore` service exposes 9 methods:
- 2 methods for initialization and component capability advertisement (`Init` and `Features`)
- 1 method for health or liveness checks (`Ping`)
- 3 methods for CRUD (Get, Set, Delete)
- 3 methods for bulk CRUD operations (BulkGet, BulkSet, BulkDelete)
2. **Create service scaffolding.** Use [protocol buffers and gRPC tools](https://grpc.io) to create the necessary scaffolding for the service. You may want to get acquainted with [the gRPC concepts documentation](https://grpc.io/docs/what-is-grpc/core-concepts/).
The tools can generate code targeting [any gRPC-supported language](https://grpc.io/docs/what-is-grpc/introduction/#protocol-buffer-versions). This code serves as the base for your server and it provides functionality to handle client calls along with infrastructure to decode incoming requests, execute service methods, and encode service responses.
The generated code is not complete. It is missing a concrete implementation for the methods your target service defines, i.e., the core of your pluggable component. This is further explored in the next topic. Additionally, you also have to provide code handling Unix Domain Socket integration, which is Dapr-specific, and code handling integration with your downstream services.
3. **Define the service.** Provide a concrete implementation of the desired service.
As a first step, [protocol buffers](https://developers.google.com/protocol-buffers/docs/overview) and [gRPC](https://grpc.io/docs/what-is-grpc/introduction/) tools are used to create the server code for this service. After that, the next step is to define concrete implementations for these 9 methods.
Each component has a gRPC service definition for its core functionality which is the same as the core component interface. For example:
- **State stores**
A pluggable state store **must** provide an implementation of the `StateStore` service interface. In addition to this core functionality, some components might also expose functionality under other **optional** services. For example, you can add extra functionality by defining the implementation for a `QueriableStateStore` service and a `TransactionalStateStore` service.
- **Pub/sub**
Pluggable pub/sub components only have a single core service interface defined ([pubsub.proto]). They have no optional service interfaces.
- **Bindings**
Pluggable input and output bindings have a single core service definition on [bindings.proto]. They have no optional service interfaces.
Following the State Store example from step 1, after generating its service scaffolding code using gRPC and protocol buffers tools (step 2),
the next step is to define concrete implementations for the 9 methods defined under `service StateStore`, along with code to initialize and communicate with your dependencies.
This concrete implementation and auxiliary code are the core of your pluggable component: they define how your component behaves when handling gRPC requests from Dapr.
### Leveraging multiple building blocks for a component
In addition to implementing multiple gRPC services from the same component (for example, `StateStore`, `QueriableStateStore`, and `TransactionalStateStore`), a pluggable component can also expose implementations for other component interfaces. This means that a single pluggable component can function as a state store, pub/sub, and input or output binding, all at the same time. In other words, you can implement multiple component interfaces in a pluggable component and expose these as gRPC services.
While exposing multiple component interfaces on the same pluggable component lowers the operational burden of deploying multiple components, it makes implementing and debugging your component harder. If in doubt, stick to a "separation of concerns" by merging multiple component interfaces into the same pluggable component only when necessary.
## Operationalizing a pluggable component
Built-in components and pluggable components share one thing in common: both need a [component specification]({{<ref "components-concept.md#component-specification">}}). Built-in components do not require any extra steps to be used: Dapr is ready to use them automatically.
In contrast, pluggable components require additional steps before they can communicate with Dapr. You need to first run the component and facilitate Dapr-component communication to kick off the registration process.
## Next steps
- [Pluggable component registration]({{<ref "pluggable-components-registration">}})
[state.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/state.proto
[pubsub.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/pubsub.proto
[bindings.proto]: https://github.com/dapr/dapr/blob/master/dapr/proto/components/v1/bindings.proto

View File

@ -3,7 +3,7 @@ type: docs
title: "Bindings components"
linkTitle: "Bindings"
description: "Guidance on setting up Dapr bindings components"
weight: 4000
weight: 900
---
Dapr integrates with external resources to allow apps to both be triggered by external events and interact with the resources. Each binding component has a name and this name is used when interacting with the resource.

View File

@ -3,7 +3,7 @@ type: docs
title: "Pub/Sub brokers"
linkTitle: "Pub/sub brokers"
description: "Guidance on setting up different message brokers for Dapr Pub/Sub"
weight: 2000
weight: 700
aliases:
- "/operations/components/setup-pubsub/setup-pubsub-overview/"
---

View File

@ -2,7 +2,7 @@
type: docs
title: "HowTo: Configure Pub/Sub components with multiple namespaces"
linkTitle: "Multiple namespaces"
weight: 20000
weight: 10000
description: "Use Dapr Pub/Sub with multiple namespaces"
---

View File

@ -3,7 +3,7 @@ type: docs
title: "Secret store components"
linkTitle: "Secret stores"
description: "Guidance on setting up different secret store components"
weight: 3000
weight: 800
aliases:
- "/operations/components/setup-state-store/secret-stores-overview/"
---

View File

@ -3,7 +3,7 @@ type: docs
title: "State stores components"
linkTitle: "State stores"
description: "Guidance on setting up different state stores for Dapr state management"
weight: 1000
weight: 600
aliases:
- "/operations/components/setup-state-store/setup-state-store-overview/"
---

View File

@ -0,0 +1,250 @@
---
type: docs
title: "Cloudflare Queues bindings spec"
linkTitle: "Cloudflare Queues"
description: "Detailed documentation on the Cloudflare Queues component"
aliases:
- "/operations/components/setup-bindings/supported-bindings/cloudflare-queues/"
- "/operations/components/setup-bindings/supported-bindings/cfqueues/"
---
## Component format
This output binding for Dapr allows interacting with [Cloudflare Queues](https://developers.cloudflare.com/queues/) to **publish** new messages. It is currently not possible to consume messages from a Queue using Dapr.
To set up a Cloudflare Queues binding, create a component of type `bindings.cloudflare.queues`. See [this guide]({{< ref "howto-bindings.md#1-create-a-binding" >}}) on how to create and apply a binding configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: bindings.cloudflare.queues
version: v1
# Increase the initTimeout if Dapr is managing the Worker for you
initTimeout: "120s"
metadata:
# Name of the existing Cloudflare Queue (required)
- name: queueName
value: ""
# Name of the Worker (required)
- name: workerName
value: ""
# PEM-encoded private Ed25519 key (required)
- name: key
value: |
-----BEGIN PRIVATE KEY-----
MC4CAQ...
-----END PRIVATE KEY-----
# Cloudflare account ID (required to have Dapr manage the Worker)
- name: cfAccountID
value: ""
# API token for Cloudflare (required to have Dapr manage the Worker)
- name: cfAPIToken
value: ""
# URL of the Worker (required if the Worker has been pre-created outside of Dapr)
- name: workerUrl
value: ""
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-------|--------|---------|
| `queueName` | Y | Output | Name of the existing Cloudflare Queue | `"mydaprqueue"`
| `key` | Y | Output | Ed25519 private key, PEM-encoded | *See example above*
| `cfAccountID` | Y/N | Output | Cloudflare account ID. Required to have Dapr manage the Worker. | `"456789abcdef8b5588f3d134f74acdef"`
| `cfAPIToken` | Y/N | Output | API token for Cloudflare. Required to have Dapr manage the Worker. | `"secret-key"`
| `workerUrl` | Y/N | Output | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | `"https://mydaprqueue.mydomain.workers.dev"`
> When you configure Dapr to create your Worker for you, you may need to set a longer value for the `initTimeout` property of the component, to allow enough time for the Worker script to be deployed. For example: `initTimeout: "120s"`
## Binding support
This component supports **output binding** with the following operations:
- `publish` (alias: `create`): Publish a message to the Queue.
The data passed to the binding is used as-is for the body of the message published to the Queue.
This operation does not accept any metadata property.
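For instance, a sketch of invoking the `publish` operation through the Dapr HTTP bindings API, assuming a local sidecar on port 3500 and a binding named `mydaprqueue`:

```go
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	// Publish a message through the Dapr bindings API; the binding name
	// "mydaprqueue" and the local sidecar port are assumptions.
	body := []byte(`{"operation": "publish", "data": "hello queue"}`)
	resp, err := http.Post(
		"http://localhost:3500/v1.0/bindings/mydaprqueue",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```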
## Create a Cloudflare Queue
To use this component, you must have a Cloudflare Queue created in your Cloudflare account.
You can create a new Queue in one of two ways:
<!-- IGNORE_LINKS -->
- Using the [Cloudflare dashboard](https://dash.cloudflare.com/)
- Using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/):
```sh
# Authenticate if needed with `npx wrangler login` first
npx wrangler queues create <NAME>
# For example: `npx wrangler queues create myqueue`
```
<!-- END_IGNORE -->
## Configuring the Worker
Because Cloudflare Queues can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Queue.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on [workerd](https://github.com/cloudflare/workerd).
{{% alert title="Important" color="warning" %}}
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Queues bindings, and do not use the same Worker script for different Cloudflare components in Dapr (for example, the Workers KV state store and the Queues binding).
{{% /alert %}}
{{< tabs "Let Dapr manage the Worker" "Manually provision the Worker script" >}}
{{% codetab %}}
<!-- Let Dapr manage the Worker -->
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
<!-- IGNORE_LINKS -->
- **`workerName`**: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the "workers.dev" domain configured for your Cloudflare account is `mydomain.workers.dev` and you set `workerName` to `mydaprqueue`, the Worker that Dapr deploys will be available at `https://mydaprqueue.mydomain.workers.dev`.
- **`cfAccountID`**: ID of your Cloudflare account. You can find this in your browser's URL bar after logging into the [Cloudflare dashboard](https://dash.cloudflare.com/), with the ID being the hex string right after `dash.cloudflare.com`. For example, if the URL is `https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef`, the value for `cfAccountID` is `456789abcdef8b5588f3d134f74acdef`.
- **`cfAPIToken`**: API token with permission to create and edit Workers. You can create it from the ["API Tokens" page](https://dash.cloudflare.com/profile/api-tokens) in the "My Profile" section in the Cloudflare dashboard:
1. Click on **"Create token"**.
1. Select the **"Edit Cloudflare Workers"** template.
1. Follow the on-screen instructions to generate a new API token.
<!-- END_IGNORE -->
When Dapr is configured to manage the Worker for you, the Dapr runtime checks at startup that the Worker exists and is up to date. If the Worker doesn't exist, or if it's using an outdated version, Dapr creates or upgrades it for you automatically.
{{% /codetab %}}
{{% codetab %}}
<!-- Manually provision the Worker script -->
If you'd rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
1. Create a new folder where you'll place the source code of the Worker, for example: `daprworker`.
2. If you haven't already, authenticate with Wrangler (the Cloudflare Workers CLI) using: `npx wrangler login`.
3. Inside the newly-created folder, create a new `wrangler.toml` file with the contents below, filling in the missing information as appropriate:
```toml
# Name of your Worker, for example "mydaprqueue"
name = ""
# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"
[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprqueue".
TOKEN_AUDIENCE = ""
# Set the next two values to the name of your Queue, for example "myqueue".
# Note that they will both be set to the same value.
[[queues.producers]]
queue = ""
binding = ""
```
> Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the **public** part of the key when deploying a Worker!
4. Copy the (pre-compiled and minified) code of the Worker into the `worker.js` file. You can do that with this command:
```sh
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-{{% dapr-latest-version short="true" %}}"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
```
5. Deploy the Worker using Wrangler:
```sh
npx wrangler publish
```
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
- **`workerName`**: Name of the Worker script. This is the value you set in the `name` property in the `wrangler.toml` file.
- **`workerUrl`**: URL of the deployed Worker. The `npx wrangler publish` command will show the full URL to you, for example `https://mydaprqueue.mydomain.workers.dev`.
{{% /codetab %}}
{{< /tabs >}}
## Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Cloudflare Queue). These include industry-standard measures such as:
- All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
- All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
- The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
{{< tabs "Generate with OpenSSL" "Generate with the step CLI" >}}
{{% codetab %}}
<!-- Generate with OpenSSL -->
> Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you're using an older version of OpenSSL.
> Note for Mac users: on macOS, the "openssl" binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn't support Ed25519 keys. If you're using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using `brew install openssl@3`, then replace `openssl` in the commands below with `$(brew --prefix)/opt/openssl@3/bin/openssl`.
You can generate a new Ed25519 key pair with OpenSSL using:
```sh
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
```
> On macOS, using openssl@3 from Homebrew:
>
> ```sh
> $(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
> $(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem
> ```
{{% /codetab %}}
{{% codetab %}}
<!-- Generate with the step CLI -->
If you don't have the step CLI already, install it following the [official instructions](https://smallstep.com/docs/step-cli/installation).
Next, you can generate a new Ed25519 key pair with the step CLI using:
```sh
step crypto keypair \
public.pem private.pem \
--kty OKP --curve Ed25519 \
--insecure --no-password
```
{{% /codetab %}}
{{< /tabs >}}
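Alternatively, since the keys are standard PKCS#8/PKIX PEM files, you can generate the pair programmatically; a sketch in Go using only the standard library:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	// PKCS#8 for the private key and PKIX for the public key match the
	// PEM blocks produced by the OpenSSL commands above.
	privDER, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		log.Fatal(err)
	}
	pubDER, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("private.pem", pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: privDER}), 0o600); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("public.pem", pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: pubDER}), 0o644); err != nil {
		log.Fatal(err)
	}
}
```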
Regardless of how you generated your key pair, with the instructions above you'll have two files:
- `private.pem` contains the private part of the key; use the contents of this file for the **`key`** property of the component's metadata.
- `public.pem` contains the public part of the key, which you'll need only if you're deploying a Worker manually (as per the instructions in the previous section).
{{% alert title="Warning" color="warning" %}}
Protect the private part of your key and treat it as a secret value!
{{% /alert %}}
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Bindings building block]({{< ref bindings >}})
- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
- [Bindings API reference]({{< ref bindings_api.md >}})
- Documentation for [Cloudflare Queues](https://developers.cloudflare.com/queues/)

View File

@ -46,8 +46,6 @@ spec:
value: 1
- name: startTime # In Unix format
value: 1630349391
- name: deliverAll
value: false
- name: flowControl
value: false
- name: ackWait
@ -68,6 +66,8 @@ spec:
value: 15s
- name: ackPolicy
value: explicit
- name: deliverPolicy
value: all
```
## Spec metadata fields
@ -86,7 +86,6 @@ spec:
| queueGroupName | N | Queue group name | `"my-queue"` |
| startSequence | N | [Start Sequence] | `1` |
| startTime | N | [Start Time] in Unix format | `1630349391` |
| deliverAll | N | Set deliver all as [Replay Policy] | `true` |
| flowControl | N | [Flow Control] | `true` |
| ackWait | N | [Ack Wait] | `10s` |
| maxDeliver | N | [Max Deliver] | `15` |
@ -97,6 +96,7 @@ spec:
| rateLimit | N | [Rate Limit] | `1024` |
| hearbeat | N | [Hearbeat] | `10s` |
| ackPolicy | N | [Ack Policy] | `explicit` |
| deliverPolicy | N | One of: all, last, new, sequence, time | `all` |
## Create a NATS server

View File

@ -0,0 +1,250 @@
---
type: docs
title: "Cloudflare Workers KV"
linkTitle: "Cloudflare Workers KV"
description: Detailed information on the Cloudflare Workers KV state store component
aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-cloudflare-workerskv/"
---
## Create a Dapr component
To set up a [Cloudflare Workers KV](https://developers.cloudflare.com/workers/learning/how-kv-works/) state store, create a component of type `state.cloudflare.workerskv`. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
spec:
type: state.cloudflare.workerskv
version: v1
# Increase the initTimeout if Dapr is managing the Worker for you
initTimeout: "120s"
metadata:
# ID of the Workers KV namespace (required)
- name: kvNamespaceID
value: ""
# Name of the Worker (required)
- name: workerName
value: ""
# PEM-encoded private Ed25519 key (required)
- name: key
value: |
-----BEGIN PRIVATE KEY-----
MC4CAQ...
-----END PRIVATE KEY-----
# Cloudflare account ID (required to have Dapr manage the Worker)
- name: cfAccountID
value: ""
# API token for Cloudflare (required to have Dapr manage the Worker)
- name: cfAPIToken
value: ""
# URL of the Worker (required if the Worker has been pre-created outside of Dapr)
- name: workerUrl
value: ""
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
## Spec metadata fields
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `kvNamespaceID` | Y | ID of the pre-created Workers KV namespace | `"123456789abcdef8b5588f3d134f74ac"`
| `workerName` | Y | Name of the Worker to connect to | `"mydaprkv"`
| `key` | Y | Ed25519 private key, PEM-encoded | *See example above*
| `cfAccountID` | Y/N | Cloudflare account ID. Required to have Dapr manage the Worker. | `"456789abcdef8b5588f3d134f74acdef"`
| `cfAPIToken` | Y/N | API token for Cloudflare. Required to have Dapr manage the Worker. | `"secret-key"`
| `workerUrl` | Y/N | URL of the Worker. Required if the Worker has been pre-provisioned outside of Dapr. | `"https://mydaprkv.mydomain.workers.dev"`
> When you configure Dapr to create your Worker for you, you may need to set a longer value for the `initTimeout` property of the component, to allow enough time for the Worker script to be deployed. For example: `initTimeout: "120s"`
## Create a Workers KV namespace
To use this component, you must have a Workers KV namespace created in your Cloudflare account.
You can create a new Workers KV namespace in one of two ways:
<!-- IGNORE_LINKS -->
- Using the [Cloudflare dashboard](https://dash.cloudflare.com/)
Make note of the "ID" of the Workers KV namespace that you can see in the dashboard. This is a hex string (for example `123456789abcdef8b5588f3d134f74ac`), not the name you used when you created it!
- Using the [Wrangler CLI](https://developers.cloudflare.com/workers/wrangler/):
```sh
# Authenticate if needed with `npx wrangler login` first
npx wrangler kv:namespace create <NAME>
```
The output contains the ID of the namespace, for example:
```text
{ binding = "<NAME>", id = "123456789abcdef8b5588f3d134f74ac" }
```
<!-- END_IGNORE -->
## Configuring the Worker
Because Cloudflare Workers KV namespaces can only be accessed by scripts running on Workers, Dapr needs to maintain a Worker to communicate with the Workers KV storage.
Dapr can manage the Worker for you automatically, or you can pre-provision a Worker yourself. Pre-provisioning the Worker is the only supported option when running on [workerd](https://github.com/cloudflare/workerd).
{{% alert title="Important" color="warning" %}}
Use a separate Worker for each Dapr component. Do not use the same Worker script for different Cloudflare Workers KV state store components, and do not use the same Worker script for different Cloudflare components in Dapr (e.g. the Workers KV state store and the Queues binding).
{{% /alert %}}
{{< tabs "Let Dapr manage the Worker" "Manually provision the Worker script" >}}
{{% codetab %}}
<!-- Let Dapr manage the Worker -->
If you want to let Dapr manage the Worker for you, you will need to provide these 3 metadata options:
<!-- IGNORE_LINKS -->
- **`workerName`**: Name of the Worker script. This will be the first part of the URL of your Worker. For example, if the "workers.dev" domain configured for your Cloudflare account is `mydomain.workers.dev` and you set `workerName` to `mydaprkv`, the Worker that Dapr deploys will be available at `https://mydaprkv.mydomain.workers.dev`.
- **`cfAccountID`**: ID of your Cloudflare account. You can find this in your browser's URL bar after logging into the [Cloudflare dashboard](https://dash.cloudflare.com/), with the ID being the hex string right after `dash.cloudflare.com`. For example, if the URL is `https://dash.cloudflare.com/456789abcdef8b5588f3d134f74acdef`, the value for `cfAccountID` is `456789abcdef8b5588f3d134f74acdef`.
- **`cfAPIToken`**: API token with permission to create and edit Workers and Workers KV namespaces. You can create it from the ["API Tokens" page](https://dash.cloudflare.com/profile/api-tokens) in the "My Profile" section in the Cloudflare dashboard:
1. Click on **"Create token"**.
1. Select the **"Edit Cloudflare Workers"** template.
1. Follow the on-screen instructions to generate a new API token.
<!-- END_IGNORE -->
When Dapr is configured to manage the Worker for you, the Dapr runtime checks at startup that the Worker exists and is up to date. If the Worker doesn't exist, or if it's using an outdated version, Dapr will create or upgrade it for you automatically.
{{% /codetab %}}
{{% codetab %}}
<!-- Manually provision the Worker script -->
If you'd rather not give Dapr permissions to deploy Worker scripts for you, you can manually provision a Worker for Dapr to use. Note that if you have multiple Dapr components that interact with Cloudflare services via a Worker, you will need to create a separate Worker for each one of them.
To manually provision a Worker script, you will need to have Node.js installed on your local machine.
1. Create a new folder where you'll place the source code of the Worker, for example: `daprworker`.
2. If you haven't already, authenticate with Wrangler (the Cloudflare Workers CLI) using: `npx wrangler login`.
3. Inside the newly-created folder, create a new `wrangler.toml` file with the contents below, filling in the missing information as appropriate:
```toml
# Name of your Worker, for example "mydaprkv"
name = ""
# Do not change these options
main = "worker.js"
compatibility_date = "2022-12-09"
usage_model = "bundled"
[vars]
# Set this to the **public** part of the Ed25519 key, PEM-encoded (with newlines replaced with `\n`).
# Example:
# PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\nMCowB...=\n-----END PUBLIC KEY-----"
PUBLIC_KEY = ""
# Set this to the name of your Worker (same as the value of the "name" property above), for example "mydaprkv".
TOKEN_AUDIENCE = ""
[[kv_namespaces]]
# Set the next two values to the ID (not name) of your KV namespace, for example "123456789abcdef8b5588f3d134f74ac".
# Note that they will both be set to the same value.
binding = ""
id = ""
```
> Note: see the next section for how to generate an Ed25519 key pair. Make sure you use the **public** part of the key when deploying a Worker!
4. Copy the (pre-compiled and minified) code of the Worker into the `worker.js` file. You can do that with this command:
```sh
# Set this to the version of Dapr that you're using
DAPR_VERSION="release-{{% dapr-latest-version short="true" %}}"
curl -LfO "https://raw.githubusercontent.com/dapr/components-contrib/${DAPR_VERSION}/internal/component/cloudflare/workers/code/worker.js"
```
5. Deploy the Worker using Wrangler:
```sh
npx wrangler publish
```
Once your Worker has been deployed, you will need to initialize the component with these two metadata options:
- **`workerName`**: Name of the Worker script. This is the value you set in the `name` property in the `wrangler.toml` file.
- **`workerUrl`**: URL of the deployed Worker. The `npx wrangler publish` command will show the full URL to you, for example `https://mydaprkv.mydomain.workers.dev`.
{{% /codetab %}}
{{< /tabs >}}
## Generate an Ed25519 key pair
All Cloudflare Workers listen on the public Internet, so Dapr needs to use additional authentication and data protection measures to ensure that no other person or application can communicate with your Worker (and thus, with your Worker KV namespace). These include industry-standard measures such as:
- All requests made by Dapr to the Worker are authenticated via a bearer token (technically, a JWT) which is signed with an Ed25519 key.
- All communications between Dapr and your Worker happen over an encrypted connection, using TLS (HTTPS).
- The bearer token is generated on each request and is valid for a brief period of time only (currently, one minute).
To let Dapr issue bearer tokens, and have your Worker validate them, you will need to generate a new Ed25519 key pair. Here are examples of generating the key pair using OpenSSL or the step CLI.
{{< tabs "Generate with OpenSSL" "Generate with the step CLI" >}}
{{% codetab %}}
<!-- Generate with OpenSSL -->
> Support for generating Ed25519 keys is available since OpenSSL 1.1.0, so the commands below will not work if you're using an older version of OpenSSL.
> Note for Mac users: on macOS, the "openssl" binary that is shipped by Apple is actually based on LibreSSL, which as of writing doesn't support Ed25519 keys. If you're using macOS, either use the step CLI, or install OpenSSL 3.0 from Homebrew using `brew install openssl@3`, then replace `openssl` in the commands below with `$(brew --prefix)/opt/openssl@3/bin/openssl`.
You can generate a new Ed25519 key pair with OpenSSL using:
```sh
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
```
> On macOS, using openssl@3 from Homebrew:
>
> ```sh
> $(brew --prefix)/opt/openssl@3/bin/openssl genpkey -algorithm ed25519 -out private.pem
> $(brew --prefix)/opt/openssl@3/bin/openssl pkey -in private.pem -pubout -out public.pem
> ```
{{% /codetab %}}
{{% codetab %}}
<!-- Generate with the step CLI -->
If you don't have the step CLI already, install it following the [official instructions](https://smallstep.com/docs/step-cli/installation).
Next, you can generate a new Ed25519 key pair with the step CLI using:
```sh
step crypto keypair \
public.pem private.pem \
--kty OKP --curve Ed25519 \
--insecure --no-password
```
{{% /codetab %}}
{{< /tabs >}}
Regardless of how you generated your key pair, with the instructions above you'll have two files:
- `private.pem` contains the private part of the key; use the contents of this file for the **`key`** property of the component's metadata.
- `public.pem` contains the public part of the key, which you'll need only if you're deploying a Worker manually (as per the instructions in the previous section).
{{% alert title="Warning" color="warning" %}}
Protect the private part of your key and treat it as a secret value!
{{% /alert %}}
## Additional notes
- Note that Cloudflare Workers KV doesn't guarantee strong data consistency. Although changes are usually visible immediately for requests made to the same Cloudflare datacenter, it can take some time (usually up to one minute) for changes to be replicated across all Cloudflare regions.
- This state store supports TTLs with Dapr, but the minimum value for the TTL is 1 minute; see the usage sketch below.
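For example, a sketch of saving a key with a TTL through the Dapr state API, assuming a local sidecar on port 3500 and a store named `mykvstore`:

```go
package main

import (
	"bytes"
	"log"
	"net/http"
)

func main() {
	// Save a key with a 2-minute TTL via the Dapr state API; the store
	// name "mykvstore" and the local sidecar port are assumptions.
	body := []byte(`[{"key": "mykey", "value": "myvalue", "metadata": {"ttlInSeconds": "120"}}]`)
	resp, err := http.Post(
		"http://localhost:3500/v1.0/state/mykvstore",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```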
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})
- Documentation for [Cloudflare Workers KV](https://developers.cloudflare.com/workers/learning/how-kv-works/)

View File

@ -7,11 +7,13 @@ aliases:
- "/operations/components/setup-state-store/supported-state-stores/setup-postgresql/"
---
This component allows using PostgreSQL (Postgres) as a state store for Dapr.
## Create a Dapr component
Create a file called `postgres.yaml`, paste the following and replace the `<CONNECTION STRING>` value with your connection string. The connection string is a standard PostgreSQL connection string. For example, `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html), specifically Keyword/Value Connection Strings, for information on how to define a connection string.
Create a file called `postgres.yaml`, paste the following and replace the `<CONNECTION STRING>` value with your connection string. The connection string is a standard PostgreSQL connection string. For example, `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string.
If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below.
If you want to also configure PostgreSQL to store actors, add the `actorStateStore` option as in the example below.
```yaml
apiVersion: dapr.io/v1alpha1
@ -22,8 +24,27 @@ spec:
type: state.postgresql
version: v1
metadata:
# Connection string
- name: connectionString
value: "<CONNECTION STRING>"
# Timeout for database operations, in seconds (optional)
#- name: timeoutInSeconds
# value: 20
# Name of the table where to store the state (optional)
#- name: tableName
# value: "state"
# Name of the table where to store metadata used by Dapr (optional)
#- name: metadataTableName
# value: "dapr_metadata"
# Cleanup interval in seconds, to remove expired rows (optional)
#- name: cleanupIntervalInSeconds
# value: 3600
# Max idle time for connections before they're closed (optional)
#- name: connectionMaxIdleTime
# value: 0
# Uncomment this if you wish to use PostgreSQL as a state store for actors (optional)
#- name: actorStateStore
# value: "true"
```
{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
@ -31,21 +52,17 @@ The above example uses secrets as plain strings. It is recommended to use a secr
## Spec metadata fields
| Field | Required | Details | Example |
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| connectionString | Y | The connection string for PostgreSQL | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`
| actorStateStore | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
| `connectionString` | Y | The connection string for the PostgreSQL database | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`
| `timeoutInSeconds` | N | Timeout, in seconds, for all database operations. Defaults to `20` | `30`
| `tableName` | N | Name of the table where the data is stored. Defaults to `state`. Can optionally have the schema name as prefix, such as `public.state` | `"state"`, `"public.state"`
| `metadataTableName` | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. Can optionally have the schema name as prefix, such as `public.dapr_metadata` | `"dapr_metadata"`, `"public.dapr_metadata"`
| `cleanupIntervalInSeconds` | N | Interval, in seconds, to clean up rows with an expired TTL. Default: `3600` (i.e. 1 hour). Setting this to values <=0 disables the periodic cleanup. | `1800`, `-1`
| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
| `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"`
If you wish to use PostgreSQL as an actor store, append the following to the yaml.
```yaml
- name: actorStateStore
value: "true"
```
## Create PostgreSQL
## Setup PostgreSQL
{{< tabs "Self-Hosted" >}}
@ -65,13 +82,35 @@ Either the default "postgres" database can be used, or create a new database for
To create a new database in PostgreSQL, run the following SQL command:
```SQL
create database dapr_test
CREATE DATABASE dapr_test;
```
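
For example, assuming PostgreSQL is running locally and you connect as the `postgres` user (adjust the host and credentials for your environment), you can run the command with the `psql` CLI:

```sh
psql -h localhost -U postgres -c "CREATE DATABASE dapr_test;"
```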
{{% /codetab %}}
{{% /tabs %}}
## Advanced
### TTLs and cleanups
This state store supports [Time-To-Live (TTL)](https://docs.dapr.io/developing-applications/building-blocks/state-management/state-store-ttl/) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired".
Because PostgreSQL doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them.
The interval at which the deletion of expired records happens is set with the `cleanupIntervalInSeconds` metadata property, which defaults to 3600 seconds (that is, 1 hour).
- Longer intervals require less frequent scans for expired rows, but mean expired records are kept in the database longer, potentially increasing storage use. If you plan to store many records in your state table with short TTLs, consider setting `cleanupIntervalInSeconds` to a smaller value, for example `300` (300 seconds, or 5 minutes).
- If you do not plan to use TTLs with Dapr and the PostgreSQL state store, consider setting `cleanupIntervalInSeconds` to a value <= 0 (e.g. `0` or `-1`) to disable the periodic cleanup and reduce the load on the database, as in the snippet below.
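For example, this is a sketch of how to disable the periodic cleanup in the component's `spec.metadata` section (the rest of the component file is unchanged):

```yaml
- name: cleanupIntervalInSeconds
  value: "-1"  # any value <= 0 disables the periodic cleanup
```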
The `expiredate` column in the state table, which stores the expiration date of each record, **does not have an index by default**, so each periodic cleanup must perform a full-table scan. If your table has a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table is named `state` (the default), you can use this query:
```sql
CREATE INDEX expiredate_idx
ON state
USING btree (expiredate ASC NULLS LAST);
```
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components
- [State management building block]({{< ref state-management >}})

View File

@ -0,0 +1,8 @@
- component: Cloudflare Queues
link: cloudflare-queues
state: Alpha
version: v1
since: "1.10"
features:
input: false
output: true

View File

@ -0,0 +1,12 @@
- component: Cloudflare Workers KV
link: setup-cloudflare-workerskv
state: Beta
version: v1
since: "1.10"
features:
crud: true
transactions: false
etag: false
ttl: true
query: false

View File

@ -139,7 +139,7 @@
crud: true
transactions: true
etag: true
ttl: false
ttl: true
query: true
- component: Redis
link: setup-redis

View File

@ -4,6 +4,7 @@
"Alibaba Cloud" $.Site.Data.components.bindings.alibaba
"Google Cloud Platform (GCP)" $.Site.Data.components.bindings.gcp
"Amazon Web Services (AWS)" $.Site.Data.components.bindings.aws
"Cloudflare" $.Site.Data.components.bindings.cloudflare
"Zeebe (Camunda Cloud)" $.Site.Data.components.bindings.zeebe
}}

View File

@ -3,6 +3,7 @@
"Microsoft Azure" $.Site.Data.components.state_stores.azure
"Google Cloud Platform (GCP)" $.Site.Data.components.state_stores.gcp
"Amazon Web Services (AWS)" $.Site.Data.components.state_stores.aws
"Cloudflare" $.Site.Data.components.state_stores.cloudflare
"Oracle Cloud" $.Site.Data.components.state_stores.oracle
}}