moving components setup howtos to operations section

This commit is contained in:
Ori Zohar 2020-10-09 14:26:50 -07:00
parent c9cd431e01
commit 0bf521388f
47 changed files with 3582 additions and 6 deletions


@ -5,3 +5,4 @@ weight: 10
description: "Dapr capabilities that solve common development challenges for distributed applications"
---
Get a high-level [overview of Dapr building blocks](/docs/concepts/building-blocks/) in the **Concepts** section.


@ -1,6 +1,6 @@
---
title: "Bindings overview"
linkTitle: "Bindings overview"
linkTitle: "Overview"
weight: 100
description: Overview of the Dapr bindings building block
---


@ -5,6 +5,4 @@ weight: 60
description: Dapr capabilities for tracing, logs and metrics
---
This section includes guides for developers in the context of observability.
For a general overview of the observability concept in Dapr see the **Concepts** section
For operations guidance on observability see the **Operations** section.
This section includes guides for developers in the context of observability. See other sections for a [general overview of the observability concept](/docs/concepts/observability/) in Dapr and for [operations guidance on monitoring](/docs/operations/monitoring/).


@ -1,4 +1,10 @@
# Limit the secrets that can be read from secret stores
---
title: "How To: Use secret scoping"
linkTitle: "How To: Use secret scoping"
weight: 3000
description: "Use scoping to limit the secrets that can be read from secret stores"
type: docs
---
Follow [these instructions](../setup-secret-store) to configure secret store for an application. Once configured, any secret defined within that store will be accessible from the Dapr application.
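Scoping then restricts which of those secrets the application can actually read. As a sketch (the store name and secret names below are illustrative, not prescribed), a `Configuration` resource with a deny-by-default scope might look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  secrets:
    scopes:
      # Deny access to all secrets in this store by default,
      # then explicitly allow only the listed secrets.
      - storeName: vault
        defaultAccess: deny
        allowedSecrets: ["secret1", "secret2"]
```

An application running with this configuration can read `secret1` and `secret2` from the `vault` store, and nothing else.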


@ -1,4 +1,10 @@
# Apply Open Policy Agent Polices
---
title: "How-To: Apply OPA policies"
linkTitle: "How-To: Apply OPA policies"
weight: 1000
description: "Use Dapr middleware to apply Open Policy Agent (OPA) policies on incoming requests"
type: docs
---
The Dapr Open Policy Agent (OPA) [HTTP middleware](https://github.com/dapr/docs/blob/master/concepts/middleware/README.md) allows applying [OPA Policies](https://www.openpolicyagent.org/) to incoming Dapr HTTP requests. This can be used to apply reusable authorization policies to app endpoints.
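As an illustrative sketch (the component name and policy here are assumptions, not a definitive configuration), the middleware is defined as a component holding a Rego policy and then registered in the application configuration's HTTP pipeline:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-policy
spec:
  type: middleware.http.opa
  metadata:
    # A permissive example Rego policy evaluated against each incoming request
    - name: rego
      value: |
        package http
        default allow = true
---
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
      - name: my-policy
        type: middleware.http.opa
```

Requests rejected by the policy never reach the application endpoint.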


@ -0,0 +1,7 @@
---
title: "Setup Pub/Sub components"
linkTitle: "Setup Pub/Sub"
description: "Guidance on setting up different message brokers for Dapr Pub/Sub"
weight: 3000
type: docs
---


@ -0,0 +1,60 @@
---
title: "Apache Kafka"
linkTitle: "Apache Kafka"
type: docs
---
## Locally
You can run Kafka locally using [this](https://github.com/wurstmeister/kafka-docker) Docker image.
To run without Docker, see the getting started guide [here](https://kafka.apache.org/quickstart).
## Kubernetes
To run Kafka on Kubernetes, you can use the [Helm Chart](https://github.com/helm/charts/tree/master/incubator/kafka#installing-the-chart).
## Create a Dapr component
The next step is to create a Dapr component for Kafka.
Create the following YAML file named `kafka.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.kafka
metadata:
# Kafka broker connection setting
- name: brokers
# Comma separated list of kafka brokers
value: "dapr-kafka.dapr-tests.svc.cluster.local:9092"
# Enable auth. Default is "false"
- name: authRequired
value: "false"
# Only available if authRequired is set to true
- name: saslUsername
value: <username>
# Only available if authRequired is set to true
- name: saslPassword
value: <password>
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the Kafka component to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f kafka.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,129 @@
---
title: "AWS SNS/SQS"
linkTitle: "AWS SNS/SQS"
type: docs
---
This article describes configuring Dapr to use AWS SNS/SQS for pub/sub on local and Kubernetes environments. For local development, the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS SNS/SQS.
Follow the instructions [here](https://github.com/localstack/localstack#installing) to install the localstack CLI.
## Locally
In order to use localstack with your pub/sub component, you need to provide the `awsEndpoint` configuration
in the component metadata. The `awsEndpoint` is unnecessary when running against production AWS.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.snssqs
metadata:
- name: awsEndpoint
value: http://localhost:4566
# Use us-east-1 for localstack
- name: awsRegion
value: us-east-1
```
## Kubernetes
To run localstack on Kubernetes, you can apply the configuration below. Localstack is then
reachable at the DNS name `http://localstack.default.svc.cluster.local:4566`
(assuming this was applied to the default namespace), and this address should be used as the `awsEndpoint`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: localstack
spec:
# using the selector, we will expose the running deployments
# this is how Kubernetes knows, that a given service belongs to a deployment
selector:
matchLabels:
app: localstack
replicas: 1
template:
metadata:
labels:
app: localstack
spec:
containers:
- name: localstack
image: localstack/localstack:latest
ports:
# Expose the edge endpoint
- containerPort: 4566
---
kind: Service
apiVersion: v1
metadata:
name: localstack
labels:
app: localstack
spec:
selector:
app: localstack
ports:
- protocol: TCP
port: 4566
targetPort: 4566
type: LoadBalancer
```
## Run in AWS
In order to run in AWS, create an IAM user with permissions to the SNS and SQS services.
Use the account ID and account secret and plug them into the `awsAccountID` and `awsAccountSecret`
fields in the component metadata using Kubernetes secrets.
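For example (a sketch — the secret name and keys below are illustrative), the credentials can be stored in a Kubernetes secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-snssqs-creds
type: Opaque
stringData:
  awsAccountID: <AWS account ID>
  awsAccountSecret: <AWS secret>
```

The component metadata can then use `secretKeyRef` entries pointing at `aws-snssqs-creds` instead of plain string values.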
## Create a Dapr component
The next step is to create a Dapr component for SNS/SQS.
Create the following YAML file named `snssqs.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.snssqs
metadata:
# ID of the AWS account with appropriate permissions to SNS and SQS
- name: awsAccountID
value: <AWS account ID>
# Secret for the AWS user
- name: awsSecret
value: <AWS secret>
# The AWS region you want to operate in.
# See this page for valid regions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
# Make sure that SNS and SQS are available in that region.
- name: awsRegion
value: us-east-1
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the SNS/SQS component to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f snssqs.yaml
```
### Running locally
Place the above component file `snssqs.yaml` in the local components directory (either the default directory or in a path you define when running the CLI command `dapr run`).
## Related Links
- [AWS SQS as subscriber to SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html)
- [AWS SNS API reference](https://docs.aws.amazon.com/sns/latest/api/Welcome.html)
- [AWS SQS API reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Welcome.html)


@ -0,0 +1,56 @@
---
title: "Azure Event Hubs"
linkTitle: "Azure Event Hubs"
type: docs
---
Follow the instructions [here](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create) on setting up Azure Event Hubs.
Since this implementation uses the Event Processor Host, you will also need an [Azure Storage Account](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal).
## Create a Dapr component
The next step is to create a Dapr component for Azure Event Hubs.
Create the following YAML file named `eventhubs.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.azure.eventhubs
metadata:
- name: connectionString
value: <REPLACE-WITH-CONNECTION-STRING> # Required. "Endpoint=sb://****"
- name: storageAccountName
value: <REPLACE-WITH-STORAGE-ACCOUNT-NAME> # Required.
- name: storageAccountKey
value: <REPLACE-WITH-STORAGE-ACCOUNT-KEY> # Required.
- name: storageContainerName
value: <REPLACE-WITH-CONTAINER-NAME> # Required.
```
See [here](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature) on how to get the Event Hubs connection string. Note this is not the Event Hubs namespace.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Create consumer groups for each subscriber
For every Dapr app that wants to subscribe to events, create an Event Hubs consumer group named after the Dapr app ID.
For example, a Dapr app running on Kubernetes with `dapr.io/app-id: "myapp"` will need an Event Hubs consumer group named `myapp`.
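The app ID comes from the Dapr annotations on the subscribing app's pod template, for example (the app name here is illustrative):

```yaml
# Pod template annotations on the subscribing app's deployment
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "myapp"   # the consumer group must also be named "myapp"
```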
## Apply the configuration
### In Kubernetes
To apply the Azure Event Hubs pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f eventhubs.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,68 @@
---
title: "Azure Service Bus"
linkTitle: "Azure Service Bus"
type: docs
---
Follow the instructions [here](https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quickstart-topics-subscriptions-portal) on setting up Azure Service Bus Topics.
## Create a Dapr component
The next step is to create a Dapr component for Azure Service Bus.
Create the following YAML file named `azuresb.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.azure.servicebus
metadata:
- name: connectionString
value: <REPLACE-WITH-CONNECTION-STRING> # Required.
- name: timeoutInSec
value: <REPLACE-WITH-TIMEOUT-IN-SEC> # Optional. Default: "60". Timeout for sending messages and management operations.
- name: handlerTimeoutInSec
value: <REPLACE-WITH-HANDLER-TIMEOUT-IN-SEC> # Optional. Default: "60". Timeout for invoking app handler.
- name: disableEntityManagement
value: <REPLACE-WITH-DISABLE-ENTITY-MANAGEMENT> # Optional. Default: false. When set to true, topics and subscriptions do not get created automatically.
- name: maxDeliveryCount
value: <REPLACE-WITH-MAX-DELIVERY-COUNT> # Optional. Defines the number of attempts the server will make to deliver a message.
- name: lockDurationInSec
value: <REPLACE-WITH-LOCK-DURATION-IN-SEC> # Optional. Defines the length in seconds that a message will be locked for before expiring.
- name: lockRenewalInSec
value: <REPLACE-WITH-LOCK-RENEWAL-IN-SEC> # Optional. Default: "20". Defines the frequency at which buffered message locks will be renewed.
- name: maxActiveMessages
value: <REPLACE-WITH-MAX-ACTIVE-MESSAGES> # Optional. Default: "10000". Defines the maximum number of messages to be buffered or processing at once.
- name: maxActiveMessagesRecoveryInSec
value: <REPLACE-WITH-MAX-ACTIVE-MESSAGES-RECOVERY-IN-SEC> # Optional. Default: "2". Defines the number of seconds to wait once the maximum active message limit is reached.
- name: maxConcurrentHandlers
value: <REPLACE-WITH-MAX-CONCURRENT-HANDLERS> # Optional. Defines the maximum number of concurrent message handlers
- name: prefetchCount
value: <REPLACE-WITH-PREFETCH-COUNT> # Optional. Defines the number of prefetched messages (use for high throughput / low latency scenarios)
- name: defaultMessageTimeToLiveInSec
value: <REPLACE-WITH-MESSAGE-TIME-TO-LIVE-IN-SEC> # Optional.
- name: autoDeleteOnIdleInSec
value: <REPLACE-WITH-AUTO-DELETE-ON-IDLE-IN-SEC> # Optional.
```
> __NOTE:__ The above settings are shared across all topics that use this component.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the Azure Service Bus pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f azuresb.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,77 @@
---
title: "GCP Pub/Sub"
linkTitle: "GCP Pub/Sub"
type: docs
---
Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up Google Cloud Pub/Sub system.
## Create a Dapr component
The next step is to create a Dapr component for Google Cloud Pub/Sub.
Create the following YAML file named `messagebus.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.gcp.pubsub
metadata:
- name: topic
value: <TOPIC_NAME>
- name: type
value: service_account
- name: project_id
value: <PROJECT_ID> # replace
- name: private_key_id
value: <PRIVATE_KEY_ID> #replace
- name: client_email
value: <CLIENT_EMAIL> #replace
- name: client_id
value: <CLIENT_ID> # replace
- name: auth_uri
value: https://accounts.google.com/o/oauth2/auth
- name: token_uri
value: https://oauth2.googleapis.com/token
- name: auth_provider_x509_cert_url
value: https://www.googleapis.com/oauth2/v1/certs
- name: client_x509_cert_url
value: https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com
- name: private_key
value: <PRIVATE_KEY> # replace x509 cert here
- name: disableEntityManagement
value: <REPLACE-WITH-DISABLE-ENTITY-MANAGEMENT> # Optional. Default: false. When set to true, topics and subscriptions do not get created automatically.
```
- `topic` is the Pub/Sub topic name.
- `type` is the GCP credentials type.
- `project_id` is the GCP project id.
- `private_key_id` is the GCP private key id.
- `client_email` is the GCP client email.
- `client_id` is the GCP client id.
- `auth_uri` is the Google account OAuth endpoint.
- `token_uri` is the Google account token URI.
- `auth_provider_x509_cert_url` is the GCP credentials cert url.
- `client_x509_cert_url` is the GCP credentials project x509 cert url.
- `private_key` is the GCP credentials private key.
- `disableEntityManagement` is optional (default: false). When set to true, topics and subscriptions do not get created automatically.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the Google Cloud pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f messagebus.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,55 @@
---
title: "Hazelcast"
linkTitle: "Hazelcast"
type: docs
---
## Locally
You can run Hazelcast locally using Docker:
```bash
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" -p 5701:5701 hazelcast/hazelcast
```
You can then interact with the server using `127.0.0.1:5701`.
## Kubernetes
The easiest way to install Hazelcast on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/hazelcast).
## Create a Dapr component
The next step is to create a Dapr component for Hazelcast.
Create the following YAML file named `hazelcast.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.hazelcast
metadata:
- name: hazelcastServers
value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of servers. Example: "hazelcast:3000,hazelcast2:3000"
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the Hazelcast pub/sub component to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f hazelcast.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,140 @@
---
title: "MQTT"
linkTitle: "MQTT"
type: docs
---
## Locally
You can run an MQTT broker [locally using Docker](https://hub.docker.com/_/eclipse-mosquitto):
```bash
docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6.9
```
You can then interact with the server using the client port: `mqtt://localhost:1883`
## Kubernetes
You can run an MQTT broker in Kubernetes using the following YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
replicas: 1
selector:
matchLabels:
app-name: mqtt-broker
template:
metadata:
labels:
app-name: mqtt-broker
spec:
containers:
- name: mqtt
image: eclipse-mosquitto:1.6.9
imagePullPolicy: IfNotPresent
ports:
- name: default
containerPort: 1883
protocol: TCP
- name: websocket
containerPort: 9001
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
type: ClusterIP
selector:
app-name: mqtt-broker
ports:
- port: 1883
targetPort: default
name: default
protocol: TCP
- port: 9001
targetPort: websocket
name: websocket
protocol: TCP
```
You can then interact with the server using the client port: `tcp://mqtt-broker.default.svc.cluster.local:1883`
## Create a Dapr component
The next step is to create a Dapr component for MQTT.
Create the following YAML file named `mqtt.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.mqtt
metadata:
- name: url
value: "tcp://[username][:password]@host.domain[:port]"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
```
To configure communication using TLS, ensure the Mosquitto broker is configured to support certificates.
Prerequisites include a certificate authority certificate, a CA-issued client certificate, and a client private key.
Make the following additional changes to the MQTT pub/sub component to support TLS:
```yaml
...
spec:
type: pubsub.mqtt
metadata:
- name: url
value: "tcps://host.domain[:port]"
- name: caCert
value: ''
- name: clientCert
value: ''
- name: clientKey
value: ''
```
Where:
* **url** (required) is the address of the MQTT broker.
- use **tcp://** scheme for non-TLS communication.
- use **tcps://** scheme for TLS communication.
* **qos** (optional) indicates the Quality of Service Level (QoS) of the message. (Default 0)
* **retain** (optional) defines whether the message is saved by the broker as the last known good value for a specified topic. (Default false)
* **cleanSession** (optional) will set the "clean session" flag in the connect message when the client connects to an MQTT broker. (Default true)
* **caCert** (required for using TLS) is the certificate authority certificate.
* **clientCert** (required for using TLS) is the client certificate.
* **clientKey** (required for using TLS) is the client key.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the MQTT pub/sub component to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f mqtt.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,95 @@
---
title: "NATS streaming"
linkTitle: "NATS streaming"
type: docs
---
## Locally
You can run a NATS server locally using Docker:
```bash
docker run -d --name nats-streaming -p 4222:4222 -p 8222:8222 nats-streaming
```
You can then interact with the server using the client port: `localhost:4222`.
## Kubernetes
Install NATS on Kubernetes using [kubectl](https://docs.nats.io/nats-on-kubernetes/minimal-setup):
```bash
# Single server NATS
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/single-server-nats.yml
kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-streaming-server/single-server-stan.yml
```
This will install single-node NATS and NATS Streaming servers into the `default` namespace.
To interact with NATS, find the service with: `kubectl get svc stan`.
For example, if installing using the example above, the NATS Streaming address would be:
`<YOUR-HOST>:4222`
## Create a Dapr component
The next step is to create a Dapr component for NATS-Streaming.
Create the following YAML file named `nats-stan.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.natsstreaming
metadata:
- name: natsURL
value: <REPLACE-WITH-NATS-SERVER-ADDRESS> # Required. example nats://localhost:4222
- name: natsStreamingClusterID
value: <REPLACE-WITH-NATS-CLUSTERID> # Required.
# Below are subscription configuration options.
- name: subscriptionType
value: <REPLACE-WITH-SUBSCRIPTION-TYPE> # Required. Allowed values: topic, queue.
# Of the following subscription options, only one can be used:
# - name: consumerID
# value: queuename
# - name: durableSubscriptionName
# value: ""
# - name: startAtSequence
# value: 1
# - name: startWithLastReceived
# value: false
- name: deliverAll
value: true
# - name: deliverNew
# value: false
# - name: startAtTimeDelta
# value: ""
# - name: startAtTime
# value: ""
# - name: startAtTimeFormat
# value: ""
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the NATS pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f nats-stan.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,63 @@
---
title: "Overview"
linkTitle: "Overview"
description: "General overview on set up of message brokers for Dapr Pub/Sub"
weight: 10000
type: docs
---
Dapr integrates with pub/sub message buses to provide apps with the ability to create event-driven, loosely coupled architectures where producers send events to consumers via topics.
Dapr supports the configuration of multiple, named, pub/sub components *per application*. Each pub/sub component has a name, and this name is used when publishing a message to a topic.
Pub/Sub message buses are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib).
A pub/sub in Dapr is described using a `Component` file:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: pubsub
namespace: default
spec:
type: pubsub.<NAME>
metadata:
- name: <KEY>
value: <VALUE>
- name: <KEY>
value: <VALUE>
...
```
The type of pub/sub is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section.
Even though you can put plain text secrets in there, it is recommended you use a [secret store](../../concepts/secrets) using a `secretKeyRef`.
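For instance, a Redis pub/sub component can pull its password from a Kubernetes secret instead of embedding it (a sketch — the secret name and key here are illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: redis-master:6379
  # Resolved at runtime from the configured secret store,
  # so the password never appears in the component file.
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key: redis-password
auth:
  secretStore: kubernetes
```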
## Running locally
When running locally with the Dapr CLI, a component file for a Redis pub/sub is created in a `components` directory, which for Linux/MacOS is `$HOME/.dapr/components` and for Windows is `%USERPROFILE%\.dapr\components`. See [Environment Setup](../getting-started/environment-setup.md#installing-dapr-in-self-hosted-mode).
You can make changes to this file the way you see fit, whether to change connection values or replace it with a different pub/sub.
## Running in Kubernetes
Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components.
To setup a pub/sub in Kubernetes, use `kubectl` to apply the component file:
```bash
kubectl apply -f pubsub.yaml
```
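Once a component is applied, applications publish through the Dapr sidecar's HTTP API (`POST /v1.0/publish/<pubsub name>/<topic>`). A minimal helper might look like this (a sketch assuming the default Dapr HTTP port 3500; the component and topic names are illustrative):

```python
import json
import urllib.request

DAPR_PORT = 3500  # default Dapr HTTP port; adjust if the sidecar uses another

def publish_url(pubsub_name: str, topic: str, port: int = DAPR_PORT) -> str:
    # The sidecar's publish endpoint: /v1.0/publish/<component name>/<topic>
    return f"http://localhost:{port}/v1.0/publish/{pubsub_name}/{topic}"

def publish(pubsub_name: str, topic: str, data: dict) -> None:
    # POST the event payload as JSON to the sidecar, which forwards it
    # to whichever message broker the named component is configured with.
    req = urllib.request.Request(
        publish_url(pubsub_name, topic),
        data=json.dumps(data).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

print(publish_url("pubsub", "orders"))
# http://localhost:3500/v1.0/publish/pubsub/orders
```

Because the broker is addressed only by the component name, swapping Redis for Kafka or Service Bus requires no change to the publishing code.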
## Related links
- [Setup Redis Streams](./setup-redis.md)
- [Setup NATS Streaming](./setup-nats-streaming.md)
- [Setup Azure Service bus](./setup-azure-servicebus.md)
- [Setup RabbitMQ](./setup-rabbitmq.md)
- [Setup GCP Pubsub](./setup-gcp.md)
- [Setup Hazelcast Pubsub](./setup-hazelcast.md)
- [Setup Azure Event Hubs](./setup-azure-eventhubs.md)
- [Setup SNS/SQS](./setup-snssqs.md)
- [Setup MQTT](./setup-mqtt.md)
- [Setup Apache Pulsar](./setup-pulsar.md)
- [Setup Kafka](./setup-kafka.md)


@ -0,0 +1,53 @@
---
title: "Pulsar"
linkTitle: "Pulsar"
type: docs
---
## Locally
```bash
docker run -it \
-p 6650:6650 \
-p 8080:8080 \
--mount source=pulsardata,target=/pulsar/data \
--mount source=pulsarconf,target=/pulsar/conf \
apachepulsar/pulsar:2.5.1 \
bin/pulsar standalone
```
## Kubernetes
Refer to the [Helm chart documentation](https://pulsar.apache.org/docs/en/kubernetes-helm/).
## Create a Dapr component
The next step is to create a Dapr component for Pulsar.
Create the following YAML file named `pulsar.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.pulsar
metadata:
- name: host
value: <REPLACE WITH PULSAR URL> #default is localhost:6650
- name: enableTLS
value: <TRUE/FALSE>
```
## Apply the configuration
### In Kubernetes
To apply the Pulsar pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f pulsar.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,79 @@
---
title: "RabbitMQ"
linkTitle: "RabbitMQ"
type: docs
---
## Locally
You can run a RabbitMQ server locally using Docker:
```bash
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3
```
You can then interact with the server using the client port: `localhost:5672`.
## Kubernetes
The easiest way to install RabbitMQ on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/rabbitmq):
```bash
helm install rabbitmq stable/rabbitmq
```
Look at the chart output and get the username and password.
This will install RabbitMQ into the `default` namespace.
To interact with RabbitMQ, find the service with: `kubectl get svc rabbitmq`.
For example, if installing using the example above, the RabbitMQ server client address would be:
`rabbitmq.default.svc.cluster.local:5672`
## Create a Dapr component
The next step is to create a Dapr component for RabbitMQ.
Create the following YAML file named `rabbitmq.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.rabbitmq
metadata:
- name: host
value: <REPLACE-WITH-HOST> # Required. Example: "amqp://rabbitmq.default.svc.cluster.local:5672", "amqp://localhost:5672"
- name: consumerID
value: <REPLACE-WITH-CONSUMER-ID> # Required. Any unique ID. Example: "myConsumerID"
- name: durable
value: <REPLACE-WITH-DURABLE> # Optional. Default: "false"
- name: deletedWhenUnused
value: <REPLACE-WITH-DELETE-WHEN-UNUSED> # Optional. Default: "false"
- name: autoAck
value: <REPLACE-WITH-AUTO-ACK> # Optional. Default: "false"
- name: deliveryMode
value: <REPLACE-WITH-DELIVERY-MODE> # Optional. Default: "0". Values between 0 - 2.
- name: requeueInFailure
value: <REPLACE-WITH-REQUEUE-IN-FAILURE> # Optional. Default: "false".
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the RabbitMQ pub/sub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f rabbitmq.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,85 @@
# Setup Redis Streams
## Creating a Redis instance
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service, provided the version of Redis is 5.0.0 or later. If you already have a Redis instance > 5.0.0 installed, move on to the [Configuration](#configuration) section.
### Running locally
The Dapr CLI will automatically create and setup a Redis Streams instance for you.
The Redis instance will be installed via Docker when you run `dapr init`, and the component file will be created in default directory. (`$HOME/.dapr/components` directory (Mac/Linux) or `%USERPROFILE%\.dapr\components` on Windows).
### Creating a Redis instance in your Kubernetes Cluster using Helm
We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install).
1. Install Redis into your cluster.
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
```
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your redis.yaml file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master:6379
```
4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your redis password in a text file called `password.txt`. Copy the password and delete the two files.
- **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the output password.
Add this password as the `redisPassword` value in your redis.yaml file. For example:
```yaml
- name: redisPassword
value: "lhDOkwTlp0"
```
### Other ways to create a Redis Database
- [AWS Redis](https://aws.amazon.com/redis/)
- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
## Configuration
To setup Redis, you need to create a component for `pubsub.redis`.
The following YAML file demonstrates how to define the component. If the Redis instance supports TLS with public certificates, it can be configured to enable or disable TLS in the YAML. **Note:** the YAML file below illustrates secret management in plain text. In a production-grade application, follow the [secret management](../../concepts/secrets/README.md) instructions to securely manage your secrets.
### Configuring Redis Streams for Pub/Sub
Create a file called `pubsub.yaml` and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
namespace: default
spec:
type: pubsub.redis
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
- name: enableTLS
value: <bool>
```
## Apply the configuration
### Kubernetes
```bash
kubectl apply -f pubsub.yaml
```
### Standalone
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.


@ -0,0 +1,13 @@
# How To: Setup Secret Stores
The following list shows the supported secret stores by Dapr. The links here will walk you through setting up and using the secret store.
* [AWS Secret Manager](./aws-secret-manager.md)
* [Azure Key Vault](./azure-keyvault.md)
* [Azure Key Vault with Managed Identity](./azure-keyvault-managed-identity.md)
* [GCP Secret Manager](./gcp-secret-manager.md)
* [Hashicorp Vault](./hashicorp-vault.md)
* [Kubernetes](./kubernetes.md)
* For Development
* [JSON file secret store](./file-secret-store.md)
* [Environment variable secret store](./envvar-secret-store.md)

# Secret store for AWS Secret Manager
This document shows how to enable the AWS Secret Manager secret store using the [Dapr Secrets Component](../../concepts/secrets/README.md) for self-hosted and Kubernetes modes.
## Create an AWS Secret Manager instance
Set up AWS Secret Manager using the AWS documentation: https://docs.aws.amazon.com/secretsmanager/latest/userguide/tutorials_basic.html.
## Create the component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: awssecretmanager
namespace: default
spec:
type: secretstores.aws.secretmanager
metadata:
- name: region
value: [aws_region] # Required.
- name: accessKey # Required.
value: "[aws_access_key]"
- name: secretKey # Required.
value: "[aws_secret_key]"
- name: sessionToken # Required.
value: "[aws_session_token]"
```
To deploy in Kubernetes, save the file above to `aws_secret_manager.yaml` and then run:
```bash
kubectl apply -f aws_secret_manager.yaml
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## AWS Secret Manager reference example
This example shows you how to set the Redis password from the AWS Secret Manager secret store.
Here, you created a secret named `redisPassword` in AWS Secret Manager. Note it's important to set it as both the `name` and `key` properties.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
key: redisPassword
auth:
secretStore: awssecretmanager
```

# Use Azure Key Vault secret store in Kubernetes mode using Managed Identities
This document shows how to enable Azure Key Vault secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Kubernetes mode using Managed Identities to authenticate to a Key Vault.
## Contents
- [Use Azure Key Vault secret store in Kubernetes mode using Managed Identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-using-managed-identities)
- [Contents](#contents)
- [Prerequisites](#prerequisites)
- [Setup Kubernetes to use Managed identities and Azure Key Vault](#setup-kubernetes-to-use-managed-identities-and-azure-key-vault)
- [Use Azure Key Vault secret store in Kubernetes mode with managed identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-with-managed-identities)
- [References](#references)
## Prerequisites
* [Azure Subscription](https://azure.microsoft.com/en-us/free/)
* [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
## Setup Kubernetes to use Managed identities and Azure Key Vault
1. Login to Azure and set the default subscription
```bash
# Log in Azure
az login
# Set your subscription to the default subscription
az account set -s [your subscription id]
```
2. Create an Azure Key Vault in a region
```bash
az keyvault create --location [region] --name [your keyvault] --resource-group [your resource group]
```
3. Create the managed identity (optional)
This step is required only if the AKS cluster is provisioned without the `--enable-managed-identity` flag. If the cluster is provisioned with a managed identity, it is suggested to use the autogenerated managed identity that is associated with the MC_* resource group.
```bash
$identity = az identity create -g [your resource group] -n [your managed identity name] -o json | ConvertFrom-Json
```
Below is the command to retrieve the managed identity in the autogenerated scenario:
```bash
az aks show -g <AKSResourceGroup> -n <AKSClusterName>
```
For more detail about the roles to assign to integrate AKS with Azure services, see [Role Assignment](https://github.com/Azure/aad-pod-identity/blob/master/docs/readmes/README.role-assignment.md).
4. Retrieve Managed Identity ID
The two main scenarios are:
- Service principal: in this case, the resource group is the one in which the AKS cluster is deployed
```bash
$clientId= az aks show -g <AKSResourceGroup> -n <AKSClusterName> --query servicePrincipalProfile.clientId -otsv
```
- Managed identity: in this case, the resource group is the one in which the AKS cluster is deployed
```bash
$clientId= az aks show -g <AKSResourceGroup> -n <AKSClusterName> --query identityProfile.kubeletidentity.clientId -otsv
```
5. Assign the Reader role to the managed identity
For AKS cluster, the cluster resource group refers to the resource group with a MC_ prefix, which contains all of the infrastructure resources associated with the cluster like VM/VMSS.
```bash
az role assignment create --role "Reader" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
```
6. Assign the Managed Identity Operator role to the AKS Service Principal
Refer to the previous step for which resource group to use and which identity to assign
```bash
az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
az role assignment create --role "Virtual Machine Contributor" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
```
7. Add a policy to the Key Vault so the managed identity can read secrets
```bash
az keyvault set-policy --name [your keyvault] --spn $clientId --secret-permissions get list
```
8. Enable AAD Pod Identity on AKS
```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml
# For AKS clusters, deploy the MIC and AKS add-on exception by running -
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml
```
9. Configure the Azure Identity and AzureIdentityBinding yaml
Save the following yaml as azure-identity-config.yaml:
```yaml
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: [your managed identity name]
spec:
type: 0
  ResourceID: [your managed identity id]
  ClientID: [your managed identity Client ID]
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: [your managed identity name]-identity-binding
spec:
  AzureIdentity: [your managed identity name]
  Selector: [your managed identity selector]
```
10. Deploy the azure-identity-config.yaml:
```bash
kubectl apply -f azure-identity-config.yaml
```
## Use Azure Key Vault secret store in Kubernetes mode with managed identities
In Kubernetes mode with managed identities, the Dapr sidecar authenticates to Azure Key Vault through the AAD pod identity configured in the previous section, so no service principal certificate is required.
1. Create azurekeyvault.yaml component file
The component yaml uses the name of your key vault and the Client ID of the managed identity to set up the secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
metadata:
- name: vaultName
value: [your_keyvault_name]
- name: spnClientId
value: [your_managed_identity_client_id]
```
2. Apply azurekeyvault.yaml component
```bash
kubectl apply -f azurekeyvault.yaml
```
3. Store the redisPassword as a secret into your keyvault
Store the Redis password as a secret named `redisPassword` in your keyvault:
```bash
az keyvault secret set --name redisPassword --vault-name [your_keyvault_name] --value "your redis passphrase"
```
4. Create redis.yaml state store component
This redis state store component refers to `azurekeyvault` component as a secretstore and uses the secret for `redisPassword` stored in Azure Key Vault.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis_url]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
auth:
secretStore: azurekeyvault
```
5. Apply redis statestore component
```bash
kubectl apply -f redis.yaml
```
6. Create node.yaml deployment
```yaml
kind: Service
apiVersion: v1
metadata:
name: nodeapp
namespace: default
labels:
app: node
spec:
selector:
app: node
ports:
- protocol: TCP
port: 80
targetPort: 3000
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeapp
namespace: default
labels:
app: node
spec:
replicas: 1
selector:
matchLabels:
app: node
template:
metadata:
labels:
app: node
        aadpodidbinding: [your managed identity selector]
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
spec:
containers:
- name: node
image: dapriosamples/hello-k8s-node
ports:
- containerPort: 3000
imagePullPolicy: Always
```
7. Apply the node app deployment
```bash
kubectl apply -f node.yaml
```
Make sure that `secretstores.azure.keyvault` is loaded successfully in the `daprd` sidecar log.
Here is the nodeapp log of the sidecar. Note: use the nodeapp name for your deployed container instance.
```bash
$ kubectl logs $(kubectl get po --selector=app=node -o jsonpath='{.items[*].metadata.name}') daprd
time="2020-02-05T09:15:03Z" level=info msg="starting Dapr Runtime -- version edge -- commit v0.3.0-rc.0-58-ge540a71-dirty"
time="2020-02-05T09:15:03Z" level=info msg="log level set to: info"
time="2020-02-05T09:15:03Z" level=info msg="kubernetes mode configured"
time="2020-02-05T09:15:03Z" level=info msg="app id: nodeapp"
time="2020-02-05T09:15:03Z" level=info msg="mTLS enabled. creating sidecar authenticator"
time="2020-02-05T09:15:03Z" level=info msg="trust anchors extracted successfully"
time="2020-02-05T09:15:03Z" level=info msg="authenticator created"
time="2020-02-05T09:15:03Z" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)"
time="2020-02-05T09:15:04Z" level=info msg="loaded component statestore (state.redis)"
...
2020-02-05 09:15:04.636348 I | redis: connecting to redis-master:6379
2020-02-05 09:15:04.639435 I | redis: connected to redis-master:6379 (localAddr: 10.244.0.11:38294, remAddr: 10.0.74.145:6379)
...
```
## References
- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity)
- [Secrets Component](../../concepts/secrets/README.md)

# Secret Store for Azure Key Vault
This document shows how to enable the Azure Key Vault secret store using the [Dapr Secrets Component](../../concepts/secrets/README.md) for Standalone and Kubernetes modes. The Dapr secret store component uses a service principal with certificate authorization to authenticate to Key Vault.
> **Note:** Find the Managed Identity for Azure Key Vault instructions [here](azure-keyvault-managed-identity.md).
## Contents
- [Prerequisites](#prerequisites)
- [Create an Azure Key Vault and a service principal](#create-an-azure-key-vault-and-a-service-principal)
- [Use Azure Key Vault secret store in Standalone mode](#use-azure-key-vault-secret-store-in-standalone-mode)
- [Use Azure Key Vault secret store in Kubernetes mode](#use-azure-key-vault-secret-store-in-kubernetes-mode)
- [References](#references)
## Prerequisites
- [Azure Subscription](https://azure.microsoft.com/en-us/free/)
- [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
## Create an Azure Key Vault and a service principal
This creates a new service principal and grants it permissions to the keyvault.
1. Login to Azure and set the default subscription
```bash
# Log in Azure
az login
# Set your subscription to the default subscription
az account set -s [your subscription id]
```
2. Create an Azure Key Vault in a region
```bash
az keyvault create --location [region] --name [your_keyvault] --resource-group [your resource group]
```
3. Create a service principal
Create a service principal with a new certificate and store the 1-year certificate inside [your keyvault]'s certificate vault.
> **Note** you can skip this step if you want to use an existing service principal for keyvault instead of creating new one
```bash
az ad sp create-for-rbac --name [your_service_principal_name] --create-cert --cert [certificate_name] --keyvault [your_keyvault] --skip-assignment --years 1
{
"appId": "a4f90000-0000-0000-0000-00000011d000",
"displayName": "[your_service_principal_name]",
"name": "http://[your_service_principal_name]",
"password": null,
"tenant": "34f90000-0000-0000-0000-00000011d000"
}
```
**Save both the appId and tenant values from the output; they are used in the next steps.**
4. Get the Object Id for [your_service_principal_name]
```bash
az ad sp show --id [service_principal_app_id]
{
...
"objectId": "[your_service_principal_object_id]",
"objectType": "ServicePrincipal",
...
}
```
5. Grant the service principal the GET permission to your Azure Key Vault
```bash
az keyvault set-policy --name [your_keyvault] --object-id [your_service_principal_object_id] --secret-permissions get
```
Now that your service principal has access to your keyvault, you are ready to configure the secret store component to use secrets stored in your keyvault to access other components securely.
6. Download the certificate in PFX format from your Azure Key Vault either using the Azure portal or the Azure CLI:
- **Using the Azure portal:**
Go to your key vault on the Azure portal and navigate to the *Certificates* tab under *Settings*. Find the certificate that was created during the service principal creation, named [certificate_name] and click on it.
Click *Download in PFX/PEM format* to download the certificate.
- **Using the Azure CLI:**
```bash
az keyvault secret download --vault-name [your_keyvault] --name [certificate_name] --encoding base64 --file [certificate_name].pfx
```
## Use Azure Key Vault secret store in Standalone mode
This section walks you through how to enable an Azure Key Vault secret store to store a password to securely access a Redis state store in Standalone mode.
1. Create a components directory in your application root
```bash
mkdir components
```
2. Copy downloaded PFX cert from your Azure Keyvault Certificate Vault into `./components` or a secure location in your local disk
3. Create a file called azurekeyvault.yaml in the components directory
Now create a Dapr azurekeyvault component. Create a file called azurekeyvault.yaml in the components directory with the content below:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
metadata:
- name: vaultName
value: [your_keyvault_name]
- name: spnTenantId
value: "[your_service_principal_tenant_id]"
- name: spnClientId
value: "[your_service_principal_app_id]"
- name: spnCertificateFile
value : "[pfx_certificate_file_local_path]"
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
4. Store redisPassword secret to keyvault
```bash
az keyvault secret set --name redisPassword --vault-name [your_keyvault_name] --value "your redis passphrase"
```
5. Create redis.yaml in the components directory with the content below
Create a statestore component file. This Redis component yaml shows how to use the `redisPassword` secret stored in an Azure Key Vault called azurekeyvault as a Redis connection password.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
key: redisPassword
auth:
secretStore: azurekeyvault
```
6. Run your app
You can check that the `secretstores.azure.keyvault` component is loaded and the Redis server connects successfully by looking at the log output of the `dapr run` command.
Here is the log when you run [HelloWorld sample](https://github.com/dapr/quickstarts/tree/master/hello-world) with Azure Key Vault secret store.
```bash
$ dapr run --app-id mynode --app-port 3000 --dapr-http-port 3500 node app.js
Starting Dapr with id mynode on port 3500
✅ You're up and running! Both Dapr and your app logs will appear here.
...
== DAPR == time="2019-09-25T17:57:37-07:00" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)"
== APP == Node App listening on port 3000!
== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component statestore (state.redis)"
== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component messagebus (pubsub.redis)"
...
== DAPR == 2019/09/25 17:57:38 redis: connecting to [redis]:6379
== DAPR == 2019/09/25 17:57:38 redis: connected to [redis]:6379 (localAddr: x.x.x.x:62137, remAddr: x.x.x.x:6379)
...
```
## Use Azure Key Vault secret store in Kubernetes mode
In Kubernetes mode, you store the certificate for the service principal into the Kubernetes Secret Store and then enable Azure Key Vault secret store with this certificate in Kubernetes secretstore.
1. Create a kubernetes secret using the following command
- **[pfx_certificate_file_local_path]** is the path of the PFX cert file you downloaded in [Create an Azure Key Vault and a service principal](#create-an-azure-key-vault-and-a-service-principal)
- **[your_k8s_spn_secret_name]** is the secret name in the Kubernetes secret store
```bash
kubectl create secret generic [your_k8s_spn_secret_name] --from-file=[pfx_certificate_file_local_path]
```
2. Create azurekeyvault.yaml component file
The component yaml refers to the Kubernetes secretstore using `auth` property and `secretKeyRef` refers to the certificate stored in Kubernetes secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: azurekeyvault
namespace: default
spec:
type: secretstores.azure.keyvault
metadata:
- name: vaultName
value: [your_keyvault_name]
- name: spnTenantId
value: "[your_service_principal_tenant_id]"
- name: spnClientId
value: "[your_service_principal_app_id]"
- name: spnCertificate
secretKeyRef:
name: [your_k8s_spn_secret_name]
key: [pfx_certificate_file_local_name]
auth:
secretStore: kubernetes
```
3. Apply azurekeyvault.yaml component
```bash
kubectl apply -f azurekeyvault.yaml
```
4. Store the redisPassword as a secret into your keyvault
Store the Redis password as a secret named `redisPassword` in your keyvault:
```bash
az keyvault secret set --name redisPassword --vault-name [your_keyvault_name] --value "your redis passphrase"
```
5. Create redis.yaml state store component
This redis state store component refers to `azurekeyvault` component as a secretstore and uses the secret for `redisPassword` stored in Azure Key Vault.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis_url]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
key: redisPassword
auth:
secretStore: azurekeyvault
```
6. Apply redis statestore component
```bash
kubectl apply -f redis.yaml
```
7. Deploy your app to Kubernetes
Make sure that `secretstores.azure.keyvault` is loaded successfully in the `daprd` sidecar log.
Here is the nodeapp log of [HelloWorld Kubernetes sample](https://github.com/dapr/quickstarts/tree/master/hello-kubernetes). Note: use the nodeapp name for your deployed container instance.
```bash
$ kubectl logs nodeapp-f7b7576f4-4pjrj daprd
time="2019-09-26T20:34:23Z" level=info msg="starting Dapr Runtime -- version 0.4.0-alpha.4 -- commit 876474b-dirty"
time="2019-09-26T20:34:23Z" level=info msg="log level set to: info"
time="2019-09-26T20:34:23Z" level=info msg="kubernetes mode configured"
time="2019-09-26T20:34:23Z" level=info msg="app id: nodeapp"
time="2019-09-26T20:34:24Z" level=info msg="loaded component azurekeyvault (secretstores.azure.keyvault)"
time="2019-09-26T20:34:25Z" level=info msg="loaded component statestore (state.redis)"
...
2019/09/26 20:34:25 redis: connecting to redis-master:6379
2019/09/26 20:34:25 redis: connected to redis-master:6379 (localAddr: 10.244.3.67:42686, remAddr: 10.0.1.26:6379)
...
```
## References
- [Azure CLI Keyvault CLI](https://docs.microsoft.com/en-us/cli/azure/keyvault?view=azure-cli-latest#az-keyvault-create)
- [Create an Azure service principal with Azure CLI](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
- [Secrets Component](../../concepts/secrets/README.md)

# Local secret store using environment variable (for Development)
This document shows how to enable an [environment variable](https://en.wikipedia.org/wiki/Environment_variable) secret store using the [Dapr Secrets Component](../../concepts/secrets/README.md) for development scenarios in Standalone mode. This Dapr secret store component reads locally defined environment variables and does not use authentication.
> Note, this approach to secret management is not recommended for production environments.
## How to enable environment variable secret store
To enable environment variable secret store, create a file with the following content in your components directory:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: envvar-secret-store
namespace: default
spec:
type: secretstores.local.env
metadata:
```
## How to use the environment variable secret store in other components
To use the environment variable secrets in other components, you can replace the `value` with `secretKeyRef` containing the name of your local environment variable like this:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis]:6379"
- name: redisPassword
secretKeyRef:
name: REDIS_PASSWORD
auth:
secretStore: envvar-secret-store
```
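Once the component is loaded, an application can also read a secret directly through the Dapr secrets API (`GET /v1.0/secrets/<store>/<key>`). A minimal sketch, assuming the sidecar listens on the default HTTP port 3500:

```python
import json
import urllib.request

def secret_url(store_name: str, key: str, port: int = 3500) -> str:
    # Dapr secrets API route: GET /v1.0/secrets/<store>/<key>
    return f"http://localhost:{port}/v1.0/secrets/{store_name}/{key}"

url = secret_url("envvar-secret-store", "REDIS_PASSWORD")
# With a running sidecar and REDIS_PASSWORD exported, the call below
# returns a JSON object such as {"REDIS_PASSWORD": "..."}:
# secret = json.load(urllib.request.urlopen(url))
print(url)
```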
## How to confirm the secrets are being used
To confirm the secrets are being used you can check the log output of the `dapr run` command. Here is the log when you run the [HelloWorld sample](https://github.com/dapr/quickstarts/tree/master/hello-world) with the environment variable secret store.
```bash
$ dapr run --app-id mynode --app-port 3000 --dapr-http-port 3500 node app.js
Starting Dapr with id mynode on port 3500
✅ You're up and running! Both Dapr and your app logs will appear here.
...
== DAPR == time="2019-09-25T17:57:37-07:00" level=info msg="loaded component envvar-secret-store (secretstores.local.env)"
== APP == Node App listening on port 3000!
== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component statestore (state.redis)"
== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component messagebus (pubsub.redis)"
...
== DAPR == 2019/09/25 17:57:38 redis: connecting to [redis]:6379
== DAPR == 2019/09/25 17:57:38 redis: connected to [redis]:6379 (localAddr: x.x.x.x:62137, remAddr: x.x.x.x:6379)
...
```
## Related Links
- [Secrets Component](../../concepts/secrets/README.md)
- [Secrets API](../../reference/api/secrets_api.md)
- [Secrets API Samples](https://github.com/dapr/quickstarts/blob/master/secretstore/README.md)

# Local secret store using file (for Development)
This document shows how to enable a file-based secret store using the [Dapr Secrets Component](../../concepts/secrets/README.md) for development scenarios in Standalone mode. This Dapr secret store component reads plain text JSON from a given file and does not use authentication.
> Note, this approach to secret management is not recommended for production environments.
## Contents
- [Create JSON file to hold the secrets](#create-json-file-to-hold-the-secrets)
- [Use file-based secret store in Standalone mode](#use-file-based-secret-store-in-standalone-mode)
- [References](#references)
## Create JSON file to hold the secrets
This creates a new JSON file to hold the secrets.
1. Create a JSON file (e.g. secrets.json) with the following contents:
```json
{
"redisPassword": "your redis passphrase"
}
```
## Use file-based secret store in Standalone mode
This section walks you through how to enable a Local secret store to store a password to access a Redis state store in Standalone mode.
1. Create a YAML component file in the components directory with the following content.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: local-secret-store
namespace: default
spec:
type: secretstores.local.file
metadata:
- name: secretsFile
value: [path to the JSON file]
- name: nestedSeparator
value: ":"
```
The `nestedSeparator` parameter is not required (the default value is ':'). It is used by the store when flattening the JSON hierarchy to a map. Given the following JSON:
```json
{
"redisPassword": "your redis password",
"connectionStrings": {
"sql": "your sql connection string",
"mysql": "your mysql connection string"
}
}
```
the store will load the file and create a map with the following key value pairs:
| flattened key | value |
| --- | --- |
|"redisPassword" | "your redis password" |
|"connectionStrings:sql" | "your sql connection string" |
|"connectionStrings:mysql"| "your mysql connection string" |
> Use the flattened key to access the secret.
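The flattening behavior can be sketched as follows (an illustrative re-implementation, not the actual Dapr code):

```python
def flatten(secrets: dict, sep: str = ":", prefix: str = "") -> dict:
    """Flatten a nested JSON object into a map of flattened-key -> value,
    joining nested keys with the nestedSeparator (':' by default)."""
    out = {}
    for key, value in secrets.items():
        full_key = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, sep, full_key))
        else:
            out[full_key] = value
    return out

secrets = {
    "redisPassword": "your redis password",
    "connectionStrings": {
        "sql": "your sql connection string",
        "mysql": "your mysql connection string",
    },
}
print(flatten(secrets))
```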
2. Use secrets in other components
To use the previously created secrets for example in a Redis state store component you can replace the `value` with `secretKeyRef` and a nested name of the key like this:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
auth:
secretStore: local-secret-store
```
## Confirm the secrets are being used
To confirm the secrets are being used you can check the log output of the `dapr run` command. Here is the log when you run the [HelloWorld sample](https://github.com/dapr/quickstarts/tree/master/hello-world) with the local file secret store.
```bash
$ dapr run --app-id mynode --app-port 3000 --dapr-http-port 3500 node app.js
Starting Dapr with id mynode on port 3500
✅ You're up and running! Both Dapr and your app logs will appear here.
...
== DAPR == time="2019-09-25T17:57:37-07:00" level=info msg="loaded component local-secret-store (secretstores.local.file)"
== APP == Node App listening on port 3000!
== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component statestore (state.redis)"
== DAPR == time="2019-09-25T17:57:38-07:00" level=info msg="loaded component messagebus (pubsub.redis)"
...
== DAPR == 2019/09/25 17:57:38 redis: connecting to [redis]:6379
== DAPR == 2019/09/25 17:57:38 redis: connected to [redis]:6379 (localAddr: x.x.x.x:62137, remAddr: x.x.x.x:6379)
...
```
## Related Links
- [Secrets Component](../../concepts/secrets/README.md)
- [Secrets API](../../reference/api/secrets_api.md)
- [Secrets API Samples](https://github.com/dapr/quickstarts/blob/master/secretstore/README.md)

# Secret Store for GCP Secret Manager
This document shows how to enable the GCP Secret Manager secret store using the [Dapr Secrets Component](../../concepts/secrets/README.md) for self-hosted and Kubernetes modes.
## Create a GCP Secret Manager instance
Set up GCP Secret Manager using the GCP documentation: https://cloud.google.com/secret-manager/docs/quickstart.
## Create the component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: gcpsecretmanager
namespace: default
spec:
type: secretstores.gcp.secretmanager
metadata:
- name: type
value: service_account
- name: project_id
value: project_111
- name: private_key_id
value: *************
- name: client_email
value: name@domain.com
- name: client_id
value: '1111111111111111'
- name: auth_uri
value: https://accounts.google.com/o/oauth2/auth
- name: token_uri
value: https://oauth2.googleapis.com/token
- name: auth_provider_x509_cert_url
value: https://www.googleapis.com/oauth2/v1/certs
- name: client_x509_cert_url
value: https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com
- name: private_key
value: PRIVATE KEY
```
To deploy in Kubernetes, save the file above to `gcp_secret_manager.yaml` and then run:
```bash
kubectl apply -f gcp_secret_manager.yaml
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## GCP Secret Manager reference example
This example shows you how to take the Redis password from the GCP Secret Manager secret store.
Here, you created a secret named `redisPassword` in GCP Secret Manager. Note it's important to set it as both the `name` and `key` properties.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
key: redisPassword
auth:
secretStore: gcpsecretmanager
```

# Secret Store for Hashicorp Vault
This document shows how to enable Hashicorp Vault secret store using [Dapr Secrets Component](../../concepts/secrets/README.md) for Standalone and Kubernetes mode.
## Create a Hashicorp Vault instance
Set up Hashicorp Vault using the Vault documentation: https://www.vaultproject.io/docs/install/index.html.
For Kubernetes, you can use the Helm Chart: <https://github.com/hashicorp/vault-helm>.
## Create the Vault component
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: vault
namespace: default
spec:
type: secretstores.hashicorp.vault
metadata:
- name: vaultAddr
value: [vault_address] # Optional. Default: "https://127.0.0.1:8200"
- name: caCert # Optional. This or caPath or caPem
value: "[ca_cert]"
- name: caPath # Optional. This or CaCert or caPem
value: "[path_to_ca_cert_file]"
- name: caPem # Optional. This or CaCert or CaPath
value : "[encoded_ca_cert_pem]"
- name: skipVerify # Optional. Default: false
value : "[skip_tls_verification]"
- name: tlsServerName # Optional.
value : "[tls_config_server_name]"
- name: vaultTokenMountPath # Required. Path to token file.
value : "[path_to_file_containing_token]"
- name: vaultKVPrefix # Optional. Default: "dapr"
value : "[vault_prefix]"
```
To deploy in Kubernetes, save the file above to `vault.yaml` and then run:
```bash
kubectl apply -f vault.yaml
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## Vault reference example
This example shows you how to take the Redis password from the Vault secret store.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: "[redis]:6379"
- name: redisPassword
secretKeyRef:
name: redisPassword
key: redisPassword
auth:
secretStore: vault
```

# Secret Store for Kubernetes
Kubernetes has a built-in secrets store which Dapr components can use to fetch secrets from.
No special configuration is needed to set up the Kubernetes secret store.
Please refer to [this](../../concepts/secrets/README.md) document for information and examples on how to fetch secrets from Kubernetes using Dapr.

# Setup a state store component
Dapr integrates with existing databases to provide apps with state management capabilities for CRUD operations, transactions and more.
Dapr supports the configuration of multiple, named, state store components *per application*.
State stores are extensible and can be found in the [components-contrib repo](https://github.com/dapr/components-contrib).
A state store in Dapr is described using a `Component` file:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.<DATABASE>
  metadata:
  - name: <KEY>
    value: <VALUE>
  - name: <KEY>
    value: <VALUE>
  ...
The type of database is determined by the `type` field, and things like connection strings and other metadata are put in the `.metadata` section.
Even though you can put plain text secrets in there, it is recommended you use a [secret store](../../concepts/secrets/README.md).
## Running locally
When running locally with the Dapr CLI, a component file for a Redis state store will be automatically created in a `components` directory in your current working directory.
You can make changes to this file the way you see fit, whether to change connection values or replace it with a different store.
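For reference, the generated Redis component file typically looks similar to the following sketch (the host and password values depend on your local Redis setup):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: default
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```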
## Running in Kubernetes
Dapr uses a Kubernetes Operator to update the sidecars running in the cluster with different components.
To setup a state store in Kubernetes, use `kubectl` to apply the component file:
```bash
kubectl apply -f statestore.yaml
```
## Related Topics
* [State management concepts](../../concepts/state-management/README.md)
* [State management API specification](../../reference/api/state_api.md)
## Reference
* [Setup Aerospike](./setup-aerospike.md)
* [Setup Cassandra](./setup-cassandra.md)
* [Setup Cloudstate](./setup-cloudstate.md)
* [Setup Couchbase](./setup-couchbase.md)
* [Setup etcd](./setup-etcd.md)
* [Setup Hashicorp Consul](./setup-consul.md)
* [Setup Hazelcast](./setup-hazelcast.md)
* [Setup Memcached](./setup-memcached.md)
* [Setup MongoDB](./setup-mongodb.md)
* [Setup PostgreSQL](./setup-postgresql.md)
* [Setup Redis](./setup-redis.md)
* [Setup Zookeeper](./setup-zookeeper.md)
* [Setup Azure CosmosDB](./setup-azure-cosmosdb.md)
* [Setup Azure SQL Server](./setup-sqlserver.md)
* [Setup Azure Table Storage](./setup-azure-tablestorage.md)
* [Setup Azure Blob Storage](./setup-azure-blobstorage.md)
* [Setup Google Cloud Firestore (Datastore mode)](./setup-firestore.md)
* [Supported State Stores](./supported-state-stores.md)

@@ -0,0 +1,67 @@
# Setup Aerospike
## Locally
You can run Aerospike locally using Docker:
```
docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike
```
You can then interact with the server using `localhost:3000`.
## Kubernetes
The easiest way to install Aerospike on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/aerospike):
```
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-aerospike --namespace aerospike stable/aerospike
```
This will install Aerospike into the `aerospike` namespace.
To interact with Aerospike, find the service with: `kubectl get svc aerospike -n aerospike`.
For example, if installing using the example above, the Aerospike host address would be:
`aerospike-my-aerospike.aerospike.svc.cluster.local:3000`
## Create a Dapr component
The next step is to create a Dapr component for Aerospike.
Create the following YAML file named `aerospike.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.Aerospike
  metadata:
  - name: hosts
    value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of hosts. Example: "aerospike:3000,aerospike2:3000"
  - name: namespace
    value: <REPLACE-WITH-NAMESPACE> # Required. The aerospike namespace.
  - name: set
    value: <REPLACE-WITH-SET> # Optional.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the Aerospike state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f aerospike.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,95 @@
# Setup Azure Blob Storage
## Creating Azure Storage account
[Follow the instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a container for Dapr to use, you can do so beforehand. However, the Blob Storage state provider will create one for you automatically if it doesn't exist.
In order to setup Azure Blob Storage as a state store, you will need the following properties:
* **AccountName**: The storage account name. For example: **mystorageaccount**.
* **AccountKey**: Primary or secondary storage key.
* **ContainerName**: The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist.
## Create a Dapr component
The next step is to create a Dapr component for Azure Blob Storage.
Create the following YAML file named `azureblob.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.azure.blobstorage
  metadata:
  - name: accountName
    value: <REPLACE-WITH-ACCOUNT-NAME>
  - name: accountKey
    value: <REPLACE-WITH-ACCOUNT-KEY>
  - name: containerName
    value: <REPLACE-WITH-CONTAINER-NAME>
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
The following example uses the Kubernetes secret store to retrieve the secrets:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.azure.blobstorage
  metadata:
  - name: accountName
    value: <REPLACE-WITH-ACCOUNT-NAME>
  - name: accountKey
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  - name: containerName
    value: <REPLACE-WITH-CONTAINER-NAME>
## Apply the configuration
### In Kubernetes
To apply Azure Blob Storage state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f azureblob.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
This state store creates a blob file in the container and puts raw state inside it.
For example, the following operation coming from service called `myservice`
```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'
```
will create a blob file in the container, with the key as the file name and the value as the file contents.
## Concurrency
Azure Blob Storage state concurrency is achieved by using `ETag`s according to [the official documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-concurrency#managing-concurrency-in-blob-storage).

@@ -0,0 +1,149 @@
# Setup Azure CosmosDB
## Creating an Azure CosmosDB account
[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr can use it.
**Note: The partition key for the collection must be named `/partitionKey` (this is case-sensitive).**
In order to setup CosmosDB as a state store, you need the following properties:
* **URL**: The CosmosDB URL. For example: `https://******.documents.azure.com:443/`
* **Master Key**: The key to authenticate to the CosmosDB account
* **Database**: The name of the database
* **Collection**: The name of the collection
## Create a Dapr component
The next step is to create a Dapr component for CosmosDB.
Create the following YAML file named `cosmosdb.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value: <REPLACE-WITH-URL>
  - name: masterKey
    value: <REPLACE-WITH-MASTER-KEY>
  - name: database
    value: <REPLACE-WITH-DATABASE>
  - name: collection
    value: <REPLACE-WITH-COLLECTION>
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
Here is an example of what the values could look like:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value: https://accountname.documents.azure.com:443
  - name: masterKey
    value: thekey==
  - name: database
    value: db1
  - name: collection
    value: c1
The following example uses the Kubernetes secret store to retrieve the secrets:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value: <REPLACE-WITH-URL>
  - name: masterKey
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  - name: database
    value: <REPLACE-WITH-DATABASE>
  - name: collection
    value: <REPLACE-WITH-COLLECTION>
If you wish to use CosmosDB as an actor state store, append the following to the YAML:
```yaml
  - name: actorStateStore
    value: "true"
```
## Apply the configuration
### In Kubernetes
To apply the CosmosDB state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f cosmosdb.yaml
```
### Running locally
To run locally, create a YAML file described above and provide the path to the `dapr run` command with the flag `--components-path`. See [this](https://github.com/dapr/cli#use-non-default-components-path) or run `dapr run --help` for more information on the path.
## Data format
To use the CosmosDB state store, your data must be sent to Dapr already JSON-serialized. Having it merely JSON *serializable* is not sufficient.
If you are using the Dapr SDKs (e.g. https://github.com/dapr/dotnet-sdk), the SDK serializes your data to JSON for you.
For examples see the curl operations in the [Partition keys](#partition-keys) section.
## Partition keys
For **non-actor state** operations, the Azure CosmosDB state store will use the `key` property provided in the requests to the Dapr API to determine the CosmosDB partition key. This can be overridden by specifying a metadata field in the request with a key of `partitionKey` and a value of the desired partition.
The following operation will use `nihilus` as the partition key value sent to CosmosDB:
```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'
```
For **non-actor** state operations, if you want to control the CosmosDB partition, you can specify it in the metadata. Reusing the example above, here is how to store it under the `mypartition` partition:
```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth",
          "metadata": {
            "partitionKey": "mypartition"
          }
        }
      ]'
```
For **actor** state operations, the partition key will be generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor will always end up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in CosmosDB the items in a transaction must be on the same partition.

@@ -0,0 +1,103 @@
# Setup Azure Table Storage
## Creating Azure Storage account
[Follow the instructions](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal) from the Azure documentation on how to create an Azure Storage Account.
If you wish to create a table for Dapr to use, you can do so beforehand. However, the Table Storage state provider will create one for you automatically if it doesn't exist.
In order to setup Azure Table Storage as a state store, you will need the following properties:
* **AccountName**: The storage account name. For example: **mystorageaccount**.
* **AccountKey**: Primary or secondary storage key.
* **TableName**: The name of the table to be used for Dapr state. The table will be created for you if it doesn't exist.
## Create a Dapr component
The next step is to create a Dapr component for Azure Table Storage.
Create the following YAML file named `azuretable.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.azure.tablestorage
  metadata:
  - name: accountName
    value: <REPLACE-WITH-ACCOUNT-NAME>
  - name: accountKey
    value: <REPLACE-WITH-ACCOUNT-KEY>
  - name: tableName
    value: <REPLACE-WITH-TABLE-NAME>
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
The following example uses the Kubernetes secret store to retrieve the secrets:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.azure.tablestorage
  metadata:
  - name: accountName
    value: <REPLACE-WITH-ACCOUNT-NAME>
  - name: accountKey
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  - name: tableName
    value: <REPLACE-WITH-TABLE-NAME>
## Apply the configuration
### In Kubernetes
To apply Azure Table Storage state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f azuretable.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## Partitioning
The Azure Table Storage state store uses the `key` property provided in requests to the Dapr API to determine the row key, and the service name as the partition key. This provides the best performance, as each service type stores state in its own table partition.
This state store creates a column called `Value` in the table storage and puts raw state inside it.
For example, the following operation coming from service called `myservice`
```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json" \
  -d '[
        {
          "key": "nihilus",
          "value": "darth"
        }
      ]'
```
will create the following record in a table:
| PartitionKey | RowKey | Value |
| ------------ | ------- | ----- |
| myservice | nihilus | darth |
## Concurrency
Azure Table Storage state concurrency is achieved by using `ETag`s according to [the official documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-concurrency#managing-concurrency-in-table-storage).

@@ -0,0 +1,100 @@
# Setup Cassandra
## Locally
You can run Cassandra locally with the Datastax Docker image:
```
docker run -e DS_LICENSE=accept --memory 4g --name my-dse -d datastax/dse-server -g -s -k
```
You can then interact with the server using `localhost:9042`.
## Kubernetes
The easiest way to install Cassandra on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/cassandra):
```
kubectl create namespace cassandra
helm install cassandra incubator/cassandra --namespace cassandra
```
This will install Cassandra into the `cassandra` namespace by default.
To interact with Cassandra, find the service with: `kubectl get svc -n cassandra`.
For example, if installing using the example above, the Cassandra DNS would be:
`cassandra.cassandra.svc.cluster.local`
## Create a Dapr component
The next step is to create a Dapr component for Cassandra.
Create the following YAML file named `cassandra.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.cassandra
  metadata:
  - name: hosts
    value: <REPLACE-WITH-COMMA-DELIMITED-HOSTS> # Required. Example: cassandra.cassandra.svc.cluster.local
  - name: username
    value: <REPLACE-WITH-USERNAME> # Optional. default: ""
  - name: password
    value: <REPLACE-WITH-PASSWORD> # Optional. default: ""
  - name: consistency
    value: <REPLACE-WITH-CONSISTENCY> # Optional. default: "All"
  - name: table
    value: <REPLACE-WITH-TABLE> # Optional. default: "items"
  - name: keyspace
    value: <REPLACE-WITH-KEYSPACE> # Optional. default: "dapr"
  - name: protoVersion
    value: <REPLACE-WITH-PROTO-VERSION> # Optional. default: "4"
  - name: replicationFactor
    value: <REPLACE-WITH-REPLICATION-FACTOR> # Optional. default: "1"
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
The following example uses the Kubernetes secret store to retrieve the username and password:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.cassandra
  metadata:
  - name: hosts
    value: <REPLACE-WITH-HOSTS>
  - name: username
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  - name: password
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  ...
```
## Apply the configuration
### In Kubernetes
To apply the Cassandra state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f cassandra.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,154 @@
# Setup Cloudstate
The Cloudstate-Dapr integration is unique in that it enables developers to achieve high-throughput, low-latency scenarios by running Cloudstate as a sidecar *next* to Dapr, keeping the state near the compute unit for optimal performance while providing replication between multiple instances that can be safely scaled up and down. This is because Cloudstate forms an Akka cluster between its sidecars, with replicated in-memory entities.
Dapr leverages Cloudstate's CRDT capabilities with last-write-wins semantics.
## Kubernetes
To install Cloudstate on your Kubernetes cluster, run the following commands:
```
kubectl create namespace cloudstate
kubectl apply -n cloudstate -f https://github.com/cloudstateio/cloudstate/releases/download/v0.5.0/cloudstate-0.5.0.yaml
```
This will install Cloudstate into the `cloudstate` namespace with version `0.5.0`.
## Create a Dapr component
The next step is to create a Dapr component for Cloudstate.
Create the following YAML file named `cloudstate.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: cloudstate
  namespace: default
spec:
  type: state.cloudstate
  metadata:
  - name: host
    value: "localhost:8013"
  - name: serverPort
    value: "8080"
The `metadata.host` field specifies the address for the Cloudstate API. Since Cloudstate will be running as an additional sidecar in the pod, you can reach it via `localhost` with the default port of `8013`.
The `metadata.serverPort` field specifies the port to be opened in Dapr for Cloudstate to callback to. This can be any free port that is not used by either your application or Dapr.
## Apply the configuration
### In Kubernetes
To apply the Cloudstate state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f cloudstate.yaml
```
## Running the Cloudstate sidecar alongside Dapr
The next example shows you how to manually inject a Cloudstate sidecar into a Dapr-enabled deployment:
*Notice that the `HTTP_PORT` value for the `cloudstate-sidecar` container is the port used in the `host` field of the Cloudstate component YAML.*
```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
  name: test-dapr-app
  namespace: default
  labels:
    app: test-dapr-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-dapr-app
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "testapp"
      labels:
        app: test-dapr-app
    spec:
      containers:
      - name: user-container
        image: nginx
      - name: cloudstate-sidecar
        env:
        - name: HTTP_PORT
          value: "8013"
        - name: USER_FUNCTION_PORT
          value: "8080"
        - name: REMOTING_PORT
          value: "2552"
        - name: MANAGEMENT_PORT
          value: "8558"
        - name: SELECTOR_LABEL_VALUE
          value: test-dapr-app
        - name: SELECTOR_LABEL
          value: app
        - name: REQUIRED_CONTACT_POINT_NR
          value: "1"
        - name: JAVA_OPTS
          value: -Xms256m -Xmx256m
        image: cloudstateio/cloudstate-proxy-no-store:0.5.0
        livenessProbe:
          httpGet:
            path: /alive
            port: 8558
            scheme: HTTP
          initialDelaySeconds: 2
          failureThreshold: 20
          periodSeconds: 2
        readinessProbe:
          httpGet:
            path: /ready
            port: 8558
            scheme: HTTP
          initialDelaySeconds: 2
          failureThreshold: 20
          periodSeconds: 10
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 400m
            memory: 512Mi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cloudstate-pod-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cloudstate-read-pods-default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cloudstate-pod-reader
subjects:
- kind: ServiceAccount
  name: default

@@ -0,0 +1,91 @@
# Setup Consul
## Locally
You can run Consul locally using Docker:
```
docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul
```
You can then interact with the server using `localhost:8500`.
## Kubernetes
The easiest way to install Consul on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/consul):
```
helm install consul stable/consul
```
This will install Consul into the `default` namespace.
To interact with Consul, find the service with: `kubectl get svc consul`.
For example, if installing using the example above, the Consul host address would be:
`consul.default.svc.cluster.local:8500`
## Create a Dapr component
The next step is to create a Dapr component for Consul.
Create the following YAML file named `consul.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.consul
  metadata:
  - name: datacenter
    value: <REPLACE-WITH-DATA-CENTER> # Required. Example: dc1
  - name: httpAddr
    value: <REPLACE-WITH-CONSUL-HTTP-ADDRESS> # Required. Example: "consul.default.svc.cluster.local:8500"
  - name: aclToken
    value: <REPLACE-WITH-ACL-TOKEN> # Optional. default: ""
  - name: scheme
    value: <REPLACE-WITH-SCHEME> # Optional. default: "http"
  - name: keyPrefixPath
    value: <REPLACE-WITH-KEY-PREFIX-PATH> # Optional. default: ""
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
The following example uses the Kubernetes secret store to retrieve the acl token:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.consul
  metadata:
  - name: datacenter
    value: <REPLACE-WITH-DATACENTER>
  - name: httpAddr
    value: <REPLACE-WITH-HTTP-ADDRESS>
  - name: aclToken
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  ...
## Apply the configuration
### In Kubernetes
To apply the Consul state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f consul.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,63 @@
# Setup Couchbase
## Locally
You can run Couchbase locally using Docker:
```
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase
```
You can then interact with the server using `localhost:8091` and start the server setup.
## Kubernetes
The easiest way to install Couchbase on Kubernetes is by using the [Helm chart](https://github.com/couchbase-partners/helm-charts#deploying-for-development-quick-start):
```
helm repo add couchbase https://couchbase-partners.github.io/helm-charts/
helm install couchbase/couchbase-operator
helm install couchbase/couchbase-cluster
```
## Create a Dapr component
The next step is to create a Dapr component for Couchbase.
Create the following YAML file named `couchbase.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.couchbase
  metadata:
  - name: couchbaseURL
    value: <REPLACE-WITH-URL> # Required. Example: "http://localhost:8091"
  - name: username
    value: <REPLACE-WITH-USERNAME> # Required.
  - name: password
    value: <REPLACE-WITH-PASSWORD> # Required.
  - name: bucketName
    value: <REPLACE-WITH-BUCKET> # Required.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
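Following the same `secretKeyRef` pattern used by the other state store components in this section, a sketch of the Couchbase component referencing the password from a Kubernetes secret might look like this (the secret name and key are placeholders):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.couchbase
  metadata:
  - name: couchbaseURL
    value: <REPLACE-WITH-URL>
  - name: username
    value: <REPLACE-WITH-USERNAME>
  - name: password
    secretKeyRef:
      name: <KUBERNETES-SECRET-NAME>
      key: <KUBERNETES-SECRET-KEY>
  - name: bucketName
    value: <REPLACE-WITH-BUCKET>
```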
## Apply the configuration
### In Kubernetes
To apply the Couchbase state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f couchbase.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,67 @@
# Setup etcd
## Locally
You can run etcd locally using Docker:
```
docker run -d --name etcd bitnami/etcd
```
You can then interact with the server using `localhost:2379`.
## Kubernetes
The easiest way to install etcd on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/etcd):
```
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install etcd incubator/etcd
```
This will install etcd into the `default` namespace.
To interact with etcd, find the service with: `kubectl get svc etcd-etcd`.
For example, if installing using the example above, the etcd host address would be:
`etcd-etcd.default.svc.cluster.local:2379`
## Create a Dapr component
The next step is to create a Dapr component for etcd.
Create the following YAML file named `etcd.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.etcd
  metadata:
  - name: endpoints
    value: <REPLACE-WITH-COMMA-DELIMITED-ENDPOINTS> # Required. Example: "etcd-etcd.default.svc.cluster.local:2379"
  - name: dialTimeout
    value: <REPLACE-WITH-DIAL-TIMEOUT> # Required. Example: "5s"
  - name: operationTimeout
    value: <REPLACE-WITH-OPERATION-TIMEOUT> # Optional. default: "10s"
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
## Apply the configuration
### In Kubernetes
To apply the etcd state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f etcd.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,67 @@
# Setup Google Cloud Firestore (Datastore mode)
## Locally
You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator).
You can then interact with the server using `localhost:8081`.
## Google Cloud
Follow the instructions [here](https://cloud.google.com/datastore/docs/quickstart) to get started with setting up Firestore in Google Cloud.
## Create a Dapr component
The next step is to create a Dapr component for Firestore.
Create the following YAML file named `firestore.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.gcp.firestore
  metadata:
  - name: type
    value: <REPLACE-WITH-CREDENTIALS-TYPE> # Required. Example: "serviceaccount"
  - name: project_id
    value: <REPLACE-WITH-PROJECT-ID> # Required.
  - name: private_key_id
    value: <REPLACE-WITH-PRIVATE-KEY-ID> # Required.
  - name: private_key
    value: <REPLACE-WITH-PRIVATE-KEY> # Required.
  - name: client_email
    value: <REPLACE-WITH-CLIENT-EMAIL> # Required.
  - name: client_id
    value: <REPLACE-WITH-CLIENT-ID> # Required.
  - name: auth_uri
    value: <REPLACE-WITH-AUTH-URI> # Required.
  - name: token_uri
    value: <REPLACE-WITH-TOKEN-URI> # Required.
  - name: auth_provider_x509_cert_url
    value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Required.
  - name: client_x509_cert_url
    value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Required.
  - name: entity_kind
    value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
## Apply the configuration
### In Kubernetes
To apply the Firestore state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f firestore.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,53 @@
# Setup Hazelcast
## Locally
You can run Hazelcast locally using Docker:
```
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=127.0.0.1:5701" -p 5701:5701 hazelcast/hazelcast
```
You can then interact with the server using `127.0.0.1:5701`.
## Kubernetes
The easiest way to install Hazelcast on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/hazelcast); follow the instructions in the chart's README to deploy it to your cluster.
## Create a Dapr component
The next step is to create a Dapr component for Hazelcast.
Create the following YAML file named `hazelcast.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.hazelcast
  metadata:
  - name: hazelcastServers
    value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of servers. Example: "hazelcast:3000,hazelcast2:3000"
  - name: hazelcastMap
    value: <REPLACE-WITH-MAP> # Required. Hazelcast map configuration.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the Hazelcast state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f hazelcast.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,66 @@
# Setup Memcached
## Locally
You can run Memcached locally using Docker:
```
docker run --name my-memcache -d memcached
```
You can then interact with the server using `localhost:11211`.
## Kubernetes
The easiest way to install Memcached on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/memcached):
```
helm install memcached stable/memcached
```
This will install Memcached into the `default` namespace.
To interact with Memcached, find the service with: `kubectl get svc memcached`.
For example, if installing using the example above, the Memcached host address would be:
`memcached.default.svc.cluster.local:11211`
## Create a Dapr component
The next step is to create a Dapr component for Memcached.
Create the following YAML file named `memcached.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.memcached
  metadata:
  - name: hosts
    value: <REPLACE-WITH-COMMA-DELIMITED-ENDPOINTS> # Required. Example: "memcached.default.svc.cluster.local:11211"
  - name: maxIdleConnections
    value: <REPLACE-WITH-MAX-IDLE-CONNECTIONS> # Optional. default: "2"
  - name: timeout
    value: <REPLACE-WITH-TIMEOUT> # Optional. default: "1000ms"
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
## Apply the configuration
### In Kubernetes
To apply the Memcached state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f memcached.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

@@ -0,0 +1,104 @@
# Setup MongoDB
## Locally
You can run MongoDB locally using Docker:
```
docker run --name some-mongo -d mongo
```
You can then interact with the server using `localhost:27017`.
## Kubernetes
The easiest way to install MongoDB on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/stable/mongodb):
```
helm install mongo stable/mongodb
```
This will install MongoDB into the `default` namespace.
To interact with MongoDB, find the service with: `kubectl get svc mongo-mongodb`.
For example, if installing using the example above, the MongoDB host address would be:
`mongo-mongodb.default.svc.cluster.local:27017`
Follow the on-screen instructions to get the root password for MongoDB.
The username will be `admin` by default.
## Create a Dapr component
The next step is to create a Dapr component for MongoDB.
Create the following YAML file named `mongodb.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.mongodb
metadata:
- name: host
value: <REPLACE-WITH-HOST> # Required. Example: "mongo-mongodb.default.svc.cluster.local:27017"
- name: username
value: <REPLACE-WITH-USERNAME> # Optional. Example: "admin"
- name: password
value: <REPLACE-WITH-PASSWORD> # Optional.
- name: databaseName
value: <REPLACE-WITH-DATABASE-NAME> # Optional. default: "daprStore"
- name: collectionName
value: <REPLACE-WITH-COLLECTION-NAME> # Optional. default: "daprCollection"
- name: writeconcern
value: <REPLACE-WITH-WRITE-CONCERN> # Optional.
- name: readconcern
value: <REPLACE-WITH-READ-CONCERN> # Optional.
- name: operationTimeout
value: <REPLACE-WITH-OPERATION-TIMEOUT> # Optional. default: "5s"
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
The following example uses the Kubernetes secret store to retrieve the username and password:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.mongodb
metadata:
- name: host
value: <REPLACE-WITH-HOST>
- name: username
secretKeyRef:
name: <KUBERNETES-SECRET-NAME>
key: <KUBERNETES-SECRET-KEY>
- name: password
secretKeyRef:
name: <KUBERNETES-SECRET-NAME>
key: <KUBERNETES-SECRET-KEY>
...
```
## Apply the configuration
### In Kubernetes
To apply the MongoDB state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f mongodb.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
# Setup PostgreSQL
This article provides guidance on configuring a PostgreSQL state store.
## Create a PostgreSQL Store
Dapr can use any PostgreSQL instance. If you already have a running instance of PostgreSQL, move on to the [Create a Dapr component](#create-a-dapr-component) section.
1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command:
This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of "postgres".
```bash
docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
```
2. Create a database for state data.
Either the default "postgres" database can be used, or create a new database for storing state data.
To create a new database in PostgreSQL, run the following SQL command:
```SQL
create database dapr_test;
```
## Create a Dapr component
Create a file called `postgres.yaml`, paste the following and replace the `<CONNECTION STRING>` value with your connection string. The connection string is a standard PostgreSQL connection string. For example, `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html), specifically Keyword/Value Connection Strings, for information on how to define a connection string.
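The keyword/value form can also be assembled from individual settings; here is a small sketch using the illustrative values from this page (values containing spaces or quotes would additionally need single-quoting, which is omitted here):

```python
# Illustrative settings only -- substitute your own host, user, and password.
settings = {
    "host": "localhost",
    "user": "postgres",
    "password": "example",
    "port": "5432",
    "connect_timeout": "10",
    "database": "dapr_test",
}

# libpq keyword/value connection strings are space-separated key=value pairs.
connection_string = " ".join(f"{key}={value}" for key, value in settings.items())
print(connection_string)
# → host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test
```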
If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
spec:
type: state.postgresql
metadata:
- name: connectionString
value: "<CONNECTION STRING>"
- name: actorStateStore
value: "true"
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the PostgreSQL state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f postgres.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
# Setup Redis
## Creating a Redis Store
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [Configuration](#configuration) section.
### Creating a Redis Cache in your Kubernetes Cluster using Helm
We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm](https://github.com/helm/helm#install).
1. Install Redis into your cluster. Note that we're explicitly setting an image tag to get a version greater than 5, which is what Dapr's pub/sub functionality requires. If you intend to use Redis only as a state store (and not for pub/sub), you do not have to set the image version.
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
```
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisHost
value: redis-master:6379
```
4. Next, we'll get our Redis password; the steps differ slightly depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your Redis password in a text file called `password.txt`. Copy the password and delete the two files.
- **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the password from the output.
Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisPassword
value: lhDOkwTlp0
```
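As a cross-platform alternative to the OS-specific commands above, the base64-encoded value printed by `kubectl get secret` can be decoded with Python's standard library (the encoded value below is illustrative):

```python
import base64

def decode_k8s_secret(encoded: str) -> str:
    """Decode a base64-encoded value as stored in a Kubernetes secret."""
    return base64.b64decode(encoded).decode("utf-8")

# Illustrative value -- paste the output of the `kubectl get secret` command above.
encoded_password = base64.b64encode(b"lhDOkwTlp0").decode("ascii")
print(decode_k8s_secret(encoded_password))  # → lhDOkwTlp0
```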
### Creating an Azure Managed Redis Cache
**Note**: this approach requires having an Azure Subscription.
1. Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Cache for Redis creation flow. Log in if necessary.
2. Fill out the necessary information and **check the "Unblock port 6379" box**, which will allow us to persist state without SSL.
3. Click "Create" to kick off deployment of your Redis instance.
4. Once your instance is created, you'll need to grab the Host name (FQDN) and your access key.
- For the host name, navigate to the resource's "Overview" and copy "Host name".
- For the access key, navigate to "Access Keys" under "Settings" and copy your key.
5. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). Set the `redisHost` key to `[HOST NAME FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets.
> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro), which were introduced in Redis 5.0 and aren't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence.
### Other ways to create a Redis Database
- [AWS Redis](https://aws.amazon.com/redis/)
- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
## Configuration
To set up Redis, you need to create a component of type `state.redis`. The following YAML file demonstrates how to define it.
### Configuring Redis for State Persistence and Retrieval
**TLS:** If the Redis instance supports TLS with public certificates, TLS can be enabled or disabled by setting `enableTLS` to `true` or `false`.
**Failover:** When set to `true`, enables the failover feature. The `redisHost` should be the Sentinel host address. See the [Redis Sentinel documentation](https://redis.io/topics/sentinel).
**Note:** The YAML files below illustrate secret management in plain text. In a production-grade application, follow the [secret management](../../concepts/secrets/README.md) instructions to securely manage your secrets.
Create a file called `redis.yaml` and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: statestore
namespace: default
spec:
type: state.redis
metadata:
- name: redisHost
value: <HOST>
- name: redisPassword
value: <PASSWORD>
- name: enableTLS
value: <bool> # Optional. Allowed: true, false.
- name: failover
value: <bool> # Optional. Allowed: true, false.
```
## Apply the configuration
### Kubernetes
```
kubectl apply -f redis.yaml
```
### Standalone
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
# Setup RethinkDB
## Locally
You can run [RethinkDB](https://rethinkdb.com/) locally using Docker:
```
docker run --name rethinkdb -v "$PWD:/rethinkdb-data" -d rethinkdb:latest
```
To connect to the admin UI:
```shell
open "http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' rethinkdb):8080"
```
## Create a Dapr component
The next step is to create a Dapr component for RethinkDB.
Create the following YAML file named `rethinkdb.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.rethinkdb
metadata:
- name: address
value: <REPLACE-RETHINKDB-ADDRESS> # Required, e.g. 127.0.0.1:28015 or rethinkdb.default.svc.cluster.local:28015
- name: database
value: <REPLACE-RETHINKDB-DB-NAME> # Required, e.g. dapr (alpha-numerics only)
- name: table
value: # Optional
- name: username
value: # Optional
- name: password
value: # Optional
- name: archive
value: # Optional (whether or not the store should keep an archive table of all state changes)
```
RethinkDB state store supports transactions, so it can be used to persist Dapr actor state. By default, the state is stored in a table named `daprstate` in the specified database.
Additionally, if the optional `archive` metadata is set to `true`, on each state change the RethinkDB state store will also log state changes with a timestamp in the `daprstate_archive` table. This allows for time-series analysis of the state managed by Dapr.
# Setup SQL Server
## Creating an Azure SQL instance
[Follow the instructions](https://docs.microsoft.com/azure/sql-database/sql-database-single-database-get-started?tabs=azure-portal) from the Azure documentation on how to create a SQL database. The database must be created before Dapr consumes it.
**Note: SQL Server state store also supports SQL Server running on VMs.**
In order to setup SQL Server as a state store, you will need the following properties:
* **Connection String**: the SQL Server connection string. For example: `server=localhost;user id=sa;password=your-password;port=1433;database=mydatabase;`
* **Schema**: The database schema to use (default is `dbo`). Will be created if it does not exist
* **Table Name**: The database table name. Will be created if it does not exist
* **Indexed Properties**: Optional properties from the JSON data that will be indexed and persisted as individual columns
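The semicolon-delimited connection string above can be sanity-checked by splitting it into key/value pairs; a quick sketch (the string is the example from this page, not a working credential):

```python
# Example connection string from above -- not a working credential.
connection_string = (
    "server=localhost;user id=sa;password=your-password;"
    "port=1433;database=mydatabase;"
)

# Split on ";" and "=" to inspect the individual properties.
properties = dict(
    part.split("=", 1) for part in connection_string.split(";") if part
)
print(properties["server"], properties["database"])  # → localhost mydatabase
```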
### Create a dedicated user
When connecting with a dedicated user (not `sa`), these authorizations are required for the user, even when the user is the owner of the desired database schema:
- `CREATE TABLE`
- `CREATE TYPE`
## Create a Dapr component
> Currently this component does not support state management for actors.
The next step is to create a Dapr component for SQL Server.
Create the following YAML file named `sqlserver.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.sqlserver
metadata:
- name: connectionString
value: <REPLACE-WITH-CONNECTION-STRING>
- name: tableName
value: <REPLACE-WITH-TABLE-NAME>
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
The following example uses the Kubernetes secret store to retrieve the secrets:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.sqlserver
metadata:
- name: connectionString
secretKeyRef:
name: <KUBERNETES-SECRET-NAME>
key: <KUBERNETES-SECRET-KEY>
- name: tableName
value: <REPLACE-WITH-TABLE-NAME>
```
## Apply the configuration
### In Kubernetes
To apply the SQL Server state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f sqlserver.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
# Setup Zookeeper
## Locally
You can run Zookeeper locally using Docker:
```
docker run --name some-zookeeper --restart always -d -p 2181:2181 zookeeper
```
You can then interact with the server using `localhost:2181`.
## Kubernetes
The easiest way to install Zookeeper on Kubernetes is by using the [Helm chart](https://github.com/helm/charts/tree/master/incubator/zookeeper):
```
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install zookeeper incubator/zookeeper
```
This will install Zookeeper into the `default` namespace.
To interact with Zookeeper, find the service with: `kubectl get svc zookeeper`.
For example, if installing using the example above, the Zookeeper host address would be:
`zookeeper.default.svc.cluster.local:2181`
## Create a Dapr component
The next step is to create a Dapr component for Zookeeper.
Create the following YAML file named `zookeeper.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: state.zookeeper
metadata:
- name: servers
value: <REPLACE-WITH-COMMA-DELIMITED-SERVERS> # Required. Example: "zookeeper.default.svc.cluster.local:2181"
- name: sessionTimeout
value: <REPLACE-WITH-SESSION-TIMEOUT> # Required. Example: "5s"
- name: maxBufferSize
value: <REPLACE-WITH-MAX-BUFFER-SIZE> # Optional. default: "1048576"
- name: maxConnBufferSize
value: <REPLACE-WITH-MAX-CONN-BUFFER-SIZE> # Optional. default: "1048576"
- name: keyPrefixPath
value: <REPLACE-WITH-KEY-PREFIX-PATH> # Optional.
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
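The `servers` value is a comma-delimited list of `host:port` endpoints. Below is a quick sketch of splitting and sanity-checking such a value (the endpoint shown is the in-cluster example from above; local runs would use `localhost:2181`):

```python
# In-cluster example endpoint from above.
servers_value = "zookeeper.default.svc.cluster.local:2181"

# Split the comma-delimited list into (host, port) pairs.
endpoints = []
for entry in servers_value.split(","):
    host, _, port = entry.strip().rpartition(":")
    endpoints.append((host, int(port)))

print(endpoints)  # → [('zookeeper.default.svc.cluster.local', 2181)]
```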
## Apply the configuration
### In Kubernetes
To apply the Zookeeper state store to Kubernetes, use the `kubectl` CLI:
```
kubectl apply -f zookeeper.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
# Supported state stores
| Name | CRUD | Transactional |
| ------------- | ------------------ | ------------- |
| Aerospike | :white_check_mark: | :x: |
| Cassandra | :white_check_mark: | :x: |
| Cloudstate | :white_check_mark: | :x: |
| Couchbase | :white_check_mark: | :x: |
| etcd | :white_check_mark: | :x: |
| Hashicorp Consul | :white_check_mark: | :x: |
| Hazelcast | :white_check_mark: | :x: |
| Memcached | :white_check_mark: | :x: |
| MongoDB | :white_check_mark: | :white_check_mark: |
| PostgreSQL | :white_check_mark: | :white_check_mark: |
| Redis | :white_check_mark: | :white_check_mark: |
| Zookeeper | :white_check_mark: | :x: |
| Azure CosmosDB | :white_check_mark: | :white_check_mark: |
| Azure SQL Server | :white_check_mark: | :white_check_mark: |
| Azure Table Storage | :white_check_mark: | :x: |
| Azure Blob Storage | :white_check_mark: | :x: |
| Google Cloud Firestore | :white_check_mark: | :x: |