From 9ffef85ac4ba325428f83cc4536e7638dac57e81 Mon Sep 17 00:00:00 2001 From: Avinash Barnwal Date: Sun, 21 Jun 2020 12:47:09 +0530 Subject: [PATCH 01/19] Update README.md Adding PowerShell way to retrieve the password --- howto/configure-redis/README.md | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/howto/configure-redis/README.md b/howto/configure-redis/README.md index 612dafb2c..c8b9e919a 100644 --- a/howto/configure-redis/README.md +++ b/howto/configure-redis/README.md @@ -34,8 +34,22 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku 4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using: - - **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your redis password in a text file called `password.txt`. Copy the password and delete the two files. + - **Windows**: Run the following commands: + ```powershell + # Create a file with your encoded password. + kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64 + # Put your Redis password in a text file called `password.txt`. + certutil -decode encoded.b64 password.txt + # Copy the password and delete the two files. + ``` + - **Windows**: If you are using PowerShell, it is even easier: + ```powershell + PS C:\> $base64pwd=kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" + PS C:\> $redispassword=[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64pwd)) + PS C:\> $base64pwd="" + PS C:\> $redispassword + ``` - **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the password from the output. 
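On every OS, retrieving the password is just a base64 decode of the `redis-password` field of the Kubernetes secret. A quick, self-contained sanity check of that round trip (a sketch assuming GNU coreutils `base64`; `example-password` is a placeholder, not a real secret):

```shell
# Encode a sample password the way Kubernetes stores secret data,
# then decode it the same way the commands above do.
encoded=$(printf '%s' 'example-password' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```

To decode the real value, replace the `printf` with the `kubectl get secret` command shown above.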
Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example: From 2968d37dd2bb94eb88a1489bb1a6898c293a05c4 Mon Sep 17 00:00:00 2001 From: Brooke Hamilton Date: Wed, 8 Jul 2020 18:50:34 -0400 Subject: [PATCH 02/19] PostgreSQL updates --- concepts/actors/actors_features.md | 1 + concepts/state-management/README.md | 3 + howto/setup-state-store/setup-postgresql.md | 57 +++++++++++++++++++ .../supported-state-stores.md | 3 +- overview/README.md | 2 +- reference/api/state_api.md | 2 +- 6 files changed, 65 insertions(+), 3 deletions(-) create mode 100644 howto/setup-state-store/setup-postgresql.md diff --git a/concepts/actors/actors_features.md b/concepts/actors/actors_features.md index 5b24fe2c3..76f28f4f7 100644 --- a/concepts/actors/actors_features.md +++ b/concepts/actors/actors_features.md @@ -28,6 +28,7 @@ To use actors, your state store must support multi-item transactions. This mean - Redis - MongoDB +- PostgreSQL - SQL Server ## Actor timers and reminders diff --git a/concepts/state-management/README.md b/concepts/state-management/README.md index b374df2d8..620e08870 100644 --- a/concepts/state-management/README.md +++ b/concepts/state-management/README.md @@ -49,6 +49,7 @@ The following table summarizes the capabilities of existing data store implement Store | Strong consistent write | Strong consistent read | ETag| ----|----|----|---- Cosmos DB | Yes | Yes | Yes +PostgreSQL | Yes | Yes | Yes Redis | Yes | Yes | Yes Redis (clustered)| Yes | No | Yes SQL Server | Yes | Yes | Yes @@ -110,6 +111,8 @@ SELECT AVG(value) FROM StateTable WHERE Id LIKE '||||*||tem * [Spec: Dapr actors specification](../../reference/api/actors_api.md) * [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md) * [How-to: Query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md) +* [How-to: Set up PostgreSQL store](../../howto/setup-state-store/setup-postgresql.md) +* [How-to: Query 
PostgreSQL store](../../howto/query-state-store/query-postgresql-store.md) * [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md) * [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md) * [How-to: Set up SQL Server store](../../howto/setup-state-store/setup-sqlserver.md) diff --git a/howto/setup-state-store/setup-postgresql.md b/howto/setup-state-store/setup-postgresql.md new file mode 100644 index 000000000..ff8a1a8ff --- /dev/null +++ b/howto/setup-state-store/setup-postgresql.md @@ -0,0 +1,57 @@ +# Setup PostgreSQL + +This article provides guidance on configuring a PostgreSQL state store. + +## Create a PostgreSQL Store +Dapr can use any PostgreSQL instance. If you already have a running instance of PostgreSQL, move on to the [Configuration](#configuration) section. + +1. Run an instance of PostgreSQL +You can run a local instance of PostgreSQL in Docker CE with the following command: + + This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of "postgres". + + ```bash + docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres + ``` + +2. Create a database for state data. +Either the default "postgres" database can be used, or create a new database for storing state data. + + To create a new database in PostgreSQL, run the following SQL command: + + ```SQL + create database dapr_test + ``` + +## Create a Dapr component + +Create a file called `postgres.yaml`, and paste the following. If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: statestore +spec: + type: state.postgresql + metadata: + - name: connectionString + value: "" + - name: actorStateStore + value: "true" +``` +The above example uses secrets as plain strings. 
It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) + +## Apply the configuration + +### In Kubernetes + +To apply the PostgreSQL state store to Kubernetes, use the `kubectl` CLI: + +```bash +kubectl apply -f postgres.yaml +``` + +### Running locally + +To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. diff --git a/howto/setup-state-store/supported-state-stores.md b/howto/setup-state-store/supported-state-stores.md index f0e0d64bf..f3ca21385 100644 --- a/howto/setup-state-store/supported-state-stores.md +++ b/howto/setup-state-store/supported-state-stores.md @@ -11,7 +11,8 @@ | Hazelcast | :white_check_mark: | :x: | | Memcached | :white_check_mark: | :x: | | MongoDB | :white_check_mark: | :white_check_mark: | -| Redis | :white_check_mark: | :white_check_mark: | +| PostgreSQL | :white_check_mark: | :white_check_mark: | +| Redis | :white_check_mark: | :white_check_mark: | | Zookeeper | :white_check_mark: | :x: | | Azure CosmosDB | :white_check_mark: | :x: | | Azure SQL Server | :white_check_mark: | :white_check_mark: | diff --git a/overview/README.md b/overview/README.md index d33c1621c..1bc2eb8a3 100644 --- a/overview/README.md +++ b/overview/README.md @@ -35,7 +35,7 @@ Each of these building blocks is independent, meaning that you can use one, some | Building Block | Description | |----------------|-------------| | **[Service Invocation](../concepts/service-invocation)** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment. -| **[State Management](../concepts/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. 
The state store is pluggable and can include Azure CosmosDB, AWS DynamoDB or Redis among others. +| **[State Management](../concepts/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others. | **[Publish and Subscribe Messaging](../concepts/publish-subscribe-messaging)** | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at least once message delivery guarantee. | **[Resource Bindings](../concepts/bindings)** | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external resource such as databases, queues, file systems, etc. | **[Distributed Tracing](../concepts/observability/traces.md)** | Dapr supports distributed tracing to easily diagnose and observe inter-service calls in production using the W3C Trace Context standard. diff --git a/reference/api/state_api.md b/reference/api/state_api.md index 4318fc5ce..47ecb0f9d 100644 --- a/reference/api/state_api.md +++ b/reference/api/state_api.md @@ -219,7 +219,7 @@ curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "ETag: xxxx ## Configuring state store for actors -Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently mongodb and redis implement the transactional state store interface. +Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently mongodb, Azure SQL Server, redis, and PostgreSQL implement the transactional state store interface. 
To specify which state store to be used for actors, specify value of property `actorStateStore` as true in the metadata section of the state store component yaml file. Example: Following components yaml will configure redis to be used as the state store for Actors. From a43d06df30b859bdf3bffed83abb732d0e77971b Mon Sep 17 00:00:00 2001 From: Brooke Hamilton Date: Wed, 8 Jul 2020 18:56:43 -0400 Subject: [PATCH 03/19] PostgreSQL updates --- howto/setup-state-store/README.md | 1 + howto/setup-state-store/setup-postgresql.md | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/howto/setup-state-store/README.md b/howto/setup-state-store/README.md index e9b110af8..e6815d924 100644 --- a/howto/setup-state-store/README.md +++ b/howto/setup-state-store/README.md @@ -55,6 +55,7 @@ kubectl apply -f statestore.yaml * [Setup Hazelcast](./setup-hazelcast.md) * [Setup Memcached](./setup-memcached.md) * [Setup MongoDB](./setup-mongodb.md) +* [Setup PostgreSQL](./setup-postgresql.md) * [Setup Redis](./setup-redis.md) * [Setup Zookeeper](./setup-zookeeper.md) * [Setup Azure CosmosDB](./setup-azure-cosmosdb.md) diff --git a/howto/setup-state-store/setup-postgresql.md b/howto/setup-state-store/setup-postgresql.md index ff8a1a8ff..f7fc18d3a 100644 --- a/howto/setup-state-store/setup-postgresql.md +++ b/howto/setup-state-store/setup-postgresql.md @@ -3,7 +3,7 @@ This article provides guidance on configuring a PostgreSQL state store. ## Create a PostgreSQL Store -Dapr can use any PostgreSQL instance. If you already have a running instance of PostgreSQL, move on to the [Configuration](#configuration) section. +Dapr can use any PostgreSQL instance. If you already have a running instance of PostgreSQL, move on to the [Create a Dapr component](#create-a-dapr-component) section. 1. 
Run an instance of PostgreSQL You can run a local instance of PostgreSQL in Docker CE with the following command: From 5dd0d627381b41892eb3f48c2377cb6d9d1c6b07 Mon Sep 17 00:00:00 2001 From: Brooke Hamilton Date: Wed, 8 Jul 2020 19:00:49 -0400 Subject: [PATCH 04/19] PostgreSQL updates --- howto/setup-state-store/setup-postgresql.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/howto/setup-state-store/setup-postgresql.md b/howto/setup-state-store/setup-postgresql.md index f7fc18d3a..3d1b2d036 100644 --- a/howto/setup-state-store/setup-postgresql.md +++ b/howto/setup-state-store/setup-postgresql.md @@ -5,8 +5,7 @@ This article provides guidance on configuring a PostgreSQL state store. ## Create a PostgreSQL Store Dapr can use any PostgreSQL instance. If you already have a running instance of PostgreSQL, move on to the [Create a Dapr component](#create-a-dapr-component) section. -1. Run an instance of PostgreSQL -You can run a local instance of PostgreSQL in Docker CE with the following command: +1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command: This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of "postgres". @@ -25,7 +24,7 @@ Either the default "postgres" database can be used, or create a new database for ## Create a Dapr component -Create a file called `postgres.yaml`, and paste the following. If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. +Create a file called `postgres.yaml`, and paste the following and replace the `` value with your connection string. If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. 
```yaml apiVersion: dapr.io/v1alpha1 @@ -40,7 +39,7 @@ spec: - name: actorStateStore value: "true" ``` -The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md). ## Apply the configuration From 5ccc5beed79291aa9c19eaaaa22033f0959b9785 Mon Sep 17 00:00:00 2001 From: Brooke Hamilton Date: Wed, 8 Jul 2020 19:28:04 -0400 Subject: [PATCH 05/19] Fix link to how to query postgresql --- concepts/state-management/README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/concepts/state-management/README.md b/concepts/state-management/README.md index 620e08870..a00c4c178 100644 --- a/concepts/state-management/README.md +++ b/concepts/state-management/README.md @@ -112,7 +112,6 @@ SELECT AVG(value) FROM StateTable WHERE Id LIKE '||||*||tem * [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md) * [How-to: Query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md) * [How-to: Set up PostgreSQL store](../../howto/setup-state-store/setup-postgresql.md) -* [How-to: Query PostgreSQL store](../../howto/query-state-store/query-postgresql-store.md) * [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md) * [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md) * [How-to: Set up SQL Server store](../../howto/setup-state-store/setup-sqlserver.md) From 327a7216c216b8433a416950c630aa33efea5583 Mon Sep 17 00:00:00 2001 From: Yaron Schneider Date: Thu, 9 Jul 2020 15:02:36 -0700 Subject: [PATCH 06/19] Update bindings_api.md (#678) --- reference/api/bindings_api.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/reference/api/bindings_api.md b/reference/api/bindings_api.md index 3d62fa0b8..ff9422fc0 100644 --- 
a/reference/api/bindings_api.md +++ b/reference/api/bindings_api.md @@ -159,7 +159,7 @@ See the [different specs](../specs/bindings) on each binding to see the list of ### HTTP Request ```http -POST/GET/PUT/DELETE http://localhost:/v1.0/bindings/ +POST/PUT http://localhost:/v1.0/bindings/ ``` ### HTTP Response codes From 317e1f2e8292711af244bd17e5ce6ae4ef9755b0 Mon Sep 17 00:00:00 2001 From: Mark Fussell Date: Fri, 10 Jul 2020 11:24:42 -0700 Subject: [PATCH 07/19] Update setup-postgresql.md --- howto/setup-state-store/setup-postgresql.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/howto/setup-state-store/setup-postgresql.md b/howto/setup-state-store/setup-postgresql.md index 3d1b2d036..53e54ab17 100644 --- a/howto/setup-state-store/setup-postgresql.md +++ b/howto/setup-state-store/setup-postgresql.md @@ -24,7 +24,7 @@ Either the default "postgres" database can be used, or create a new database for ## Create a Dapr component -Create a file called `postgres.yaml`, and paste the following and replace the `` value with your connection string. If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. +Create a file called `postgres.yaml`, paste the following and replace the `` value with your connection string. If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. 
```yaml apiVersion: dapr.io/v1alpha1 From ad1f487daaec8b52ea881c8e11e9672c241b003b Mon Sep 17 00:00:00 2001 From: Mark Chmarny Date: Fri, 10 Jul 2020 11:56:05 -0700 Subject: [PATCH 08/19] updated with output binding support (#652) --- reference/specs/bindings/twitter.md | 38 +++++++++++++++++++++++++++++ 1 file changed, 38 insertions(+) diff --git a/reference/specs/bindings/twitter.md b/reference/specs/bindings/twitter.md index 73b1aa43c..3b7fb1ad5 100644 --- a/reference/specs/bindings/twitter.md +++ b/reference/specs/bindings/twitter.md @@ -1,5 +1,7 @@ # Twitter Binding Spec +The Twitter binding supports both `input` and `output` binding configuration. First the common part: + ```yaml apiVersion: dapr.io/v1alpha1 kind: Component @@ -17,6 +19,42 @@ spec: value: "****" # twitter api access token, required - name: accessSecret value: "****" # twitter api access secret, required +``` + +For input bindings, where the query matching Tweets are streamed to the user service, the above component has to also include a query: + +```yaml - name: query value: "dapr" # your search query, required ``` + +For output binding invocation the user code has to invoke the binding: + +```shell +POST http://localhost:3500/v1.0/bindings/twitter +``` + +Where the payload is: + +```json +{ + "data": "", + "metadata": { + "query": "twitter-query", + "lang": "optional-language-code", + "result": "valid-result-type" + }, + "operation": "get" +} +``` + +The metadata parameters are: + +* `query` - any valid Twitter query (e.g. `dapr` or `dapr AND serverless`). See [Twitter docs](https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/standard-operators) for more details on advanced query formats +* `lang` - (optional, default: `en`) restricts result tweets to the given language using [ISO 639-1 language code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) +* `result` - (optional, default: `recent`) specifies tweet query result type. 
Valid values include: + * `mixed` - both popular and real time results + * `recent` - most recent results + * `popular` - most popular results + +You can see the example of the JSON data that Twitter binding returns [here](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets) \ No newline at end of file From 7addffb8bfe68bbc7faf80406c247ed3654f4794 Mon Sep 17 00:00:00 2001 From: Joni Collinge Date: Fri, 10 Jul 2020 20:03:22 +0100 Subject: [PATCH 09/19] Update ASB metadata (#642) * updated asb metadata * fix grammar * correct default lockRenewalInSec * add note that settings are shared across topics Co-authored-by: Aman Bhardwaj --- .../setup-azure-servicebus.md | 20 ++++++++++++++++--- 1 file changed, 17 insertions(+), 3 deletions(-) diff --git a/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md b/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md index b57469b61..fc96ddcac 100644 --- a/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md +++ b/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md @@ -20,19 +20,33 @@ spec: - name: connectionString value: # Required. - name: timeoutInSec - value: # Optional. Default: "60". + value: # Optional. Default: "60". Timeout for sending messages and management operations. + - name: handlerTimeoutInSec + value: # Optional. Default: "60". Timeout for invoking app handler. - name: disableEntityManagement value: # Optional. Default: false. When set to true, topics and subscriptions do not get created automatically. - name: maxDeliveryCount - value: # Optional. + value: # Optional. Defines the number of attempts the server will make to deliver a message. - name: lockDurationInSec - value: # Optional. + value: # Optional. Defines the length in seconds that a message will be locked for before expiring. + - name: lockRenewalInSec + value: # Optional. Default: "20". Defines the frequency at which buffered message locks will be renewed. 
+ - name: maxActiveMessages + value: # Optional. Default: "10000". Defines the maximum number of messages to be buffered or processed at once. + - name: maxActiveMessagesRecoveryInSec + value: # Optional. Default: "2". Defines the number of seconds to wait once the maximum active message limit is reached. + - name: maxConcurrentHandlers + value: # Optional. Defines the maximum number of concurrent message handlers. + - name: prefetchCount + value: # Optional. Defines the number of prefetched messages (use for high throughput / low latency scenarios) - name: defaultMessageTimeToLiveInSec value: # Optional. - name: autoDeleteOnIdleInSec value: # Optional. ``` +> __NOTE:__ The above settings are shared across all topics that use this component. + The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) ## Apply the configuration From ba593eae906b29196e31c362736722a12399c74e Mon Sep 17 00:00:00 2001 From: Charlie Stanley Date: Fri, 10 Jul 2020 13:21:04 -0700 Subject: [PATCH 10/19] Added docs for installing dapr to hybrid windows/linux clusters (#679) --- getting-started/cluster/hybrid-clusters.md | 27 ++++++++++++++++++++++ getting-started/environment-setup.md | 5 ++++ 2 files changed, 32 insertions(+) create mode 100644 getting-started/cluster/hybrid-clusters.md diff --git a/getting-started/cluster/hybrid-clusters.md b/getting-started/cluster/hybrid-clusters.md new file mode 100644 index 000000000..dde40a35a --- /dev/null +++ b/getting-started/cluster/hybrid-clusters.md @@ -0,0 +1,27 @@ +# Deploying to Hybrid Linux/Windows K8s Clusters + +## Installing the Dapr Control Plane + +If you are installing using the Dapr CLI or via helm chart, you can simply follow our normal deployment procedures: +[Installing Dapr on a Kubernetes cluster](../environment-setup.md#installing-Dapr-on-a-kubernetes-cluster) + +Affinity will be automatically set for kubernetes.io/os=linux. 
If you need to override linux to another value, you can do so by setting: +``` +helm install dapr dapr/dapr --set global.daprControlPlaneOs=YOUR_OS +``` +Dapr control plane container images are only provided for Linux, so you shouldn't need to do this unless you really know what you are doing. + +## Installing Dapr Apps +The Dapr sidecar container is currently Linux only. For this reason, if you are writing a Dapr application, you must run it in a Linux container. You can track the progress of Dapr sidecar support for Windows containers via [dapr/dapr#842](https://github.com/dapr/dapr/issues/842). + +When deploying to a hybrid cluster, you must configure your apps to be deployed only to Linux-capable nodes. One of the simplest ways to do this is to add kubernetes.io/os=linux to your app's nodeSelector. + +```yaml +spec: + nodeSelector: + kubernetes.io/os: linux +``` + +Kubernetes also supports much more advanced configuration via node affinity. +See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for more examples. + diff --git a/getting-started/environment-setup.md b/getting-started/environment-setup.md index b0e3910ae..9a37b3690 100644 --- a/getting-started/environment-setup.md +++ b/getting-started/environment-setup.md @@ -135,6 +135,11 @@ You can install Dapr on any Kubernetes cluster. Here are some helpful links: - [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart) - [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) +> **Note:** The Dapr control plane containers are currently only distributed as Linux containers. +> Your Kubernetes cluster must contain available Linux-capable nodes. +> Both the Dapr CLI and the Dapr Helm chart will automatically deploy with affinity for nodes with the label kubernetes.io/os=linux. 
+> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](./cluster/hybrid-clusters.md) + ### Using the Dapr CLI You can install Dapr to a Kubernetes cluster using CLI. From f4a63c159a1dd2723e59737ddbe55fdb01eaa185 Mon Sep 17 00:00:00 2001 From: Brooke Hamilton Date: Fri, 10 Jul 2020 17:41:50 -0400 Subject: [PATCH 11/19] Detail on postgre connection strings --- howto/setup-state-store/setup-postgresql.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/howto/setup-state-store/setup-postgresql.md b/howto/setup-state-store/setup-postgresql.md index 53e54ab17..c94388365 100644 --- a/howto/setup-state-store/setup-postgresql.md +++ b/howto/setup-state-store/setup-postgresql.md @@ -24,7 +24,9 @@ Either the default "postgres" database can be used, or create a new database for ## Create a Dapr component -Create a file called `postgres.yaml`, paste the following and replace the `` value with your connection string. If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. +Create a file called `postgres.yaml`, paste the following and replace the `` value with your connection string. The connection string is a standard PostgreSQL connection string. For example, `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html), specifically Keyword/Value Connection Strings, for information on how to define a connection string. + +If you want to also configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below. 
```yaml apiVersion: dapr.io/v1alpha1 From cad0f665ac4b5b80351b22aa7cec76ca3dd8fc32 Mon Sep 17 00:00:00 2001 From: Charlie Stanley Date: Fri, 10 Jul 2020 15:14:10 -0700 Subject: [PATCH 12/19] Move hybrid-cluster guide to howto/ --- getting-started/environment-setup.md | 2 +- .../hybrid-clusters.md => howto/hybrid-clusters/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) rename getting-started/cluster/hybrid-clusters.md => howto/hybrid-clusters/README.md (91%) diff --git a/getting-started/environment-setup.md b/getting-started/environment-setup.md index 9a37b3690..751912a8e 100644 --- a/getting-started/environment-setup.md +++ b/getting-started/environment-setup.md @@ -138,7 +138,7 @@ You can install Dapr on any Kubernetes cluster. Here are some helpful links: > **Note:** The Dapr control plane containers are currently only distributed as Linux containers. > Your Kubernetes cluster must contain available Linux-capable nodes. > Both the Dapr CLI and the Dapr Helm chart will automatically deploy with affinity for nodes with the label kubernetes.io/os=linux. 
-> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](./cluster/hybrid-clusters.md) +> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](../howto/hybrid-clusters/) ### Using the Dapr CLI diff --git a/getting-started/cluster/hybrid-clusters.md b/howto/hybrid-clusters/README.md similarity index 91% rename from getting-started/cluster/hybrid-clusters.md rename to howto/hybrid-clusters/README.md index dde40a35a..ca105b67b 100644 --- a/getting-started/cluster/hybrid-clusters.md +++ b/howto/hybrid-clusters/README.md @@ -3,7 +3,7 @@ ## Installing the Dapr Control Plane If you are installing using the Dapr CLI or via helm chart, you can simply follow our normal deployment procedures: -[Installing Dapr on a Kubernetes cluster](../environment-setup.md#installing-Dapr-on-a-kubernetes-cluster) +[Installing Dapr on a Kubernetes cluster](../../getting-started/environment-setup.md#installing-Dapr-on-a-kubernetes-cluster) Affinity will be automatically set for kubernetes.io/os=linux. 
If you need to override linux to another value, you can do so by setting: ``` From 5311339935f93d06b1d56b727186ac97e5727471 Mon Sep 17 00:00:00 2001 From: Leon Mai Date: Fri, 10 Jul 2020 15:51:33 -0700 Subject: [PATCH 13/19] Update cosmos documentation for transaction changes (#668) * update cosmos documentation * reword * add to additional refs * add to additional refs * Update setup-azure-cosmosdb.md * Update state_api.md * Update actors_features.md * Update state_api.md * merge Co-authored-by: LM Co-authored-by: Mark Chmarny Co-authored-by: Aman Bhardwaj Co-authored-by: Mark Fussell Co-authored-by: Yaron Schneider --- concepts/actors/actors_features.md | 1 + .../setup-state-store/setup-azure-cosmosdb.md | 88 +++++++++++++++++-- .../supported-state-stores.md | 2 +- reference/api/state_api.md | 2 +- 4 files changed, 84 insertions(+), 9 deletions(-) diff --git a/concepts/actors/actors_features.md b/concepts/actors/actors_features.md index 5b24fe2c3..5f21bee12 100644 --- a/concepts/actors/actors_features.md +++ b/concepts/actors/actors_features.md @@ -29,6 +29,7 @@ To use actors, your state store must support multi-item transactions. This mean - Redis - MongoDB - SQL Server +- Azure CosmosDB ## Actor timers and reminders diff --git a/howto/setup-state-store/setup-azure-cosmosdb.md b/howto/setup-state-store/setup-azure-cosmosdb.md index 23e465f23..925d2ef9e 100644 --- a/howto/setup-state-store/setup-azure-cosmosdb.md +++ b/howto/setup-state-store/setup-azure-cosmosdb.md @@ -2,11 +2,11 @@ ## Creating an Azure CosmosDB account -[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr consumes it. +[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. 
The database and collection must be created in CosmosDB before Dapr can use it. -**Note : The partition key for the collection must be "/id".** +**Note : The partition key for the collection must be named "/partitionKey". Note: this is case-sensitive.** -In order to setup CosmosDB as a state store, you will need the following properties: +In order to setup CosmosDB as a state store, you need the following properties: * **URL**: the CosmosDB url. for example: https://******.documents.azure.com:443/ * **Master Key**: The key to authenticate to the CosmosDB account @@ -17,7 +17,7 @@ In order to setup CosmosDB as a state store, you will need the following propert The next step is to create a Dapr component for CosmosDB. -Create the following YAML file named `cosmos.yaml`: +Create the following YAML file named `cosmosdb.yaml`: ```yaml apiVersion: dapr.io/v1alpha1 @@ -40,6 +40,25 @@ spec: The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md) +Here is an example of what the values could look like: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: statestore +spec: + type: state.azure.cosmosdb + metadata: + - name: url + value: https://accountname.documents.azure.com:443 + - name: masterKey + value: thekey== + - name: database + value: db1 + - name: collection + value: c1 +``` The following example uses the Kubernetes secret store to retrieve the secrets: ```yaml @@ -63,6 +82,13 @@ spec: value: ``` +If you wish to use CosmosDb as an actor store, append the following to the yaml. + +```yaml + - name: actorStateStore + value: "true" +``` + ## Apply the configuration ### In Kubernetes @@ -75,13 +101,14 @@ kubectl apply -f cosmos.yaml ### Running locally -To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. 
+To run locally, create a YAML file described above and provide the path to the `dapr run` command with the flag `--components-path`. See [this](https://github.com/dapr/cli#use-non-default-components-path) or run `dapr run --help` for more information on the path. ## Partition keys -The Azure CosmosDB state store will use the `key` property provided in the requests to the Dapr API to determine the partition key. -For example, the following operation will use the partition key `nihilus` as the partition key value sent to CosmosDB: +For **non-actor state** operations, the Azure CosmosDB state store will use the `key` property provided in the requests to the Dapr API to determine the CosmosDB partition key. This can be overridden by specifying a metadata field in the request with a key of `partitionKey` and a value of the desired partition. + +The following operation will use `nihilus` as the partition key value sent to CosmosDB: ```shell curl -X POST http://localhost:3500/v1.0/state/ \ @@ -93,3 +120,50 @@ curl -X POST http://localhost:3500/v1.0/state/ \ } ]' ``` + +For **non-actor** state operations, if you want to control the CosmosDB partition, you can specify it in metadata. Reusing the example above, here's how to put it under the `mypartition` partition + +```shell +curl -X POST http://localhost:3500/v1.0/state/ \ + -H "Content-Type: application/json" + -d '[ + { + "key": "nihilus", + "value": "darth", + "metadata": { + "partitionKey": "mypartition" + } + } + ]' +``` + +For **non-actor** state operations, here is how you would specify the partition for a transaction. 
The metadata field `partitionKey` must be specified in all items: + +```shell +curl -X POST http://localhost:3500/v1.0/state/transaction \ + -H "Content-Type: application/json" + -d '[ + { + "operation": "upsert", + "request": { + "key": "key1", + "value": "myData", + "metadata": { + "partitionKey": "mypartition" + } + } + }, + { + "operation": "delete", + "request": { + "key": "key2", + "metadata": { + "partitionKey": "mypartition" + } + } + } + ]' +``` + + +For **actor** state operations, the partition key will be generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor will always end up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in CosmosDB the items in a transaction must be on the same partition. diff --git a/howto/setup-state-store/supported-state-stores.md b/howto/setup-state-store/supported-state-stores.md index f0e0d64bf..579220f47 100644 --- a/howto/setup-state-store/supported-state-stores.md +++ b/howto/setup-state-store/supported-state-stores.md @@ -13,7 +13,7 @@ | MongoDB | :white_check_mark: | :white_check_mark: | | Redis | :white_check_mark: | :white_check_mark: | | Zookeeper | :white_check_mark: | :x: | -| Azure CosmosDB | :white_check_mark: | :x: | +| Azure CosmosDB | :white_check_mark: | :white_check_mark: | | Azure SQL Server | :white_check_mark: | :white_check_mark: | | Azure Table Storage | :white_check_mark: | :x: | | Google Cloud Firestore | :white_check_mark: | :x: | diff --git a/reference/api/state_api.md b/reference/api/state_api.md index 4318fc5ce..578dcf0e9 100644 --- a/reference/api/state_api.md +++ b/reference/api/state_api.md @@ -219,7 +219,7 @@ curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "ETag: xxxx ## Configuring state store for actors -Actors don't support multiple state stores and require a transactional state store to be used with Dapr. 
Currently mongodb and redis implement the transactional state store interface. +Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently Mongodb, Redis, PostgreSQL, SQL Server, and Azure CosmosDB implement the transactional state store interface. To specify which state store to be used for actors, specify value of property `actorStateStore` as true in the metadata section of the state store component yaml file. Example: Following components yaml will configure redis to be used as the state store for Actors. From b8ac9497f0018cb26b641a2849776ca6dcd99627 Mon Sep 17 00:00:00 2001 From: Vincenzo Morra <48756634+vimorra@users.noreply.github.com> Date: Sat, 11 Jul 2020 00:52:43 +0200 Subject: [PATCH 14/19] Improvement #655 (#656) Co-authored-by: Mark Chmarny Co-authored-by: Aman Bhardwaj Co-authored-by: Yaron Schneider --- .../azure-keyvault-managed-identity.md | 65 ++++++++++++++----- 1 file changed, 48 insertions(+), 17 deletions(-) diff --git a/howto/setup-secret-store/azure-keyvault-managed-identity.md b/howto/setup-secret-store/azure-keyvault-managed-identity.md index 8999e3cea..be5005944 100644 --- a/howto/setup-secret-store/azure-keyvault-managed-identity.md +++ b/howto/setup-secret-store/azure-keyvault-managed-identity.md @@ -4,10 +4,12 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre ## Contents -- [Prerequisites](#prerequisites) -- [Setup Kubernetes to use managed identities and Azure Key Vault](#setup-kubernetes-to-use-managed-identities-and-azure-key-vault) -- [Use Azure Key Vault secret store in Kubernetes mode with managed identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-with-managed-identities) -- [References](#references) +- [Use Azure Key Vault secret store in Kubernetes mode using Managed Identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-using-managed-identities) + - [Contents](#contents) + - [Prerequisites](#prerequisites) + - 
[Setup Kubernetes to use Managed identities and Azure Key Vault](#setup-kubernetes-to-use-managed-identities-and-azure-key-vault)
+  - [Use Azure Key Vault secret store in Kubernetes mode with managed identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-with-managed-identities)
+  - [References](#references)

 ## Prerequisites

@@ -32,39 +34,68 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre
    az keyvault create --location [region] --name [your keyvault] --resource-group [your resource group]
    ```

-3. Create the managed identity
+3. Create the managed identity (optional)
+
+   This step is required only if the AKS cluster is provisioned without the `--enable-managed-identity` flag. If the cluster is provisioned with a managed identity, it is suggested to use the autogenerated managed identity that is associated with the `MC_*` resource group.

    ```bash
    $identity = az identity create -g [your resource group] -n [you managed identity name] -o json | ConvertFrom-Json
    ```

-4. Assign the Reader role to the managed identity
+   In the autogenerated scenario, the following command retrieves the managed identity:
+
+   ```bash
+   az aks show -g [your resource group] -n [your AKS name]
+   ```
+   For more detail about the roles to assign when integrating AKS with Azure services, see [Role Assignment](https://github.com/Azure/aad-pod-identity/blob/master/docs/readmes/README.role-assignment.md).
+
+4. Retrieve the managed identity ID
+
+   The two main scenarios are:
+   - Service principal: the resource group is the one in which the AKS cluster is deployed

    ```bash
-   az role assignment create --role "Reader" --assignee $identity.principalId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
+   $clientId = az aks show -g [your resource group] -n [your AKS name] --query servicePrincipalProfile.clientId -otsv
    ```

-5. 
Assign the Managed Identity Operator role to the AKS Service Principal
+   - Managed identity: the resource group is the one in which the AKS cluster is deployed

    ```bash
-   $aks = az aks show -g [your resource group] -n [your AKS name] -o json | ConvertFrom-Json
-
-   az role assignment create --role "Managed Identity Operator" --assignee $aks.servicePrincipalProfile.clientId --scope $identity.id
+   $clientId = az aks show -g [your resource group] -n [your AKS name] --query identityProfile.kubeletidentity.clientId -otsv
    ```
-
-6. Add a policy to the Key Vault so the managed identity can read secrets
+
+5. Assign the Reader role to the managed identity
+
+   For an AKS cluster, the cluster resource group is the one with the MC_ prefix; it contains all of the infrastructure resources associated with the cluster, such as VMs/VMSS.

    ```bash
-   az keyvault set-policy --name [your keyvault] --spn $identity.clientId --secret-permissions get list
+   az role assignment create --role "Reader" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
    ```

-7. Enable AAD Pod Identity on AKS
+6. Assign the Managed Identity Operator role to the AKS Service Principal
+
+   Refer to the previous step for which resource group to use and which identity to assign.
+
+   ```bash
+   az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
+
+   az role assignment create --role "Virtual Machine Contributor" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
+   ```
+
+7. Add a policy to the Key Vault so the managed identity can read secrets
+
+   ```bash
+   az keyvault set-policy --name [your keyvault] --spn $clientId --secret-permissions get list
+   ```
+
+8. 
Enable AAD Pod Identity on AKS ```bash kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml + + # For AKS clusters, deploy the MIC and AKS add-on exception by running - + kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml ``` -8. Configure the Azure Identity and AzureIdentityBinding yaml +9. Configure the Azure Identity and AzureIdentityBinding yaml Save the following yaml as azure-identity-config.yaml: @@ -87,7 +118,7 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre Selector: [you managed identity selector] ``` -9. Deploy the azure-identity-config.yaml: +10. Deploy the azure-identity-config.yaml: ```yaml kubectl apply -f azure-identity-config.yaml From 11f09071cf50e98c3215425a567fd9dc00dffe47 Mon Sep 17 00:00:00 2001 From: Charlie Stanley Date: Fri, 10 Jul 2020 15:55:34 -0700 Subject: [PATCH 15/19] Arrange Windows support message into a note --- howto/hybrid-clusters/README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/howto/hybrid-clusters/README.md b/howto/hybrid-clusters/README.md index ca105b67b..f7cba6423 100644 --- a/howto/hybrid-clusters/README.md +++ b/howto/hybrid-clusters/README.md @@ -12,7 +12,8 @@ helm install dapr dapr/dapr --set global.daprControlPlaneOs=YOUR_OS Dapr control plane container images are only provided for Linux, so you shouldn't need to do this unless you really know what you are doing. ## Installing Dapr Apps -The Dapr sidecar container is currently linux only. For this reason, if you are writing a Dapr application, you must run it in a Linux container. You can track the progress of Dapr sidecar support for windows containers via [dapr/dapr#842](https://github.com/dapr/dapr/issues/842). +The Dapr sidecar container is currently Linux only. For this reason, if you are writing a Dapr application, you must run it in a Linux container. 
+> **Note:** Windows support for dapr applications is in progress. Please see: [dapr/dapr#842](https://github.com/dapr/dapr/issues/842). When deploying to a hybrid cluster, you must configure your apps to be deployed to only Linux available nodes. One of the simplest ways to do this is to add kubernetes.io/os=linux to your app's nodeSelector. From d60e9513098d97bb7760095f5fb5369553c9dbdf Mon Sep 17 00:00:00 2001 From: Mukundan Sundararajan Date: Fri, 10 Jul 2020 16:33:05 -0700 Subject: [PATCH 16/19] Add docs for dapr init without docker (#665) * Add docs for dapr init without docker * Update docs for enabling statestore and actors without docker * Update README.md * Update README.md * Update README.md * Update readme * Update README.md * Update environment-setup.md * Update README.md * Update README.md * Removing slim init instructions * Simplifying self hosted mode section * Minor language changes Co-authored-by: Mark Fussell Co-authored-by: Ori Zohar --- getting-started/environment-setup.md | 24 +++++++--- howto/configure-redis/README.md | 6 ++- howto/self-hosted-no-docker/README.md | 63 +++++++++++++++++++++++++++ 3 files changed, 84 insertions(+), 9 deletions(-) create mode 100644 howto/self-hosted-no-docker/README.md diff --git a/getting-started/environment-setup.md b/getting-started/environment-setup.md index b0e3910ae..f0534a95f 100644 --- a/getting-started/environment-setup.md +++ b/getting-started/environment-setup.md @@ -6,11 +6,13 @@ Dapr can be run in either self hosted or Kubernetes modes. Running Dapr runtime - [Prerequisites](#prerequisites) - [Installing Dapr CLI](#installing-dapr-cli) -- [Installing Dapr in standalone mode](#installing-dapr-in-standalone-mode) +- [Installing Dapr in self-hosted mode](#installing-dapr-in-self-hosted-mode) - [Installing Dapr on Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster) ## Prerequisites +On default Dapr will install with a developer environment using Docker containers to get you started easily. 
However, Dapr does not depend on Docker to run (see [here](https://github.com/dapr/cli/blob/master/README.md) for instructions on installing Dapr locally without Docker using slim init). This getting started guide assumes Dapr is installed along with this developer environment.
+
 - Install [Docker](https://docs.docker.com/install/)

 > For Windows user, ensure that `Docker Desktop For Windows` uses Linux containers.
@@ -61,9 +63,11 @@ Each release of Dapr CLI includes various OSes and architectures. These binary v

 ## Installing Dapr in self hosted mode

-### Install Dapr runtime using the CLI
+### Initialize Dapr using the CLI

-Install Dapr by running `dapr init` from a command prompt
+By default, during initialization the Dapr CLI installs the Dapr binaries and sets up a developer environment to help you get started easily with Dapr. This environment uses Docker containers, which is why Docker is listed as a prerequisite.
+
+>If you prefer to run Dapr without this environment and without a dependency on Docker, see the CLI documentation [here](https://github.com/dapr/cli/blob/master/README.md) for usage of the `--slim` flag with the `init` command. Note: if you are a new user, it is strongly recommended to install Docker and use the regular `init` command.

 > For Linux users, if you run your docker cmds with sudo, you need to use "**sudo dapr init**"
 > For Windows users, make sure that you run the cmd terminal in administrator mode
@@ -82,7 +86,7 @@ If you prefer you can also install to an alternate location by using `--install-
 $ dapr init --install-path /home/user123/mydaprinstall
 ```

-To see that Dapr has been installed successful, from a command prompt run the `docker ps` command and check that the `daprio/dapr:latest` and `redis` container images are both running.
+To see that Dapr has been installed successfully, from a command prompt run the `docker ps` command and check that the `daprio/dapr:latest` and `redis` container images are both running.
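The `docker ps` check above can also be scripted. A minimal sketch, run here against an illustrative sample image list (with Docker running you would pipe `docker ps --format '{{.Image}}'` into the same filter instead of the hard-coded string):

```shell
# Sample image list standing in for `docker ps --format '{{.Image}}'` output
images="daprio/dapr:latest
redis:latest
openzipkin/zipkin:latest"

# Count the Dapr and Redis images; a successful `dapr init` should yield both
echo "$images" | grep -c -E '^(daprio/dapr|redis)'   # prints 2
```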
### Install a specific runtime version @@ -98,9 +102,9 @@ cli version: v0.1.0 runtime version: v0.1.0 ``` -### Uninstall Dapr in a standalone mode +### Uninstall Dapr in a self hosted mode -Uninstalling removes the Placement service container. +Uninstalling removes the Placement service container or the Placement service binary. ```bash $ dapr uninstall @@ -111,7 +115,13 @@ It won't remove the Redis or Zipkin containers by default in case you were using $ dapr uninstall --all ``` -You should always run `dapr uninstall` before running another `dapr init`. +**You should always run `dapr uninstall` before running another `dapr init`.** + +To specify a custom install path from which you have to uninstall run: + +```bash +$ dapr uninstall --install-path /path/to/binary +``` ## Installing Dapr on a Kubernetes cluster diff --git a/howto/configure-redis/README.md b/howto/configure-redis/README.md index c8b9e919a..b4d243986 100644 --- a/howto/configure-redis/README.md +++ b/howto/configure-redis/README.md @@ -60,7 +60,7 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku value: lhDOkwTlp0 ``` -### Option 2: Creating an managed Azure Cache for Redis service +### Option 2: Creating an Azure Cache for Redis service > **Note**: This approach requires having an Azure Subscription. @@ -138,7 +138,9 @@ kubectl apply -f redis-state.yaml kubectl apply -f redis-pubsub.yaml ``` -### Standalone +### Self Hosted Mode By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`. +If you initialized Dapr using `dapr init --slim`, the Dapr CLI did not create a Redis instance or a default configuration file for it. Follow [these instructions](#Creating-a-Redis-Store) to create a Redis store. 
Create the `redis.yaml` following the configuration [instructions](#Configuration) in a `components` dir and provide the path to the `dapr run` command with the flag `--components-path`.
+
diff --git a/howto/self-hosted-no-docker/README.md b/howto/self-hosted-no-docker/README.md
new file mode 100644
index 000000000..e7d1cd0f1
--- /dev/null
+++ b/howto/self-hosted-no-docker/README.md
@@ -0,0 +1,63 @@
+# Self hosted mode without containers
+
+This article provides guidance on running Dapr in self-hosted mode without Docker.
+
+## Prerequisites
+
+- [Dapr CLI](../../getting-started/environment-setup.md#installing-dapr-cli)
+
+## Initialize Dapr without containers
+
+The Dapr CLI provides an option to initialize Dapr using slim init, without the default creation of a development environment that has a dependency on Docker. To initialize Dapr with slim init, after installing the Dapr CLI use the following command:
+
+```bash
+dapr init --slim
+```
+
+In this mode two binaries are installed: `daprd` and `placement`. The `placement` binary is needed to enable [actors](../../concepts/actors/README.md) in a Dapr self-hosted installation.
+
+In this mode no default components such as Redis are installed for state management or pub/sub. This means that, aside from [Service Invocation](../../concepts/service-invocation/README.md), no other building block functionality is available on install out of the box. Users are free to set up their own environment and custom components. Furthermore, actor-based service invocation is possible if a state store is configured as explained in the following sections.
+
+## Service invocation
+See [this sample](https://github.com/dapr/samples/tree/master/11.hello-dapr-slim) for an example of how to perform service invocation in this mode.
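Service invocation in slim mode still goes through the `daprd` sidecar's HTTP API. A small sketch of how the invoke URL is assembled (the app id `myapp`, method `hello`, and port `3500` are placeholder assumptions; you would `curl` the resulting URL against a running sidecar):

```shell
# Assemble the Dapr service-invocation URL; app id, method, and port are placeholders
DAPR_HTTP_PORT=3500
APP_ID=myapp
METHOD=hello
echo "http://localhost:${DAPR_HTTP_PORT}/v1.0/invoke/${APP_ID}/method/${METHOD}"
# prints http://localhost:3500/v1.0/invoke/myapp/method/hello
```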
+
+## Enabling state management or pub/sub
+
+See configuring Redis in self hosted mode [without docker](../../howto/configure-redis/README.md#Self-Hosted-Mode-without-Containers) to enable a local state store or pub/sub broker for messaging.
+
+## Enabling actors
+
+The placement service must be run locally to enable actor placement. Also, a [transactional state store](#Enabling-state-management-or-pub/sub) must be enabled for actors.
+
+By default for Linux/MacOS the `placement` binary is installed in `/usr/local/bin`, or for Windows at `c:\dapr`.
+
+```bash
+$ /usr/local/bin/placement
+
+INFO[0000] starting Dapr Placement Service -- version 0.8.0 -- commit 74db927 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
+INFO[0000] log level set to: info instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
+INFO[0000] metrics server started on :9090/ instance=host.localhost.name scope=dapr.metrics type=log ver=0.8.0
+INFO[0000] placement Service started on port 50005 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
+INFO[0000] Healthz server is listening on :8080 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
+
+```
+
+From here on you can follow the samples created for the [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors/http), [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor) or [dotnet-sdk](https://github.com/dapr/dotnet-sdk/tree/master/samples/Actor) for running an application with actors enabled.
+
+Update the state store configuration files so the Redis host and password match your setup. Additionally, to enable it as an actor state store, add the metadata entry shown in the [sample Java Redis component](https://github.com/dapr/java-sdk/blob/master/examples/components/redis.yaml) definition.
+ +```yaml + - name: actorStateStore + value: "true" +``` + +The logs of the placement service are updated whenever a host that uses actors is added or removed similar to the following output: + +``` +INFO[0446] host added: 192.168.1.6 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0 +INFO[0450] host removed: 192.168.1.6 instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0 +``` + +## Cleanup + +Follow the uninstall [instructions](../../getting-started/environment-setup.md#Uninstall-Dapr-in-self-hosted-mode-(without-docker)) to remove the binaries. From 4298fdfcdd33d1f5b6606eaf654b8768235939bf Mon Sep 17 00:00:00 2001 From: chloe andresol <53862535+candreso@users.noreply.github.com> Date: Mon, 13 Jul 2020 11:01:04 -0400 Subject: [PATCH 17/19] Fixed typo in setup-azure-servicebus.md Changed "valye" in line 39 to "value" --- howto/setup-pub-sub-message-broker/setup-azure-servicebus.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md b/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md index fc96ddcac..74b707b5b 100644 --- a/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md +++ b/howto/setup-pub-sub-message-broker/setup-azure-servicebus.md @@ -36,7 +36,7 @@ spec: - name: maxActiveMessagesRecoveryInSec value: # Optional. Default: "2". Defines the number of seconds to wait once the maximum active message limit is reached. - name: maxConcurrentHandlers - valye: # Optional. Defines the maximum number of concurrent message handlers + value: # Optional. Defines the maximum number of concurrent message handlers - name: prefetchCount value: # Optional. 
Defines the number of prefetched messages (use for high throughput / low latency scenarios) - name: defaultMessageTimeToLiveInSec From 332b531939f3e77ce54d254a28ff0dc18b29c402 Mon Sep 17 00:00:00 2001 From: Charlie Stanley Date: Mon, 13 Jul 2020 12:53:34 -0700 Subject: [PATCH 18/19] Minor style guide changes to hybrid-clusters howto stub --- howto/README.md | 1 + howto/hybrid-clusters/README.md | 13 +++++++++---- 2 files changed, 10 insertions(+), 4 deletions(-) diff --git a/howto/README.md b/howto/README.md index 3b2a9f1bd..f21b0b454 100644 --- a/howto/README.md +++ b/howto/README.md @@ -89,6 +89,7 @@ For Actors How Tos see the SDK documentation * [Sidecar configuration on Kubernetes](./configure-k8s) * [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda) +* [Deploy to hybrid Linux/Windows Kubernetes clusters](./hybrid-clusters) ## Developer tooling ### Using Visual Studio Code diff --git a/howto/hybrid-clusters/README.md b/howto/hybrid-clusters/README.md index f7cba6423..2c0208190 100644 --- a/howto/hybrid-clusters/README.md +++ b/howto/hybrid-clusters/README.md @@ -1,8 +1,12 @@ -# Deploying to Hybrid Linux/Winodws K8s Clusters +# Deploy to hybrid Linux/Windows Kubernetes clusters + +If you would like to deploy dapr to a Kubernetes cluster that contains both Windows and Linux nodes, you can do so, but there are known limitiations. All dapr control plane components must be run exclusively on Linux enabled nodes. The same is currently true for all Dapr applications. Thus when deploying to hybrid Kubernetes clusters you will need to ensure that Kubernetes knows to place your application containers exclusively on Linux enabled nodes. + +> **Note:** Windows container support for Dapr applications is in progress. Please see: [dapr/dapr#842](https://github.com/dapr/dapr/issues/842). 
## Installing the Dapr Control Plane -If you are installing using the Dapr CLI or via helm chart, you can simply follow our normal deployment procedures: +If you are installing using the Dapr CLI or via helm chart, you can simply follow the normal deployment procedures: [Installing Dapr on a Kubernetes cluster](../../getting-started/environment-setup.md#installing-Dapr-on-a-kubernetes-cluster) Affinity will be automatically set for kubernetes.io/os=linux. If you need to override linux to another value, you can do so by setting: @@ -11,9 +15,8 @@ helm install dapr dapr/dapr --set global.daprControlPlaneOs=YOUR_OS ``` Dapr control plane container images are only provided for Linux, so you shouldn't need to do this unless you really know what you are doing. -## Installing Dapr Apps +## Installing Dapr applications The Dapr sidecar container is currently Linux only. For this reason, if you are writing a Dapr application, you must run it in a Linux container. -> **Note:** Windows support for dapr applications is in progress. Please see: [dapr/dapr#842](https://github.com/dapr/dapr/issues/842). When deploying to a hybrid cluster, you must configure your apps to be deployed to only Linux available nodes. One of the simplest ways to do this is to add kubernetes.io/os=linux to your app's nodeSelector. @@ -23,6 +26,8 @@ spec: kubernetes.io/os: linux ``` +## Related links + Kubernetes also supports much more advanced configuration via node affinity. See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for more examples. 
From 2d68c85146808e258dd589fe46063833ae93056c Mon Sep 17 00:00:00 2001
From: Ori Zohar
Date: Mon, 13 Jul 2020 14:06:15 -0700
Subject: [PATCH 19/19] Minor style changes

---
 howto/hybrid-clusters/README.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/howto/hybrid-clusters/README.md b/howto/hybrid-clusters/README.md
index 2c0208190..af39685ac 100644
--- a/howto/hybrid-clusters/README.md
+++ b/howto/hybrid-clusters/README.md
@@ -1,24 +1,26 @@
 # Deploy to hybrid Linux/Windows Kubernetes clusters

-If you would like to deploy dapr to a Kubernetes cluster that contains both Windows and Linux nodes, you can do so, but there are known limitiations. All dapr control plane components must be run exclusively on Linux enabled nodes. The same is currently true for all Dapr applications. Thus when deploying to hybrid Kubernetes clusters you will need to ensure that Kubernetes knows to place your application containers exclusively on Linux enabled nodes.
+Deploying Dapr to a Kubernetes cluster that contains both Windows and Linux nodes has known limitations. All Dapr control plane components must run exclusively on Linux-enabled nodes. The same is currently true for all Dapr applications. Thus, when deploying to hybrid Kubernetes clusters, you will need to ensure that Kubernetes places your application containers exclusively on Linux-enabled nodes.

 > **Note:** Windows container support for Dapr applications is in progress. Please see: [dapr/dapr#842](https://github.com/dapr/dapr/issues/842).

 ## Installing the Dapr Control Plane

-If you are installing using the Dapr CLI or via helm chart, you can simply follow the normal deployment procedures:
+If you are installing using the Dapr CLI or via a Helm chart, simply follow the normal deployment procedures:
 [Installing Dapr on a Kubernetes cluster](../../getting-started/environment-setup.md#installing-Dapr-on-a-kubernetes-cluster)

-Affinity will be automatically set for kubernetes.io/os=linux.
If you need to override linux to another value, you can do so by setting: +Affinity will be automatically set for kubernetes.io/os=linux. If you need to override Linux to another value, you can do so by setting: + ``` helm install dapr dapr/dapr --set global.daprControlPlaneOs=YOUR_OS ``` + Dapr control plane container images are only provided for Linux, so you shouldn't need to do this unless you really know what you are doing. ## Installing Dapr applications The Dapr sidecar container is currently Linux only. For this reason, if you are writing a Dapr application, you must run it in a Linux container. -When deploying to a hybrid cluster, you must configure your apps to be deployed to only Linux available nodes. One of the simplest ways to do this is to add kubernetes.io/os=linux to your app's nodeSelector. +When deploying to a hybrid cluster, you must configure your applications to be deployed to only Linux available nodes. One of the simplest ways to do this is to add kubernetes.io/os=linux to your app's nodeSelector. ```yaml spec: @@ -28,6 +30,5 @@ spec: ## Related links -Kubernetes also supports much more advanced configuration via node affinity. -See https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ for more examples. + - See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for examples of more advanced configuration via node affinity