mirror of https://github.com/dapr/docs.git

Merge branch 'master' into master

commit 05f8c7a996
@@ -28,7 +28,9 @@ To use actors, your state store must support multi-item transactions. This mean

- Redis
- MongoDB
- PostgreSQL
- SQL Server
- Azure CosmosDB

## Actor timers and reminders
@@ -49,6 +49,7 @@ The following table summarizes the capabilities of existing data store implement

Store | Strong consistent write | Strong consistent read | ETag
----|----|----|----
Cosmos DB | Yes | Yes | Yes
PostgreSQL | Yes | Yes | Yes
Redis | Yes | Yes | Yes
Redis (clustered) | Yes | No | Yes
SQL Server | Yes | Yes | Yes
@@ -110,6 +111,7 @@ SELECT AVG(value) FROM StateTable WHERE Id LIKE '<app-id>||<thermometer>||*||tem

* [Spec: Dapr actors specification](../../reference/api/actors_api.md)
* [How-to: Set up Azure Cosmos DB store](../../howto/setup-state-store/setup-azure-cosmosdb.md)
* [How-to: Query Azure Cosmos DB store](../../howto/query-state-store/query-cosmosdb-store.md)
* [How-to: Set up PostgreSQL store](../../howto/setup-state-store/setup-postgresql.md)
* [How-to: Set up Redis store](../../howto/setup-state-store/setup-redis.md)
* [How-to: Query Redis store](../../howto/query-state-store/query-redis-store.md)
* [How-to: Set up SQL Server store](../../howto/setup-state-store/setup-sqlserver.md)
@@ -6,11 +6,13 @@ Dapr can be run in either self hosted or Kubernetes modes. Running Dapr runtime

- [Prerequisites](#prerequisites)
- [Installing Dapr CLI](#installing-dapr-cli)
- [Installing Dapr in self-hosted mode](#installing-dapr-in-self-hosted-mode)
- [Installing Dapr on a Kubernetes cluster](#installing-dapr-on-a-kubernetes-cluster)

## Prerequisites

By default, Dapr installs with a developer environment that uses Docker containers to get you started easily. However, Dapr does not depend on Docker to run (see [here](https://github.com/dapr/cli/blob/master/README.md) for instructions on installing Dapr locally without Docker using slim init). This getting started guide assumes Dapr is installed along with this developer environment.

- Install [Docker](https://docs.docker.com/install/)

> For Windows users, ensure that `Docker Desktop For Windows` uses Linux containers.
@@ -61,9 +63,11 @@ Each release of Dapr CLI includes various OSes and architectures. These binary v

## Installing Dapr in self-hosted mode

### Initialize Dapr using the CLI

Install Dapr by running `dapr init` from a command prompt.
By default, during initialization the Dapr CLI installs the Dapr binaries and sets up a developer environment to help you get started easily with Dapr. This environment uses Docker containers, which is why Docker is listed as a prerequisite.

> If you prefer to run Dapr without this environment and with no dependency on Docker, see the CLI documentation for usage of the `--slim` flag with the init CLI command [here](https://github.com/dapr/cli/blob/master/README.md). Note: if you are a new user, it is strongly recommended to install Docker and use the regular init command.

> For Linux users, if you run your Docker commands with sudo, you need to use "**sudo dapr init**".
> For Windows users, make sure that you run the cmd terminal in administrator mode.
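As a quick sketch of the two initialization paths described above (both commands come from the Dapr CLI referenced in this guide):

```bash
# Default init: installs the Dapr binaries and the Docker-based developer environment
dapr init

# Slim init: installs only the binaries, with no dependency on Docker
dapr init --slim
```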
@@ -82,7 +86,7 @@ If you prefer you can also install to an alternate location by using `--install-

```bash
$ dapr init --install-path /home/user123/mydaprinstall
```

To see that Dapr has been installed successfully, from a command prompt run the `docker ps` command and check that the `daprio/dapr:latest` and `redis` container images are both running.

### Install a specific runtime version
@@ -98,9 +102,9 @@ cli version: v0.1.0

```
runtime version: v0.1.0
```

### Uninstall Dapr in self-hosted mode

Uninstalling removes the Placement service container or the Placement service binary.

```bash
$ dapr uninstall
```
@@ -111,7 +115,13 @@ It won't remove the Redis or Zipkin containers by default in case you were using

```bash
$ dapr uninstall --all
```

**You should always run `dapr uninstall` before running another `dapr init`.**

To uninstall Dapr from a custom install path, run:

```bash
$ dapr uninstall --install-path /path/to/binary
```

## Installing Dapr on a Kubernetes cluster
@@ -135,6 +145,11 @@ You can install Dapr on any Kubernetes cluster. Here are some helpful links:

- [Setup Google Cloud Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart)
- [Setup Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)

> **Note:** The Dapr control plane containers are currently only distributed as Linux containers.
> Your Kubernetes cluster must contain available Linux-capable nodes.
> Both the Dapr CLI and the Dapr Helm chart automatically deploy with affinity for nodes with the label kubernetes.io/os=linux.
> For more information see [Deploying to a Hybrid Linux/Windows K8s Cluster](../howto/hybrid-clusters/)

### Using the Dapr CLI

You can install Dapr to a Kubernetes cluster using the CLI.
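A minimal sketch of that CLI flow, assuming `kubectl` is already configured against your cluster; `--kubernetes` is the CLI's cluster-install flag, and `dapr-system` is assumed to be the namespace it deploys into:

```bash
# Install the Dapr control plane into the cluster targeted by the current kubectl context
dapr init --kubernetes

# Verify the control plane pods are running
kubectl get pods -n dapr-system
```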
@@ -89,6 +89,7 @@ For Actors How Tos see the SDK documentation

* [Sidecar configuration on Kubernetes](./configure-k8s)
* [Autoscale on Kubernetes using KEDA and Dapr bindings](./autoscale-with-keda)
* [Deploy to hybrid Linux/Windows Kubernetes clusters](./hybrid-clusters)

## Developer tooling

### Using Visual Studio Code
@@ -34,8 +34,22 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku

4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:

    - **Windows**: Run the commands below:

    ```powershell
    # Create a file with your encoded password.
    kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64
    # Put your Redis password in a text file called `password.txt`.
    certutil -decode encoded.b64 password.txt
    # Copy the password and delete the two files.
    ```

    - **Windows**: If you are using PowerShell, it is even easier:

    ```powershell
    PS C:\> $base64pwd=kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}"
    PS C:\> $redispassword=[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($base64pwd))
    PS C:\> $base64pwd=""
    PS C:\> $redispassword
    ```

    - **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the outputted password.

    Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example:
@@ -46,7 +60,7 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku

```yaml
    value: lhDOkwTlp0
```

### Option 2: Creating an Azure Cache for Redis service

> **Note**: This approach requires having an Azure Subscription.
@@ -124,7 +138,9 @@ kubectl apply -f redis-state.yaml

```bash
kubectl apply -f redis-pubsub.yaml
```

### Self Hosted Mode

By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

If you initialized Dapr using `dapr init --slim`, the Dapr CLI did not create a Redis instance or a default configuration file for it. Follow [these instructions](#Creating-a-Redis-Store) to create a Redis store. Create the `redis.yaml` following the configuration [instructions](#Configuration) in a `components` dir and provide the path to the `dapr run` command with the flag `--components-path`.
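For example, a run command pointing at a custom components directory might look like the following sketch (the app id, port, and app command are illustrative placeholders):

```bash
dapr run --app-id myapp --app-port 3000 --components-path ./components -- node app.js
```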
@@ -0,0 +1,34 @@

# Deploy to hybrid Linux/Windows Kubernetes clusters

Deploying Dapr to a Kubernetes cluster that contains both Windows and Linux nodes has known limitations. All Dapr control plane components must run exclusively on Linux-enabled nodes, and the same is currently true for all Dapr applications. When deploying to hybrid Kubernetes clusters, you therefore need to ensure that Kubernetes places your application containers exclusively on Linux-enabled nodes.

> **Note:** Windows container support for Dapr applications is in progress. Please see: [dapr/dapr#842](https://github.com/dapr/dapr/issues/842).

## Installing the Dapr Control Plane

If you are installing using the Dapr CLI or via a Helm chart, simply follow the normal deployment procedures:
[Installing Dapr on a Kubernetes cluster](../../getting-started/environment-setup.md#installing-Dapr-on-a-kubernetes-cluster)

Affinity will be automatically set for kubernetes.io/os=linux. If you need to override Linux with another value, you can do so by setting:

```
helm install dapr dapr/dapr --set global.daprControlPlaneOs=YOUR_OS
```

Dapr control plane container images are only provided for Linux, so you shouldn't need to do this unless you really know what you are doing.

## Installing Dapr applications

The Dapr sidecar container is currently Linux only. For this reason, if you are writing a Dapr application, you must run it in a Linux container.

When deploying to a hybrid cluster, you must configure your applications to be deployed only to Linux-enabled nodes. One of the simplest ways to do this is to add kubernetes.io/os=linux to your app's nodeSelector.

```yaml
spec:
  nodeSelector:
    kubernetes.io/os: linux
```
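For context, here is a hedged sketch of where that snippet sits in a complete Deployment manifest; the app name, image, and the exact Dapr annotation names are assumptions to check against your Dapr version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        dapr.io/enabled: "true"   # inject the Dapr sidecar (Linux-only today)
        dapr.io/id: "myapp"
    spec:
      # Pin the pod (app container plus Dapr sidecar) to Linux nodes
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: myapp
        image: myregistry/myapp:latest   # placeholder image
```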
## Related links

- See the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) for examples of more advanced configuration via node affinity
@@ -0,0 +1,63 @@

# Self hosted mode without containers

This article provides guidance on running Dapr in self-hosted mode without Docker.

## Prerequisites

- [Dapr CLI](../../getting-started/environment-setup.md#installing-dapr-cli)

## Initialize Dapr without containers

The Dapr CLI provides an option to initialize Dapr using slim init, without the default creation of a development environment that depends on Docker. To initialize Dapr with slim init, after installing the Dapr CLI use the following command:

```bash
dapr init --slim
```

In this mode two different binaries are installed: `daprd` and `placement`. The `placement` binary is needed to enable [actors](../../concepts/actors/README.md) in a Dapr self-hosted installation.

In this mode no default components such as Redis are installed for state management or pub/sub. This means that, aside from [Service Invocation](../../concepts/service-invocation/README.md), no other building block functionality is available on install out of the box. You are free to set up your own environment and custom components. Actor-based service invocation is also possible if a state store is configured, as explained in the following sections.

## Service invocation

See [this sample](https://github.com/dapr/samples/tree/master/11.hello-dapr-slim) for an example of how to perform service invocation in this mode.

## Enabling state management or pub/sub

See configuring Redis in self hosted mode [without docker](../../howto/configure-redis/README.md#Self-Hosted-Mode-without-Containers) to enable a local state store or pub/sub broker for messaging.

## Enabling actors

The placement service must be run locally to enable actor placement. A [transactional state store](#Enabling-state-management-or-pub/sub) must also be enabled for actors.

By default, the `placement` binary is installed in `/usr/local/bin` on Linux/MacOS and in `c:\dapr` on Windows.

```bash
$ /usr/local/bin/placement

INFO[0000] starting Dapr Placement Service -- version 0.8.0 -- commit 74db927  instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] log level set to: info  instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] metrics server started on :9090/  instance=host.localhost.name scope=dapr.metrics type=log ver=0.8.0
INFO[0000] placement Service started on port 50005  instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0000] Healthz server is listening on :8080  instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
```

From here you can follow the sample created for the [java-sdk](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/actors/http), [python-sdk](https://github.com/dapr/python-sdk/tree/master/examples/demo_actor) or [dotnet-sdk](https://github.com/dapr/dotnet-sdk/tree/master/samples/Actor) for running an application with actors enabled.

Update the state store configuration files so that the Redis host and password match your setup. Additionally, to enable it as an actor state store, add a metadata entry similar to the [sample Java Redis component](https://github.com/dapr/java-sdk/blob/master/examples/components/redis.yaml) definition:

```yaml
  - name: actorStateStore
    value: "true"
```
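With the component updated, a sketch of the overall flow for one of those samples might look like this; the app id and app command are illustrative, and `--components-path` is the flag mentioned above:

```bash
# Terminal 1: start the placement service (Linux/MacOS default path)
/usr/local/bin/placement

# Terminal 2: run the actor application against your customized components directory
dapr run --app-id demo-actor --components-path ./components -- <your app command>
```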
The logs of the placement service are updated whenever a host that uses actors is added or removed, similar to the following output:

```
INFO[0446] host added: 192.168.1.6  instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
INFO[0450] host removed: 192.168.1.6  instance=host.localhost.name scope=dapr.placement type=log ver=0.8.0
```

## Cleanup

Follow the uninstall [instructions](../../getting-started/environment-setup.md#Uninstall-Dapr-in-self-hosted-mode-(without-docker)) to remove the binaries.
@@ -20,19 +20,33 @@ spec:

```yaml
  - name: connectionString
    value: <REPLACE-WITH-CONNECTION-STRING> # Required.
  - name: timeoutInSec
    value: <REPLACE-WITH-TIMEOUT-IN-SEC> # Optional. Default: "60". Timeout for sending messages and management operations.
  - name: handlerTimeoutInSec
    value: <REPLACE-WITH-HANDLER-TIMEOUT-IN-SEC> # Optional. Default: "60". Timeout for invoking the app handler.
  - name: disableEntityManagement
    value: <REPLACE-WITH-DISABLE-ENTITY-MANAGEMENT> # Optional. Default: false. When set to true, topics and subscriptions do not get created automatically.
  - name: maxDeliveryCount
    value: <REPLACE-WITH-MAX-DELIVERY-COUNT> # Optional. Defines the number of attempts the server will make to deliver a message.
  - name: lockDurationInSec
    value: <REPLACE-WITH-LOCK-DURATION-IN-SEC> # Optional. Defines the length in seconds that a message will be locked for before expiring.
  - name: lockRenewalInSec
    value: <REPLACE-WITH-LOCK-RENEWAL-IN-SEC> # Optional. Default: "20". Defines the frequency at which buffered message locks will be renewed.
  - name: maxActiveMessages
    value: <REPLACE-WITH-MAX-ACTIVE-MESSAGES> # Optional. Default: "10000". Defines the maximum number of messages to be buffered or processed at once.
  - name: maxActiveMessagesRecoveryInSec
    value: <REPLACE-WITH-MAX-ACTIVE-MESSAGES-RECOVERY-IN-SEC> # Optional. Default: "2". Defines the number of seconds to wait once the maximum active message limit is reached.
  - name: maxConcurrentHandlers
    value: <REPLACE-WITH-MAX-CONCURRENT-HANDLERS> # Optional. Defines the maximum number of concurrent message handlers.
  - name: prefetchCount
    value: <REPLACE-WITH-PREFETCH-COUNT> # Optional. Defines the number of prefetched messages (use for high throughput / low latency scenarios).
  - name: defaultMessageTimeToLiveInSec
    value: <REPLACE-WITH-MESSAGE-TIME-TO-LIVE-IN-SEC> # Optional.
  - name: autoDeleteOnIdleInSec
    value: <REPLACE-WITH-AUTO-DELETE-ON-IDLE-IN-SEC> # Optional.
```

> __NOTE:__ The above settings are shared across all topics that use this component.

The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
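As a sketch, a minimal component might set only the required connection string plus a couple of the tuning knobs above; the component name and the connection string values are placeholders, and the `pubsub.azure.servicebus` type should be verified against your Dapr version:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.azure.servicebus
  metadata:
  - name: connectionString
    value: "Endpoint=sb://<yournamespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<yourkey>"
  - name: maxConcurrentHandlers
    value: "10"
  - name: prefetchCount
    value: "100"
```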
## Apply the configuration
@@ -4,10 +4,12 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre

## Contents

- [Use Azure Key Vault secret store in Kubernetes mode using Managed Identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-using-managed-identities)
- [Contents](#contents)
- [Prerequisites](#prerequisites)
- [Setup Kubernetes to use Managed identities and Azure Key Vault](#setup-kubernetes-to-use-managed-identities-and-azure-key-vault)
- [Use Azure Key Vault secret store in Kubernetes mode with managed identities](#use-azure-key-vault-secret-store-in-kubernetes-mode-with-managed-identities)
- [References](#references)

## Prerequisites
@@ -32,39 +34,68 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre

   ```bash
   az keyvault create --location [region] --name [your keyvault] --resource-group [your resource group]
   ```

3. Create the managed identity (optional)

   This step is required only if the AKS cluster is provisioned without the flag "--enable-managed-identity". If the cluster is provisioned with managed identity, it is suggested to use the autogenerated managed identity that is associated to the resource group MC_*.

   ```bash
   $identity = az identity create -g [your resource group] -n [you managed identity name] -o json | ConvertFrom-Json
   ```

   Below is the command to retrieve the managed identity in the autogenerated scenario:

   ```bash
   az aks show -g <AKSResourceGroup> -n <AKSClusterName>
   ```

   For more detail about the roles to assign when integrating AKS with Azure services, see [Role Assignment](https://github.com/Azure/aad-pod-identity/blob/master/docs/readmes/README.role-assignment.md).

4. Retrieve the Managed Identity ID

   The two main scenarios are:

   - Service Principal; in this case the resource group is the one in which the AKS cluster is deployed:

   ```bash
   $clientId= az aks show -g <AKSResourceGroup> -n <AKSClusterName> --query servicePrincipalProfile.clientId -otsv
   ```

   - Managed Identity; in this case the resource group is the one in which the AKS cluster is deployed:

   ```bash
   $clientId= az aks show -g <AKSResourceGroup> -n <AKSClusterName> --query identityProfile.kubeletidentity.clientId -otsv
   ```

5. Assign the Reader role to the managed identity

   For an AKS cluster, the cluster resource group is the one with the MC_ prefix, which contains all of the infrastructure resources associated with the cluster, such as VM/VMSS.

   ```bash
   az role assignment create --role "Reader" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
   ```

6. Assign the Managed Identity Operator role to the AKS Service Principal

   Refer to the previous step regarding which resource group to use and which identity to assign.

   ```bash
   az role assignment create --role "Managed Identity Operator" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]

   az role assignment create --role "Virtual Machine Contributor" --assignee $clientId --scope /subscriptions/[your subscription id]/resourcegroups/[your resource group]
   ```

7. Add a policy to the Key Vault so the managed identity can read secrets

   ```bash
   az keyvault set-policy --name [your keyvault] --spn $clientId --secret-permissions get list
   ```

8. Enable AAD Pod Identity on AKS

   ```bash
   kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml

   # For AKS clusters, deploy the MIC and AKS add-on exception by running:
   kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/mic-exception.yaml
   ```

9. Configure the Azure Identity and AzureIdentityBinding yaml

   Save the following yaml as azure-identity-config.yaml:
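The body of azure-identity-config.yaml falls outside this hunk; as a hedged sketch, an AzureIdentity plus AzureIdentityBinding pair for aad-pod-identity typically resembles the following, where all bracketed values are placeholders and the apiVersion should be checked against your aad-pod-identity release:

```yaml
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: [you managed identity name]
spec:
  type: 0  # 0 indicates a user-assigned managed identity
  ResourceID: [your managed identity resource id]
  ClientID: [your managed identity client id]
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: [you managed identity name]-binding
spec:
  AzureIdentity: [you managed identity name]
  Selector: [you managed identity selector]
```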
@@ -87,7 +118,7 @@ This document shows how to enable Azure Key Vault secret store using [Dapr Secre

```yaml
   Selector: [you managed identity selector]
```

10. Deploy the azure-identity-config.yaml:

    ```bash
    kubectl apply -f azure-identity-config.yaml
    ```
|
@ -55,6 +55,7 @@ kubectl apply -f statestore.yaml
|
|||
* [Setup Hazelcast](./setup-hazelcast.md)
|
||||
* [Setup Memcached](./setup-memcached.md)
|
||||
* [Setup MongoDB](./setup-mongodb.md)
|
||||
* [Setup PostgreSQL](./setup-postgresql.md)
|
||||
* [Setup Redis](./setup-redis.md)
|
||||
* [Setup Zookeeper](./setup-zookeeper.md)
|
||||
* [Setup Azure CosmosDB](./setup-azure-cosmosdb.md)
|
||||
|
|
|
@@ -2,11 +2,11 @@

## Creating an Azure CosmosDB account

[Follow the instructions](https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account) from the Azure documentation on how to create an Azure CosmosDB account. The database and collection must be created in CosmosDB before Dapr can use it.

**Note: The partition key for the collection must be named "/partitionKey" (case-sensitive).**

In order to setup CosmosDB as a state store, you need the following properties:

* **URL**: the CosmosDB URL. For example: https://******.documents.azure.com:443/
* **Master Key**: the key to authenticate to the CosmosDB account
@@ -17,7 +17,7 @@ In order to setup CosmosDB as a state store, you will need the following propert

The next step is to create a Dapr component for CosmosDB.

Create the following YAML file named `cosmosdb.yaml`:

```yaml
apiVersion: dapr.io/v1alpha1
```
@@ -40,6 +40,25 @@ spec:

The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)

Here is an example of what the values could look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value: https://accountname.documents.azure.com:443
  - name: masterKey
    value: thekey==
  - name: database
    value: db1
  - name: collection
    value: c1
```

The following example uses the Kubernetes secret store to retrieve the secrets:
@@ -63,6 +82,13 @@ spec:

```yaml
    value: <REPLACE-WITH-COLLECTION>
```

If you wish to use CosmosDB as an actor store, append the following to the yaml.

```yaml
  - name: actorStateStore
    value: "true"
```

## Apply the configuration

### In Kubernetes
@@ -75,13 +101,14 @@ kubectl apply -f cosmos.yaml

### Running locally

To run locally, create a YAML file as described above and provide the path to the `dapr run` command with the flag `--components-path`. See [this](https://github.com/dapr/cli#use-non-default-components-path) or run `dapr run --help` for more information on the path.

## Partition keys

For **non-actor state** operations, the Azure CosmosDB state store will use the `key` property provided in the requests to the Dapr API to determine the CosmosDB partition key. This can be overridden by specifying a metadata field in the request with a key of `partitionKey` and a value of the desired partition.

The following operation will use `nihilus` as the partition key value sent to CosmosDB:

```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
```
@@ -93,3 +120,50 @@ curl -X POST http://localhost:3500/v1.0/state/<store_name> \

```shell
  }
]'
```

For **non-actor** state operations, if you want to control the CosmosDB partition, you can specify it in metadata. Reusing the example above, here is how to put it under the `mypartition` partition:

```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name> \
  -H "Content-Type: application/json"
  -d '[
        {
          "key": "nihilus",
          "value": "darth",
          "metadata": {
            "partitionKey": "mypartition"
          }
        }
      ]'
```

For **non-actor** state operations, here is how you would specify the partition for a transaction. The metadata field `partitionKey` must be specified in all items:

```shell
curl -X POST http://localhost:3500/v1.0/state/<store_name>/transaction \
  -H "Content-Type: application/json"
  -d '[
        {
          "operation": "upsert",
          "request": {
            "key": "key1",
            "value": "myData",
            "metadata": {
              "partitionKey": "mypartition"
            }
          }
        },
        {
          "operation": "delete",
          "request": {
            "key": "key2",
            "metadata": {
              "partitionKey": "mypartition"
            }
          }
        }
      ]'
```

For **actor** state operations, the partition key is generated by Dapr using the appId, the actor type, and the actor id, such that data for the same actor always ends up under the same partition (you do not need to specify it). This is because actor state operations must use transactions, and in CosmosDB the items in a transaction must be in the same partition.
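For illustration, using the actor key format shown earlier in these docs (`<app-id>||<actor-type>||<actor-id>||<key>`), the state for a hypothetical thermometer actor would be stored under a key such as:

```
myapp||thermometer||112||temperature
```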
@@ -0,0 +1,58 @@

# Setup PostgreSQL

This article provides guidance on configuring a PostgreSQL state store.

## Create a PostgreSQL Store

Dapr can use any PostgreSQL instance. If you already have a running instance of PostgreSQL, move on to the [Create a Dapr component](#create-a-dapr-component) section.

1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker CE with the following command:

   This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of "postgres".

   ```bash
   docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres
   ```

2. Create a database for state data.

   Either the default "postgres" database can be used, or you can create a new database for storing state data.

   To create a new database in PostgreSQL, run the following SQL command:

   ```SQL
   create database dapr_test
   ```
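If PostgreSQL is running in the Docker container started above, one way to issue that command is through `psql` inside the container; the container id is a placeholder:

```bash
# Open psql as the default "postgres" user and create the state database
docker exec -it <container-id> psql -U postgres -c "create database dapr_test"
```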
## Create a Dapr component

Create a file called `postgres.yaml`, paste the following and replace the `<CONNECTION STRING>` value with your connection string. The connection string is a standard PostgreSQL connection string. For example, `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=dapr_test"`. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html), specifically Keyword/Value Connection Strings, for information on how to define a connection string.

If you also want to configure PostgreSQL to store actors, add the `actorStateStore` configuration element shown below.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.postgresql
  metadata:
  - name: connectionString
    value: "<CONNECTION STRING>"
  - name: actorStateStore
    value: "true"
```

The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).

## Apply the configuration

### In Kubernetes

To apply the PostgreSQL state store to Kubernetes, use the `kubectl` CLI:

```bash
kubectl apply -f postgres.yaml
```

### Running locally

To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
@@ -11,9 +11,10 @@

| Hazelcast | :white_check_mark: | :x: |
| Memcached | :white_check_mark: | :x: |
| MongoDB | :white_check_mark: | :white_check_mark: |
| PostgreSQL | :white_check_mark: | :white_check_mark: |
| Redis | :white_check_mark: | :white_check_mark: |
| Zookeeper | :white_check_mark: | :x: |
| Azure CosmosDB | :white_check_mark: | :white_check_mark: |
| Azure SQL Server | :white_check_mark: | :white_check_mark: |
| Azure Table Storage | :white_check_mark: | :x: |
| Google Cloud Firestore | :white_check_mark: | :x: |
@@ -35,7 +35,7 @@ Each of these building blocks is independent, meaning that you can use one, some

| Building Block | Description |
|----------------|-------------|
| **[Service Invocation](../concepts/service-invocation)** | Resilient service-to-service invocation enables method calls, including retries, on remote services wherever they are located in the supported hosting environment. |
| **[State Management](../concepts/state-management)** | With state management for storing key/value pairs, long running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and can include Azure CosmosDB, Azure SQL Server, PostgreSQL, AWS DynamoDB or Redis among others. |
| **[Publish and Subscribe Messaging](../concepts/publish-subscribe-messaging)** | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides an at-least-once message delivery guarantee. |
| **[Resource Bindings](../concepts/bindings)** | Resource bindings with triggers build further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc. |
| **[Distributed Tracing](../concepts/observability/traces.md)** | Dapr supports distributed tracing to easily diagnose and observe inter-service calls in production using the W3C Trace Context standard. |
@@ -159,7 +159,7 @@ See the [different specs](../specs/bindings) on each binding to see the list of

### HTTP Request

```http
POST/PUT http://localhost:<daprPort>/v1.0/bindings/<name>
```

### HTTP Response codes
|
@ -219,7 +219,7 @@ curl -X "DELETE" http://localhost:3500/v1.0/state/starwars/planet -H "ETag: xxxx
|
|||
|
||||
## Configuring state store for actors
|
||||
|
||||
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently mongodb and redis implement the transactional state store interface.
|
||||
Actors don't support multiple state stores and require a transactional state store to be used with Dapr. Currently Mongodb, Redis, PostgreSQL, SQL Server, and Azure CosmosDB implement the transactional state store interface.
|
||||
To specify which state store to be used for actors, specify value of property `actorStateStore` as true in the metadata section of the state store component yaml file.
|
||||
Example: Following components yaml will configure redis to be used as the state store for Actors.
|
||||
|
||||
|
|
|
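The example yaml itself falls outside this hunk; a sketch consistent with the Redis components shown elsewhere in these docs would resemble the following, with host and password as placeholders:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  # Marks this transactional store as the actor state store
  - name: actorStateStore
    value: "true"
```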
@@ -1,5 +1,7 @@

# Twitter Binding Spec

The Twitter binding supports both `input` and `output` binding configurations. First the common part:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
```
@@ -17,6 +19,42 @@ spec:

```yaml
    value: "****" # twitter api access token, required
  - name: accessSecret
    value: "****" # twitter api access secret, required
```

For input bindings, where Tweets matching the query are streamed to the user service, the above component must also include a query:

```yaml
  - name: query
    value: "dapr" # your search query, required
```

For output binding invocation, the user code has to invoke the binding:

```shell
POST http://localhost:3500/v1.0/bindings/twitter
```

Where the payload is:

```json
{
  "data": "",
  "metadata": {
    "query": "twitter-query",
    "lang": "optional-language-code",
    "result": "valid-result-type"
  },
  "operation": "get"
}
```

The metadata parameters are:

* `query` - any valid Twitter query (e.g. `dapr` or `dapr AND serverless`). See [Twitter docs](https://developer.twitter.com/en/docs/tweets/rules-and-filtering/overview/standard-operators) for more details on advanced query formats
* `lang` - (optional, default: `en`) restricts resulting tweets to the given language using the [ISO 639-1 language code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
* `result` - (optional, default: `recent`) specifies the tweet query result type. Valid values include:
  * `mixed` - both popular and real-time results
  * `recent` - most recent results
  * `popular` - most popular results

You can see an example of the JSON data that the Twitter binding returns [here](https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets)
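Putting those pieces together, a hedged end-to-end invocation from the command line might look like this, where 3500 is assumed to be the default Dapr HTTP port:

```shell
curl -X POST http://localhost:3500/v1.0/bindings/twitter \
  -H "Content-Type: application/json" \
  -d '{
        "data": "",
        "metadata": {
          "query": "dapr AND serverless",
          "lang": "en",
          "result": "recent"
        },
        "operation": "get"
      }'
```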