Merge branch 'master' into use-cases

Aaron Schlesinger 2020-06-05 16:17:47 -07:00 committed by GitHub
commit 1bd5caf7e8
75 changed files with 870 additions and 205 deletions


@ -1,17 +1,15 @@
# Description
Thank you for helping make the Dapr documentation better!
If you are a new contributor, please see [this contribution guidance](https://github.com/dapr/docs/blob/master/contributing/README.md), which helps keep the Dapr documentation readable, consistent and useful for Dapr users.
If you are contributing a new authored "How To" article, [this template](https://github.com/dapr/docs/blob/master/contributing/howto-template.md) was created to assist you.
In addition, please fill out the following to help reviewers understand this pull request:
## Description
_Please explain the changes you've made_
## Issue reference
We strive to have all PRs opened based on an issue in which the problem or feature has been discussed prior to implementation.
## Checklist
Please make sure you've completed the relevant tasks for this PR, out of the following list:
* [ ] Code compiles correctly
* [ ] Created/updated tests
* [ ] Extended the documentation
_Please reference the issue this PR will close: #[issue number]_

FAQ.md

@ -28,7 +28,7 @@ Istio is not a programming model and does not focus on application level feature
### Relationship between Dapr, Orleans and Service Fabric Reliable Actors
The actors in Dapr are based on the same virtual actor concept that [Orleans](https://www.microsoft.com/research/project/orleans-virtual-actors/) started, meaning that they are activated when called and deactivated after a period of time. If you are familiar with Orleans, Dapr C# actors will be familiar. Dapr C# actors are based on [Service Fabric Reliable Actors](https://docs.microsoft.com/azure/service-fabric/service-fabric-reliable-actors-introduction) (which also came from Orleans) and enable you to take Reliable Actors in Service Fabric and migrate them to other hosting platforms such as Kubernetes or other on-premise environments.
Dapr is also about more than just actors. It provides you with a set of best practice building blocks to build into any microservices application. See the [Dapr overview](https://github.com/dapr/docs/blob/master/overview.md).
### Differences between Dapr and an actor framework


@ -2,6 +2,23 @@
Welcome to the Dapr documentation repository. You can learn more about Dapr from the links below.
## Document versions
Dapr is currently under community development in a preview phase, and the master branch could include breaking changes. Therefore, please ensure that you refer to the right version of the documents for your Dapr runtime version.
| Version | Repo |
|:-------:|:----:|
| v0.7.0 | [Docs](https://github.com/dapr/docs/tree/v0.7.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.7.0) - [CLI](https://github.com/dapr/cli/tree/release-0.7)
| v0.6.0 | [Docs](https://github.com/dapr/docs/tree/v0.6.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.6.0) - [CLI](https://github.com/dapr/cli/tree/release-0.6)
| v0.5.0 | [Docs](https://github.com/dapr/docs/tree/v0.5.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.5.0) - [CLI](https://github.com/dapr/cli/tree/release-0.5)
| v0.4.0 | [Docs](https://github.com/dapr/docs/tree/v0.4.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.4.0) - [CLI](https://github.com/dapr/cli/tree/release-0.4)
| v0.3.0 | [Docs](https://github.com/dapr/docs/tree/v0.3.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.3.0) - [CLI](https://github.com/dapr/cli/tree/release-0.3)
| v0.2.0 | [Docs](https://github.com/dapr/docs/tree/v0.2.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.2.0) - [CLI](https://github.com/dapr/cli/tree/release-0.2)
| v0.1.0 | [Docs](https://github.com/dapr/docs/tree/v0.1.0) - [Runtime](https://github.com/dapr/dapr/tree/v0.1.0) - [CLI](https://github.com/dapr/cli/tree/release-0.1)
## Contents
| Topic | Description |
|-------|-------------|
|**[Overview](./overview)** | An overview of Dapr and how it enables you to build event driven, distributed applications
@ -22,18 +39,3 @@ Welcome to the Dapr documentation repository. You can learn more about Dapr from
| **SDKs** | - [Go SDK](https://github.com/dapr/go-sdk)<br>- [Java SDK](https://github.com/dapr/java-sdk)<br>- [JavaScript SDK](https://github.com/dapr/js-sdk)<br>- [Python SDK](https://github.com/dapr/python-sdk)<br>- [.NET SDK](https://github.com/dapr/dotnet-sdk)<br>- [C++ SDK](https://github.com/dapr/cpp-sdk)<br>- [Rust SDK](https://github.com/dapr/rust-sdk)<br>**Note:** Dapr is language agnostic and provides a [RESTful HTTP API](./reference/api/README.md) in addition to the protobuf clients.
| **[Dapr Presentations](./presentations)** | Previous Dapr presentations and information on how to give your own Dapr presentation


@ -1,12 +1,15 @@
# Bindings
Using bindings, you can trigger your app with events coming in from external systems, or invoke external systems.
Specifically, bindings give you some additional advantages:
* Remove the complexities of connecting to, and polling from, messaging systems such as queues, message buses, etc.
* Focus on business logic and not the implementation details of how to interact with a system
* Keep the code free from SDKs or libraries
* Let Dapr handle retries and failure recovery
* Switch between bindings at run time
* Enable portable applications where environment-specific bindings are set-up and no code changes are required
For a specific example, bindings allow your microservice to respond to incoming Twilio/SMS messages without adding or configuring a third-party Twilio SDK, worrying about polling from Twilio, using WebSockets, and so on.
@ -27,6 +30,7 @@ Every binding has its own unique set of properties. Click the name link to see t
| [RabbitMQ](../../reference/specs/bindings/rabbitmq.md) | ✅ | ✅ | Experimental |
| [Redis](../../reference/specs/bindings/redis.md) | | ✅ | Experimental |
| [Twilio](../../reference/specs/bindings/twilio.md) | | ✅ | Experimental |
| [Twitter](../../reference/specs/bindings/twitter.md) | ✅ | | Experimental |
| [SendGrid](../../reference/specs/bindings/sendgrid.md) | | ✅ | Experimental |
### Amazon Web Service (AWS)
@ -75,7 +79,7 @@ Read the [Create an event-driven app using input bindings](../../howto/trigger-a
## Output bindings
Output bindings allow users to invoke external resources.
An optional payload and metadata can be sent with the invocation request.
In order to invoke an output binding:


@ -55,4 +55,4 @@ func GetHandler(metadata Metadata) fasthttp.RequestHandler {
Your middleware component can be contributed to the https://github.com/dapr/components-contrib repository, under the */middleware* folder. Then submit another pull request against the https://github.com/dapr/dapr repository to register the new middleware type. You'll need to modify the **Load()** method under https://github.com/dapr/dapr/blob/master/pkg/components/middleware/http/registry.go to register your middleware using the **Register** method.
## References
* [How-To: Configure API authorization with OAuth](../../howto/authorization-with-oauth/README.md)


@ -22,7 +22,7 @@ Dapr enables mTLS and all the features described in this document in your applic
## Sidecar-to-App communication
The Dapr sidecar runs close to the application through **localhost**, and is recommended to run under the same network boundary as the app. While many cloud-native systems today consider the pod level (on Kubernetes, for example) a trusted security boundary, Dapr provides users with API-level authentication using tokens. This feature guarantees that even on localhost, only an authenticated caller may call into Dapr.
## Sidecar-to-Sidecar communication
@ -66,7 +66,7 @@ In Kubernetes, when the Dapr system services start, they automatically mount the
In self hosted mode, each system service can be mounted to a filesystem path to get the credentials.
When the Dapr sidecar initializes, it authenticates with the system pods using the mounted leaf certificates and issuer private key. These are mounted as environment variables on the sidecar container.
### mTLS to system services in Kubernetes
The diagram below shows secure communication between the Dapr sidecar and the Dapr Sentry (Certificate Authority), Placement (actor placement) and the Kubernetes Operator system services.
@ -103,4 +103,4 @@ Dapr uses the configured authentication method to authenticate with the underlyi
When deploying on Kubernetes, you can use regular [Kubernetes RBAC]( https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to control access to management activities.
When deploying on Azure Kubernetes Service (AKS), you can use [Azure Active Directory (AD) service principals]( https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals) to control access to management activities and resource management.


@ -34,7 +34,7 @@ az group create --name [your_resource_group] --location [region]
Use Kubernetes version 1.13.x or newer with `--kubernetes-version`
```bash
az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --node-count 2 --kubernetes-version 1.14.7 --enable-addons http_application_routing --enable-rbac --generate-ssh-keys
```
5. Get the access credentials for the Azure Kubernetes cluster
@ -43,8 +43,15 @@ az aks create --resource-group [your_resource_group] --name [your_aks_cluster_na
az aks get-credentials -n [your_aks_cluster_name] -g [your_resource_group]
```
## (optional) Install Helm v3
1. [Install Helm v3 client](https://helm.sh/docs/intro/install/)
> **Note:** The latest Dapr helm chart no longer supports Helm v2. Please migrate from helm v2 to helm v3 by following [this guide](https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/).
2. If you need permissions for the Kubernetes dashboard (e.g. you see errors like `configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"`), execute this command
```bash
kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
```


@ -61,6 +61,11 @@ For Actors How Tos see the SDK documentation
* [Diagnose your services with distributed tracing](./diagnose-with-tracing)
## Security
### Dapr APIs Authentication
* [Enable Dapr APIs token-based authentication](./enable-dapr-api-token-based-authentication)
### Mutual Transport Layer Security (mTLS)
* [Setup and configure mutual TLS between Dapr instances](./configure-mtls)


@ -21,6 +21,10 @@ The following table shows all the supported pod Spec annotations supported by Da
| `dapr.io/sidecar-memory-limit` | Maximum amount of Memory that the Dapr sidecar can use. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-cpu-request` | Amount of CPU that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-memory-request` | Amount of Memory that the Dapr sidecar requests. See valid values [here](https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/). By default this is not set
| `dapr.io/sidecar-liveness-probe-delay-seconds` | Number of seconds after the sidecar container has started before liveness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-liveness-probe-timeout-seconds` | Number of seconds after which the sidecar liveness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-liveness-probe-period-seconds` | How often (in seconds) to perform the sidecar liveness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`
| `dapr.io/sidecar-liveness-probe-threshold` | When the sidecar liveness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unhealthy. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-readiness-probe-delay-seconds` | Number of seconds after the sidecar container has started before readiness probe is initiated. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-readiness-probe-timeout-seconds` | Number of seconds after which the sidecar readiness probe times out. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`
| `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`
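As a hedged illustration (the values below are made up for demonstration, not recommendations), several of the probe annotations above could be tuned together in a Deployment's pod template:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/id: "myapp"
        # Illustrative values: give a slow-starting sidecar more time
        dapr.io/sidecar-liveness-probe-delay-seconds: "10"
        dapr.io/sidecar-liveness-probe-period-seconds: "12"
        dapr.io/sidecar-readiness-probe-delay-seconds: "10"
```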


@ -9,57 +9,65 @@ Dapr can use Redis in two ways:
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [Configuration](#configuration) section.
### Option 1: Creating a Redis Cache in your Kubernetes Cluster using Helm
We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [Installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster:
```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
```
> Note that Dapr's pub/sub functionality requires Redis version 5 or greater. If you intend to use Redis only as a state store (and not for pub/sub), a lower version can be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisHost
  value: redis-master:6379
```
4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which will create a file with your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which will put your redis password in a text file called `password.txt`. Copy the password and delete the two files.
- **Linux/MacOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the outputted password.
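As a quick local sanity check of the decode step (using a made-up password, not a real secret), the base64 round trip can be exercised on its own:

```bash
# Encode a dummy password, then decode it the same way the
# command above decodes the real Redis secret.
ENCODED=$(printf '%s' 'lhDOkwTlp0' | base64)
printf '%s' "$ENCODED" | base64 --decode
```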
Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisPassword
  value: lhDOkwTlp0
```
### Option 2: Creating a managed Azure Cache for Redis service
> **Note**: This approach requires having an Azure Subscription.
1. Open the [Azure Portal](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
1. Fill out the necessary information
1. Click "Create" to kickoff deployment of your Redis instance.
1. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key.
1. We need the hostname of your Redis instance, which we can retrieve from the "Overview" in Azure. It should look like `xxxxxx.redis.cache.windows.net:6380`.
1. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration).
As the connection to Azure is encrypted, make sure to add the following block to the `metadata` section of your `redis.yaml` file.
```yaml
metadata:
- name: enableTLS
  value: "true"
```
> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro) that was introduced by Redis 5.0, which isn't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence.
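Putting the pieces together, a complete `redis.yaml` for a TLS-enabled Azure Cache for Redis instance might look like the sketch below. The host and password values are placeholders; in production, follow the [secret management](../../concepts/secrets/README.md) instructions instead of using plain strings:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: "[your-cache-name].redis.cache.windows.net:6380" # placeholder
  - name: redisPassword
    value: "[your-access-key]" # placeholder
  - name: enableTLS
    value: "true"
```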
### Other options to create a Redis Database
- [AWS Redis](https://aws.amazon.com/redis/)
- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
@ -118,4 +126,5 @@ kubectl apply -f redis-pubsub.yaml
### Standalone
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
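As a sketch of that workflow (the component body here is a minimal placeholder; substitute your real `redis.yaml`):

```bash
# Stage a custom Redis component where `dapr run` can find it.
mkdir -p ./components
cat > ./components/redis.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: "localhost:6379"
EOF
# Then point the CLI at the directory (not executed here):
#   dapr run --components-path ./components --app-id myapp -- <your app command>
ls ./components
```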


@ -11,7 +11,7 @@ Dapr provides different implementation of the underlying system, and allows oper
The first step is to setup the Pub/Sub component.
For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`.
*Note: When running Dapr locally, a pub/sub component YAML is automatically created for you locally. To override, create a `components` directory containing the file and use the flag `--components-path` with the `dapr run` CLI command.*
```yaml
apiVersion: dapr.io/v1alpha1


@ -36,8 +36,8 @@ spec:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/id: "myapp"
        dapr.io/protocol: "grpc"
        dapr.io/port: "5005"
...
```


@ -0,0 +1,106 @@
# Enable Dapr APIs token-based authentication
By default, Dapr relies on the network boundary to limit access to its public API. If you plan on exposing the Dapr API outside of that boundary, or if your deployment demands an additional level of security, consider enabling token authentication for the Dapr APIs. This causes Dapr to require every incoming gRPC and HTTP request to its APIs to include an authentication token before allowing that request to pass through.
## Create a token
Dapr uses [JWT](https://jwt.io/) tokens for API authentication.
> Note: while Dapr itself is not the JWT token issuer in this implementation, being explicit about the use of the JWT standard enables federated implementations in the future (e.g. OAuth2).
To configure API authentication, start by generating your token using any JWT-compatible tool (e.g. https://jwt.io/) and your secret.
> Note that the secret is only necessary to generate the token; Dapr doesn't need to know about or store it.
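For example, an HS256 token can be minted entirely from the shell with `openssl`; the secret and claims below are made-up illustrations, not values Dapr expects:

```bash
# Base64url-encode stdin (single line, URL-safe alphabet, no padding).
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

SECRET='my-demo-secret'                       # made-up secret
HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"iss":"demo"}' | b64url)   # made-up claim
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)
TOKEN="$HEADER.$PAYLOAD.$SIG"
echo "$TOKEN"
```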
## Configure API token authentication in Dapr
The token authentication configuration is slightly different between Kubernetes and self-hosted Dapr deployments:
### Self-hosted
In the self-hosted scenario, Dapr looks for the presence of the `DAPR_API_TOKEN` environment variable. If that environment variable is set when the `daprd` process launches, Dapr enforces authentication on its public APIs:
```shell
export DAPR_API_TOKEN=<token>
```
To rotate the configured token, simply set the `DAPR_API_TOKEN` environment variable to the new value and restart the `daprd` process.
### Kubernetes
In a Kubernetes deployment, Dapr leverages the Kubernetes secrets store to hold the JWT token. To configure Dapr API authentication, start by creating a new secret:
```shell
kubectl create secret generic dapr-api-token --from-literal=token=<token>
```
> Note: the above secret needs to be created in each namespace in which you want to enable Dapr token authentication.
To tell Dapr to use that secret to secure its public APIs, add an annotation to your Deployment template spec:
```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/api-token-secret: "dapr-api-token" # name of the Kubernetes secret
```
When deployed, the Dapr sidecar injector automatically creates a secret reference and injects the actual value into the `DAPR_API_TOKEN` environment variable.
## Rotate a token
### Self-hosted
To rotate the configured token in self-hosted mode, simply set the `DAPR_API_TOKEN` environment variable to the new value and restart the `daprd` process.
### Kubernetes
To rotate the configured token in Kubernetes, update the previously created secret with the new token in each namespace. You can do that using the `kubectl patch` command, but the easiest way to update it in each namespace is by using a manifest:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: dapr-api-token
type: Opaque
data:
  token: <your-new-token> # base64-encoded
```
And then apply it to each namespace:
```shell
kubectl apply --file token-secret.yaml --namespace <namespace-name>
```
To tell Dapr to start using the new token, trigger a rolling upgrade to each one of your deployments:
```shell
kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name>
```
> Note: assuming your service is configured with more than one replica, the key rotation process does not result in any downtime.
## Adding JWT token to client API invocations
Once token authentication is configured in Dapr, all clients invoking the Dapr API must append the JWT token to every request:
### HTTP
In the case of HTTP, Dapr inspects the incoming request for the presence of the `dapr-api-token` HTTP header:
```shell
dapr-api-token: <token>
```
### gRPC
When using gRPC protocol, Dapr will inspect the incoming calls for the API token on the gRPC metadata:
```shell
dapr-api-token[0]: <token>
```
## Related Links
* [Other security related topics](https://github.com/dapr/docs/blob/master/concepts/security/README.md)


@ -47,7 +47,7 @@ spec:
        app: python-app
      annotations:
        dapr.io/enabled: "true"
        dapr.io/id: "cart"
        dapr.io/port: "5000"
...
```


@ -11,7 +11,7 @@ Dapr provides different implementation of the underlying system, and allows oper
The first step is to setup the Pub/Sub component.
For this guide, we'll use Redis Streams, which is also installed by default on a local machine when running `dapr init`.
*Note: When running Dapr locally, a pub/sub component YAML is automatically created for you locally. To override, create a `components` directory containing the file and use the flag `--components-path` with the `dapr run` CLI command.*
```yaml
apiVersion: dapr.io/v1alpha1


An output binding represents a resource that Dapr uses to invoke and send messages to.
For the purpose of this guide, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../concepts/bindings/README.md).
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*
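The component file itself is elided from this hunk; as a hedged sketch, a Kafka binding named `myEvent` might look like the following, where the brokers, topics and consumer group are placeholders:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myEvent
spec:
  type: bindings.kafka
  metadata:
  - name: brokers
    value: "localhost:9092" # placeholder
  - name: topics
    value: "myTopic" # placeholder: topic(s) to consume from
  - name: publishTopic
    value: "myTopic" # placeholder: topic to publish to
  - name: consumerGroup
    value: "group1" # placeholder
  - name: authRequired
    value: "false"
```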
@ -39,12 +40,15 @@ All that's left now is to invoke the bindings endpoint on a running Dapr instanc
We can do so using HTTP:
```bash
curl -X POST http://localhost:3500/v1.0/bindings/myEvent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
```
As seen above, we invoked the `/bindings` endpoint with the name of the binding to invoke, in our case it's `myEvent`.
The payload goes inside the mandatory `data` field, and can be any JSON serializable value.
You'll also notice that there's an `operation` field that tells the binding what we need it to do.
You can check [here](../../reference/specs/bindings) which operations are supported for every output binding.
## References


@ -50,3 +50,6 @@ kubectl apply -f pubsub.yaml
- [Setup GCP Pubsub](./setup-gcp.md)
- [Setup Hazelcast Pubsub](./setup-hazelcast.md)
- [Setup Azure Event Hubs](./setup-azure-eventhubs.md)
- [Setup SNS/SQS](./setup-snssqs.md)
- [Setup MQTT](./setup-mqtt.md)
- [Setup Apache Pulsar](./setup-pulsar.md)


@ -19,7 +19,7 @@ spec:
  type: pubsub.azure.eventhubs
  metadata:
  - name: connectionString
    value: <REPLACE-WITH-CONNECTION-STRING> # Required. "Endpoint=sb://****"
  - name: storageAccountName
    value: <REPLACE-WITH-STORAGE-ACCOUNT-NAME> # Required.
  - name: storageAccountKey
@ -28,8 +28,15 @@ spec:
    value: <REPLACE-WITH-CONTAINER-NAME> # Required.
```
See [here](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature) for how to get the Event Hubs connection string. Note this is not the Event Hubs namespace.
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
## Create consumer groups for each subscriber
For every Dapr app that wants to subscribe to events, create an Event Hubs consumer group whose name matches the app's Dapr ID.
For example, a Dapr app running on Kubernetes with `dapr.io/id: "myapp"` will need an Event Hubs consumer group named `myapp`.
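As a sketch using the Azure CLI, a consumer group for the `myapp` example above could be created like this (the resource group, namespace, and Event Hub names are placeholders):

```bash
# Create an Event Hubs consumer group matching the Dapr app ID "myapp"
az eventhubs eventhub consumer-group create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --eventhub-name myEventHub \
  --name myapp
```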
## Apply the configuration
### In Kubernetes
@ -42,5 +49,4 @@ kubectl apply -f eventhubs.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Azure Event Hubs, replace the contents of `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the contents of `eventhubs.yaml` above (Don't change the filename).
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -47,5 +47,4 @@ kubectl apply -f azuresb.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Azure Service Bus, replace the contents of `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the contents of `azuresb.yaml` above (Don't change the filename).
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -70,4 +70,4 @@ kubectl apply -f messagebus.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory. To use Cloud Pubsub, replace the contents of `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the contents of yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -48,5 +48,4 @@ kubectl apply -f hazelcast.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Hazelcast, replace the `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the hazelcast.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -53,5 +53,4 @@ kubectl apply -f kafka.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Kafka, replace the `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the kafka.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -0,0 +1,113 @@
# Setup MQTT
## Locally
You can run an MQTT broker [locally using Docker](https://hub.docker.com/_/eclipse-mosquitto):
```bash
docker run -d -p 1883:1883 -p 9001:9001 --name mqtt eclipse-mosquitto:1.6.9
```
You can then interact with the server using the client port: `mqtt://localhost:1883`
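To verify the broker, you can use the Mosquitto client tools (assuming `mosquitto_sub` and `mosquitto_pub` are installed locally; the topic name `test` is arbitrary):

```bash
# Subscribe in one terminal...
mosquitto_sub -h localhost -p 1883 -t test

# ...then publish from another
mosquitto_pub -h localhost -p 1883 -t test -m "hello"
```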
## Kubernetes
You can run an MQTT broker in Kubernetes using the following YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
replicas: 1
selector:
matchLabels:
app-name: mqtt-broker
template:
metadata:
labels:
app-name: mqtt-broker
spec:
containers:
- name: mqtt
image: eclipse-mosquitto:1.6.9
imagePullPolicy: IfNotPresent
ports:
- name: default
containerPort: 1883
protocol: TCP
- name: websocket
containerPort: 9001
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: mqtt-broker
labels:
app-name: mqtt-broker
spec:
type: ClusterIP
selector:
app-name: mqtt-broker
ports:
- port: 1883
targetPort: default
name: default
protocol: TCP
- port: 9001
targetPort: websocket
name: websocket
protocol: TCP
```
You can then interact with the server using the client port: `mqtt://mqtt-broker.default.svc.cluster.local:1883`
## Create a Dapr component
The next step is to create a Dapr component for MQTT.
Create the following YAML file named `mqtt.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.mqtt
metadata:
- name: url
value: "mqtt://[username][:password]@host.domain[:port]"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
```
Where:
* **url** (required) is the address of the MQTT broker.
* **qos** (optional) indicates the Quality of Service level (QoS) of the message. (Default: 0)
* **retain** (optional) defines whether the message is saved by the broker as the last known good value for a specified topic. (Default: false)
* **cleanSession** (optional) sets the "clean session" flag in the connect message when the client connects to the MQTT broker. (Default: true)
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md)
## Apply the configuration
### In Kubernetes
To apply the MQTT pubsub to Kubernetes, use the `kubectl` CLI:
```bash
kubectl apply -f mqtt.yaml
```
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
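For example, a sketch of running an app against the component above, where `myapp` and `node app.js` are placeholders for your own app:

```bash
# Point dapr run at the custom components directory containing mqtt.yaml
dapr run --app-id myapp --app-port 3000 --components-path ./components node app.js
```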

View File

@ -58,5 +58,4 @@ kubectl apply -f nats.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use NATS, replace the contents of `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the contents of `nats.yaml` above (Don't change the filename).
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -0,0 +1,49 @@
# Setup Pulsar
## Locally
```bash
docker run -it \
-p 6650:6650 \
-p 8080:8080 \
--mount source=pulsardata,target=/pulsar/data \
--mount source=pulsarconf,target=/pulsar/conf \
apachepulsar/pulsar:2.5.1 \
bin/pulsar standalone
```
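To verify the standalone broker, you can produce a test message with the Pulsar client bundled in the container (the topic name `my-topic` is illustrative):

```bash
# Produce a message to the standalone broker started above
docker exec -it $(docker ps -q -f ancestor=apachepulsar/pulsar:2.5.1) \
  bin/pulsar-client produce my-topic -m "hello-dapr"
```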
## Kubernetes
Refer to the [Helm chart](https://pulsar.apache.org/docs/en/kubernetes-helm/) documentation.
## Create a Dapr component
The next step is to create a Dapr component for Pulsar.
Create the following YAML file named pulsar.yaml:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.pulsar
metadata:
- name: host
value: <REPLACE WITH PULSAR URL> #default is localhost:6650
- name: enableTLS
value: <TRUE/FALSE>
```
## Apply the configuration
To apply the Pulsar pub/sub to Kubernetes, use the kubectl CLI:
`kubectl apply -f pulsar.yaml`
### Running locally
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -72,5 +72,4 @@ kubectl apply -f rabbitmq.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use RabbitMQ, replace the contents of `pubsub.yaml` (or `messagebus.yaml` for Dapr < 0.6.0) file with the contents of `rabbitmq.yaml` above (Don't change the filename).
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -6,8 +6,8 @@ Dapr can use any Redis instance - containerized, running on your local dev machi
### Running locally
The Dapr CLI will automatically create and setup a Redis Streams instance for you when you.
The Redis instance will be installed via Docker when you run `dapr init`, and the component file will be setup with `dapr run`.
The Dapr CLI will automatically create and setup a Redis Streams instance for you.
The Redis instance will be installed via Docker when you run `dapr init`, and the component file will be created in the default components directory (`$HOME/.dapr/components` on Mac/Linux or `%USERPROFILE%\.dapr\components` on Windows).
### Creating a Redis instance in your Kubernetes Cluster using Helm
@ -82,4 +82,4 @@ kubectl apply -f pubsub.yaml
### Standalone
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. When you run an app using `dapr run`, the component file will automatically be created for you in a `components` dir in your current working directory.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -0,0 +1,125 @@
# Setup AWS SNS/SQS for pub/sub
This article describes configuring Dapr to use AWS SNS/SQS for pub/sub in local and Kubernetes environments. For local development, the [localstack project](https://github.com/localstack/localstack) is used to emulate AWS SNS/SQS.
Follow the instructions [here](https://github.com/localstack/localstack#installing) to install the localstack CLI.
## Locally
In order to use localstack with your pub/sub binding, you need to provide the `awsEndpoint` configuration
in the component metadata. The `awsEndpoint` is unnecessary when running against production AWS.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: messagebus
spec:
type: pubsub.snssqs
metadata:
- name: awsEndpoint
value: http://localhost:4566
# Use us-east-1 for localstack
- name: awsRegion
value: us-east-1
```
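With that component in place, localstack can be started locally; a sketch, assuming the `SERVICES` environment variable is used to limit which AWS APIs are emulated:

```bash
# Start localstack with only SNS and SQS enabled, listening on port 4566
SERVICES=sns,sqs localstack start
```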
## Kubernetes
To run localstack on Kubernetes, you can apply the configuration below. Localstack is then
reachable at the DNS name `http://localstack.default.svc.cluster.local:4566`
(assuming this was applied to the default namespace), and this should be used as the `awsEndpoint`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: localstack
spec:
# using the selector, we will expose the running deployments
# this is how Kubernetes knows, that a given service belongs to a deployment
selector:
matchLabels:
app: localstack
replicas: 1
template:
metadata:
labels:
app: localstack
spec:
containers:
- name: localstack
image: localstack/localstack:latest
ports:
# Expose the edge endpoint
- containerPort: 4566
---
kind: Service
apiVersion: v1
metadata:
name: localstack
labels:
app: localstack
spec:
selector:
app: localstack
ports:
- protocol: TCP
port: 4566
targetPort: 4566
type: LoadBalancer
```
## Run in AWS
In order to run in AWS, create an IAM user with permissions to the SNS and SQS services.
Use the account ID and account secret and plug them into the `awsAccountID` and `awsSecret` values
in the component metadata using Kubernetes secrets.
## Create a Dapr component
The next step is to create a Dapr component for SNS/SQS.
Create the following YAML file named `snssqs.yaml`:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: <NAME>
namespace: <NAMESPACE>
spec:
type: pubsub.snssqs
metadata:
# ID of the AWS account with appropriate permissions to SNS and SQS
- name: awsAccountID
value: <AWS account ID>
# Secret for the AWS user
- name: awsSecret
value: <AWS secret>
# The AWS region you want to operate in.
# See this page for valid regions: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html
# Make sure that SNS and SQS are available in that region.
- name: awsRegion
value: us-east-1
```
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here](../../concepts/secrets/README.md).
## Apply the configuration
### In Kubernetes
To apply the SNS/SQS component to Kubernetes, use the `kubectl` command:
```bash
kubectl apply -f snssqs.yaml
```
### Running locally
Place the above component file `snssqs.yaml` in the local components directory (either the default directory or a path you define when running the CLI command `dapr run`).
## Related Links
- [AWS SQS as subscriber to SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html)
- [AWS SNS API reference](https://docs.aws.amazon.com/sns/latest/api/Welcome.html)
- [AWS SQS API reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/Welcome.html)

View File

@ -33,7 +33,7 @@ To deploy in Kubernetes, save the file above to `aws_secret_manager.yaml` and th
kubectl apply -f aws_secret_manager.yaml
```
When running in self hosted mode, place this file in a `components` directory under the Dapr working directory.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## AWS Secret Manager reference example

View File

@ -98,8 +98,6 @@ This section walks you through how to enable an Azure Key Vault secret store to
1. Create a components directory in your application root
All Dapr components are stored in a directory called 'components' below at application root. Create this directory.
```bash
mkdir components
```
@ -129,6 +127,8 @@ spec:
value : "[pfx_certificate_file_local_path]"
```
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
4. Store redisPassword secret to keyvault
```bash

View File

@ -45,7 +45,7 @@ To deploy in Kubernetes, save the file above to `gcp_secret_manager.yaml` and th
kubectl apply -f gcp_secret_manager.yaml
```
When running in self hosted mode, place this file in a `components` directory under the Dapr working directory.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## GCP Secret Manager reference example

View File

@ -43,7 +43,7 @@ To deploy in Kubernetes, save the file above to `vault.yaml` and then run:
kubectl apply -f vault.yaml
```
When running in self hosted mode, place this file in a `components` directory from the Dapr working directory.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## Vault reference example

View File

@ -64,5 +64,4 @@ kubectl apply -f aerospike.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Aerospike, replace the redis.yaml file with the aerospike.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -75,8 +75,7 @@ kubectl apply -f cosmos.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use CosmosDB, replace the redis.yaml file with cosmos.yaml file above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## Partition keys

View File

@ -70,8 +70,7 @@ kubectl apply -f azuretable.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Azure Table Storage, replace the redis.yaml file with azuretable.yaml file above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.
## Partitioning

View File

@ -97,5 +97,4 @@ kubectl apply -f cassandra.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Cassandra, replace the redis.yaml file with the cassandra.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -88,5 +88,4 @@ kubectl apply -f consul.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Consul, replace the redis.yaml file with the consul.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -60,5 +60,4 @@ kubectl apply -f couchbase.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Couchbase, replace the redis.yaml file with the couchbase.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -64,5 +64,4 @@ kubectl apply -f etcd.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use etcd, replace the redis.yaml file with the etcd.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -64,5 +64,4 @@ kubectl apply -f firestore.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Firestore, replace the redis.yaml file with firestore.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -50,5 +50,4 @@ kubectl apply -f hazelcast.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Hazelcast, replace the redis.yaml file with the hazelcast.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -63,5 +63,4 @@ kubectl apply -f memcached.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Memcached, replace the redis.yaml file with the memcached.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -101,5 +101,4 @@ kubectl apply -f mongodb.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use MongoDB, replace the redis.yaml file with the mongodb.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -37,12 +37,13 @@ We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Ku
**Note**: this approach requires having an Azure Subscription.
1. Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
1. Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Cache for Redis creation flow. Log in if necessary.
2. Fill out necessary information and **check the "Unblock port 6379" box**, which will allow us to persist state without SSL.
3. Click "Create" to kick off deployment of your Redis instance.
4. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key.
5. Run `kubectl get svc` and copy the cluster IP of your `redis-master`.
6. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). Set the `redisHost` key to `[IP FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets.
4. Once your instance is created, you'll need to grab the Host name (FQDN) and your access key.
- For the host name, navigate to the resource's "Overview" page and copy the "Host name" value
- For the access key, navigate to "Access Keys" under "Settings" and copy your key
5. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). Set the `redisHost` key to `[HOST NAME FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets.
> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro) that was introduced by Redis 5.0, which isn't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence.
@ -89,4 +90,4 @@ kubectl apply -f redis.yaml
### Standalone
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. When you run an app using `dapr run`, the component file will automatically be created for you in a `components` dir in your current working directory.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -4,7 +4,7 @@
[Follow the instructions](https://docs.microsoft.com/azure/sql-database/sql-database-single-database-get-started?tabs=azure-portal) from the Azure documentation on how to create a SQL database. The database must be created before Dapr consumes it.
**Note: SQL Server state store also supports SQL Server running on VMs.**
**Note: SQL Server state store also supports SQL Server running on VMs.**
In order to setup SQL Server as a state store, you will need the following properties:
@ -13,8 +13,17 @@ In order to setup SQL Server as a state store, you will need the following prope
* **Table Name**: The database table name. Will be created if it does not exist
* **Indexed Properties**: Optional properties from the JSON data which will be indexed and persisted as individual columns
### Create a dedicated user
When connecting with a dedicated user (not `sa`), these authorizations are required for the user, even when the user is the owner of the desired database schema:
- `CREATE TABLE`
- `CREATE TYPE`
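The grants above can be applied with the `sqlcmd` tool; a sketch, where the server, database, credentials, and `dapr_user` name are placeholders:

```bash
# Grant the permissions required by the Dapr SQL Server state store to a dedicated user
sqlcmd -S myserver.database.windows.net -d mydb -U sa -P "<password>" \
  -Q "GRANT CREATE TABLE TO dapr_user; GRANT CREATE TYPE TO dapr_user;"
```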
## Create a Dapr component
> Currently, this component does not support state management for actors.
The next step is to create a Dapr component for SQL Server.
Create the following YAML file named `sqlserver.yaml`:
@ -67,5 +76,4 @@ kubectl apply -f sqlserver.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use SQL Server, replace the redis.yaml file with sqlserver.yaml file above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -68,5 +68,4 @@ kubectl apply -f zookeeper.yaml
### Running locally
The Dapr CLI will automatically create a directory named `components` in your current working directory with a Redis component.
To use Zookeeper, replace the redis.yaml file with the zookeeper.yaml above.
To run locally, create a `components` dir containing the YAML file and provide the path to the `dapr run` command with the flag `--components-path`.

View File

@ -20,7 +20,8 @@ An input binding represents an event resource that Dapr uses to read events from
For the purpose of this HowTo, we'll use a Kafka binding. You can find a list of the different binding specs [here](../../reference/specs/bindings/README.md).
Create the following YAML file, named binding.yaml, and save this to the /components sub-folder in your application directory:
Create the following YAML file, named binding.yaml, and save this to a `components` sub-folder in your application directory.
(Use the `--components-path` flag with `dapr run` to point to your custom components dir)
*Note: When running in Kubernetes, apply this file to your cluster using `kubectl apply -f binding.yaml`*

View File

@ -1,7 +1,7 @@
# Dapr Overview
Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, microservice stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
Dapr is a portable, event-driven runtime that makes it easy for enterprise developers to build resilient, stateless and stateful microservice applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
## Contents:
@ -47,7 +47,7 @@ Dapr exposes its APIs as a sidecar architecture, either as a container or as a p
### Standalone
In standaline mode dapr runs as a separate process from which your service code can call via HTTP or gRPC.
In standalone mode, Dapr runs as a separate process that your service code can call via HTTP or gRPC.
<img src="../images/overview-sidecar.png" width=600>
@ -80,7 +80,8 @@ Furthermore, Dapr can be integrated with any developer framework. For example, i
Dapr can be configured to run on your local developer machine in [self hosted mode](../getting-started). Each running service has a Dapr runtime process (or sidecar) which is configured to use state stores, pub/sub, binding components and the other building blocks.
In self hosted mode, Redis running locally in a container, is installed as the both a default a state store and pub/sub message bus components. See the local Components folder for the yaml files.
In self hosted mode, Redis runs locally in a container and is configured to serve as both the default state store and pub/sub message bus component.
After running `dapr init`, see the `$HOME/.dapr/components` directory (Mac/Linux) or `%USERPROFILE%\.dapr\components` on Windows.
The `dapr-placement` service is responsible for managing the actor distribution scheme and key range settings. This service is only required if you are using Dapr actors. For more information on the actor `Placement` service read [actor overview](../concepts/actors).

View File

@ -17,8 +17,7 @@ Besides the language specific Dapr SDKs, a developer can invoke an actor using t
- [Create Actor Timer](#create-actor-timer)
- [Delete Actor Timer](#delete-actor-timer)
- [Dapr Calling to Service Code](#specifications-for-dapr-calling-to-user-service-code)
- [Get Registered Actors](#get-registered-actors)
- [Activate Actor](#activate-actor)
- [Get Registered Actors](#get-registered-actors)
- [Deactivate Actor](#deactivate-actor)
- [Invoke Actor Method](#invoke-actor-method-1)
- [Invoke Reminder](#invoke-reminder)
@ -523,41 +522,6 @@ drainRebalancedActors | A bool. If true, Dapr will wait for `drainOngoingCallTi
}
```
### Activate actor
Activates an actor by creating an instance of the actor with the specified actorId
#### HTTP Request
```http
POST http://localhost:<appPort>/actors/<actorType>/<actorId>
```
#### HTTP Response Codes
Code | Description
---- | -----------
200 | Request successful
500 | Request failed
404 | Actor not found
#### URL Parameters
Parameter | Description
--------- | -----------
appPort | The application port.
actorType | The actor type.
actorId | The actor ID.
#### Examples:
Example of activating an actor: The example creates an actor of type stormtrooper with an actorId of 50
```shell
curl -X POST http://localhost:3000/actors/stormtrooper/50 \
-H "Content-Type: application/json"
```
### Deactivate actor
Deactivates an actor by persisting the instance of the actor to the state store with the specified actorId
@ -595,7 +559,7 @@ curl -X DELETE http://localhost:3000/actors/stormtrooper/50 \
### Invoke actor method
Invokes a method for an actor with the specified methodName where parameters to the method are passed in the body of the request message and return values are provided in the body of the response message
Invokes a method for an actor with the specified methodName where parameters to the method are passed in the body of the request message and return values are provided in the body of the response message. If the actor is not already running, the app side should [activate](#activating-an-actor) it.
#### HTTP Request
@ -631,7 +595,7 @@ curl -X POST http://localhost:3000/actors/stormtrooper/50/method/performAction \
### Invoke reminder
Invokes a reminder for an actor with the specified reminderName
Invokes a reminder for an actor with the specified reminderName. If the actor is not already running, the app side should [activate](#activating-an-actor) it.
#### HTTP Request
@ -667,7 +631,7 @@ curl -X POST http://localhost:3000/actors/stormtrooper/50/method/remind/checkReb
### Invoke timer
Invokes a timer for an actor rwith the specified timerName
Invokes a timer for an actor with the specified timerName. If the actor is not already running, the app side should [activate](#activating-an-actor) it.
#### HTTP Request
@ -734,6 +698,10 @@ Example of getting a health check response from the app:
curl -X GET http://localhost:3000/healthz \
```
## Activating an Actor
Conceptually, activating an actor means creating the actor's object and adding the actor to a tracking table. Here is an [example](https://github.com/dapr/dotnet-sdk/blob/6c271262231c41b21f3ca866eb0d55f7ce8b7dbc/src/Dapr.Actors/Runtime/ActorManager.cs#L199) from the .NET SDK.
## Querying actor state externally
In order to enable visibility into the state of an actor and allow for complex scenarios such as state aggregation, Dapr saves actor state in external state stores such as databases. As such, it is possible to query for an actor state externally by composing the correct key or query.

View File

@ -34,7 +34,7 @@ If running self hosted locally, place this file in your `components` folder next
If running on kubernetes apply the component to your cluster.
> **Note:** In production never place passwords or secrets within Dapr component files. For information on securely storing and retrieving secrets using secret stores refer to [Setup Secret Store](../../../howto/setup-secret-store)
> **Note:** In production never place passwords or secrets within Dapr component files. For information on securely storing and retrieving secrets using secret stores refer to [Setup Secret Store](../../howto/setup-secret-store)
## Invoking Service Code Through Input Bindings
@ -149,9 +149,12 @@ If ```concurrency``` is not set, it is sent out sequential (the example below sh
}
```
## Sending Messages to Output Bindings
## Invoking Output Bindings
This endpoint lets you invoke an Dapr output binding.
This endpoint lets you invoke a Dapr output binding.
Dapr bindings support various operations, such as `create`.
See the [different specs](../specs/bindings) on each binding to see the list of supported operations.
### HTTP Request
@ -175,12 +178,14 @@ The bindings endpoint receives the following JSON payload:
"data": "",
"metadata": {
"": ""
}
},
"operation": ""
}
```
The `data` field takes any JSON serializable value and acts as the payload to be sent to the output binding.
The metadata is an array of key/value pairs and allows you to set binding specific metadata for each call.
The `metadata` field is an array of key/value pairs and allows you to set binding specific metadata for each call.
The `operation` field tells the Dapr binding which operation it should perform.
### URL Parameters
@ -200,7 +205,8 @@ curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
},
"metadata": {
"key": "redis-key-1"
}
},
"operation": "create"
}'
```

View File

@ -2,7 +2,7 @@
## Publish a message to a given topic
This endpoint lets you publish a payload to multiple consumers who are listening on a ```topic```.
This endpoint lets you publish data to multiple consumers who are listening on a ```topic```.
Dapr guarantees at least once semantics for this endpoint.
### HTTP Request

View File

@ -89,3 +89,8 @@ app.listen(port, () => console.log(`Listening on port ${port}!`));
```
> The response from the remote endpoint will be returned in the request body.
If your service listens on a more nested path (e.g. `/api/v1/add`), Dapr implements a full reverse proxy, so you can append all the necessary path fragments to your request URL like this:
`http://localhost:3500/v1.0/invoke/mathService/method/api/v1/add`
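The invoke URL above can be assembled from its parts; a minimal sketch using the port and names from the example:

```shell
# Sketch: building the invoke URL for a nested method path
DAPR_PORT=3500
APP_ID="mathService"
METHOD_PATH="api/v1/add"
URL="http://localhost:${DAPR_PORT}/v1.0/invoke/${APP_ID}/method/${METHOD_PATH}"
echo "$URL"
```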

View File

@ -121,7 +121,7 @@ This endpoint lets you get the state for a specific key.
### HTTP Request
```http
GET http://localhost:<daprPor>/v1.0/state/<storename>/<key>
GET http://localhost:<daprPort>/v1.0/state/<storename>/<key>
```

View File

@ -23,6 +23,10 @@ spec:
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create
## Additional information
By default the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. This behavior is configurable via the metadata property of the message (all fields are optional).
@ -30,9 +34,7 @@ By default the Azure Blob Storage output binding will auto generate a UUID as bl
Applications publishing to an Azure Blob Storage output binding should send a message with the following contract:
```json
{
"data": {
"message": "Hi"
},
"data": "file content",
"metadata": {
"blobName" : "filename.txt",
"ContentType" : "text/plain",
@ -42,6 +44,7 @@ Applications publishing to an Azure Blob Storage output binding should send a me
"ContentDisposition" : "attachment",
"CacheControl" : "no-cache",
"Custom" : "hello-world",
}
},
"operation": "create"
}
```

View File

@ -27,4 +27,8 @@ spec:
- `collection` is the name of the collection inside the database.
- `partitionKey` is the name of the partitionKey to extract from the payload.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -24,4 +24,8 @@ spec:
- `secretKey` is the AWS secret key.
- `table` is the DynamoDB table name.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -59,9 +59,181 @@ spec:
- `eventSubscriptionName` (Optional) is the name of the event subscription. Event subscription names must be between 3 and 64 characters in length and should use alphanumeric letters only.
## Output Binding Supported Operations
* create
## Output Binding Metadata
- `accessKey` is the Access Key to be used for publishing an Event Grid Event to a custom topic
- `topicEndpoint` is the topic endpoint in which this output binding should publish events
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
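As an illustrative sketch (the component name and values are hypothetical; the field names follow the metadata listed above), an Event Grid output binding component might look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventgrid-output
spec:
  type: bindings.azure.eventgrid
  metadata:
  - name: accessKey
    value: "<accessKey>"
  - name: topicEndpoint
    value: "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events"
```

In production, reference the access key from a secret store rather than placing it inline, per the note above.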
## Additional information
Event Grid Binding creates an [event subscription](https://docs.microsoft.com/en-us/azure/event-grid/concepts#event-subscriptions) when Dapr initializes. Your Service Principal needs to have the RBAC permissions to enable this.
```bash
# First ensure that Azure Resource Manager provider is registered for Event Grid
az provider register --namespace Microsoft.EventGrid
az provider show --namespace Microsoft.EventGrid --query "registrationState"
# Give the SP needed permissions so that it can create event subscriptions to Event Grid
az role assignment create --assignee <clientId> --role "EventGrid EventSubscription Contributor" --scopes <scope>
```
_Make sure also to add quotes around the `[HandshakePort]` in your Event Grid binding component, because Kubernetes expects string values in the config._
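For example, the relevant metadata entry would be quoted like this (a sketch; the `handshakePort` field name is assumed from the binding's spec):

```yaml
spec:
  type: bindings.azure.eventgrid
  metadata:
  - name: handshakePort
    value: "9000"   # quoted so Kubernetes treats it as a string
```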
### Testing locally using ngrok and Dapr standalone mode
- Install [ngrok](https://ngrok.com/download)
- Run locally using custom port `9000` for handshakes
```bash
# Using random port 9000 as an example
ngrok http -host-header=localhost 9000
```
- Configure ngrok's HTTPS endpoint and the custom port in the input binding metadata
- Run Dapr
```bash
# Using default ports for .NET core web api and Dapr as an example
dapr run --app-id dotnetwebapi --app-port 5000 --port 3500 dotnet run
```
### Testing from Kubernetes cluster
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates are not accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
To get started, first create `dapr-annotations.yaml` for Dapr annotations
```yaml
controller:
podAnnotations:
dapr.io/enabled: "true"
dapr.io/id: "nginx-ingress"
dapr.io/port: "80"
```
Then install NGINX ingress controller to your Kubernetes cluster with Helm 3 using the annotations
```bash
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install nginx stable/nginx-ingress -f ./dapr-annotations.yaml -n default
# Get the public IP for the ingress controller
kubectl get svc -l component=controller -o jsonpath='Public IP is: {.items[0].status.loadBalancer.ingress[0].ip}{"\n"}'
```
If deploying to Azure Kubernetes Service, you can follow [the official MS documentation for the rest of the steps](https://docs.microsoft.com/en-us/azure/aks/ingress-tls):
- Add an A record to your DNS zone
- Install cert-manager
- Create a CA cluster issuer
Final step for enabling communication between Event Grid and Dapr is to define `http` and custom port to your app's service and an `ingress` in Kubernetes. This example uses .NET Core web api and Dapr default ports and custom port 9000 for handshakes.
```yaml
# dotnetwebapi.yaml
kind: Service
apiVersion: v1
metadata:
name: dotnetwebapi
labels:
app: dotnetwebapi
spec:
selector:
app: dotnetwebapi
ports:
- name: webapi
protocol: TCP
port: 80
targetPort: 80
- name: dapr-eventgrid
protocol: TCP
port: 9000
targetPort: 9000
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: eventgrid-input-rule
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt
spec:
tls:
- hosts:
- dapr.<your custom domain>
secretName: dapr-tls
rules:
- host: dapr.<your custom domain>
http:
paths:
- path: /api/events
backend:
serviceName: dotnetwebapi
servicePort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: dotnetwebapi
labels:
app: dotnetwebapi
spec:
replicas: 1
selector:
matchLabels:
app: dotnetwebapi
template:
metadata:
labels:
app: dotnetwebapi
annotations:
dapr.io/enabled: "true"
dapr.io/id: "dotnetwebapi"
dapr.io/port: "5000"
spec:
containers:
- name: webapi
image: <your container image>
ports:
- containerPort: 5000
imagePullPolicy: Always
```
Deploy binding and app (including ingress) to Kubernetes
```bash
# Deploy Dapr components
kubectl apply -f eventgrid.yaml
# Deploy your app and Nginx ingress
kubectl apply -f dotnetwebapi.yaml
```
> **Note:** This manifest deploys everything to Kubernetes default namespace.
**Troubleshooting possible issues with Nginx controller**
After the initial deployment the "Daprized" Nginx controller can malfunction. To check the logs and fix the issue (if one exists), follow these steps.
```bash
$ kubectl get pods -l app=nginx-ingress
NAME READY STATUS RESTARTS AGE
nginx-nginx-ingress-controller-649df94867-fp6mg 2/2 Running 0 51m
nginx-nginx-ingress-default-backend-6d96c457f6-4nbj5 1/1 Running 0 55m
$ kubectl logs nginx-nginx-ingress-controller-649df94867-fp6mg nginx-ingress-controller
# If you see 503s logged from calls to webhook endpoint '/api/events' restart the pod
# .."OPTIONS /api/events HTTP/1.1" 503..
$ kubectl delete pod nginx-nginx-ingress-controller-649df94867-fp6mg
# Check the logs again - it should start returning 200
# .."OPTIONS /api/events HTTP/1.1" 200..
```

View File

@ -33,3 +33,7 @@ spec:
- `partitionID` (Optional) ID of the partition to send and receive events.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -45,4 +45,8 @@ spec:
- `client_x509_cert_url` is the GCP credentials project x509 cert url.
- `private_key` is the GCP credentials private key.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -48,4 +48,8 @@ spec:
- `client_x509_cert_url` is the GCP credentials project x509 cert url.
- `private_key` is the GCP credentials private key.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -17,3 +17,7 @@ spec:
- `url` is the HTTP url to invoke.
- `method` is the HTTP verb to use for the request.
## Output Binding Supported Operations
* create

View File

@ -52,6 +52,11 @@ curl -X POST http://localhost:3500/v1.0/bindings/myKafka \
},
"metadata": {
"partitionKey": "key1"
}
},
"operation": "create"
}'
```
## Output Binding Supported Operations
* create

View File

@ -34,3 +34,7 @@ spec:
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -18,4 +18,8 @@ spec:
- `url` is the MQTT broker url.
- `topic` is the topic to listen on or send events to.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -48,6 +48,11 @@ curl -X POST http://localhost:3500/v1.0/bindings/myRabbitMQ \
},
"metadata": {
"ttlInSeconds": "60"
}
},
"operation": "create"
}'
```
## Output Binding Supported Operations
* create

View File

@ -22,3 +22,7 @@ spec:
- `enableTLS` - If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -24,4 +24,8 @@ spec:
- `secretKey` is the AWS secret key.
- `table` is the name of the S3 bucket to write to.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -40,3 +40,7 @@ Example request payload
```
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -42,6 +42,11 @@ curl -X POST http://localhost:3500/v1.0/bindings/myServiceBusQueue \
},
"metadata": {
"ttlInSeconds": "60"
}
},
"operation": "create"
}'
```
## Output Binding Supported Operations
* create

View File

@ -42,8 +42,13 @@ Applications publishing to an Azure SignalR output binding should send a message
},
"metadata": {
"group": "chat123"
}
},
"operation": "create"
}
```
For more information on integrating Azure SignalR into a solution check the [documentation](https://docs.microsoft.com/en-us/azure/azure-signalr/)
## Output Binding Supported Operations
* create

View File

@ -24,4 +24,8 @@ spec:
- `secretKey` is the AWS secret key.
- `topicArn` is the SNS topic name.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -24,4 +24,8 @@ spec:
- `secretKey` is the AWS secret key.
- `queueName` is the SQS queue name.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -45,6 +45,11 @@ curl -X POST http://localhost:3500/v1.0/bindings/myStorageQueue \
},
"metadata": {
"ttlInSeconds": "60"
}
},
"operation": "create"
}'
```
## Output Binding Supported Operations
* create

View File

@ -24,4 +24,8 @@ spec:
- `accountSid` is the Twilio account SID.
- `authToken` is the Twilio auth token.
> **Note:** In production never place passwords or secrets within Dapr components. For information on securely storing and retrieving secrets refer to [Setup Secret Store](../../../howto/setup-secret-store)
## Output Binding Supported Operations
* create

View File

@ -7,7 +7,7 @@ Terminology used below:
- Dapr CLI - the Dapr command line tool. The binary name is dapr (dapr.exe on Windows)
- Dapr runtime - this runs alongside each app. The binary name is daprd (daprd.exe on Windows)
In self hosting mode, running `dapr init` copies the Dapr runtime onto your box and starts the placement service (used for actors) and Redis in containers. These must be present before running `dapr run`.
In self hosting mode, running `dapr init` copies the Dapr runtime onto your box and starts the placement service (used for actors) and Redis in containers. These must be present before running `dapr run`. The Dapr CLI also creates the default components directory (`$HOME/.dapr/components` on Linux/macOS, `%USERPROFILE%\.dapr\components` on Windows) if it does not already exist.
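A sketch of resolving that default path from a POSIX shell (Windows resolution shown here is an approximation using forward slashes):

```shell
# Sketch: resolving the default Dapr components directory per OS
case "$(uname -s 2>/dev/null || echo Windows)" in
  Linux|Darwin) COMPONENTS_DIR="$HOME/.dapr/components" ;;
  *)            COMPONENTS_DIR="$USERPROFILE/.dapr/components" ;;  # %USERPROFILE%\.dapr\components on Windows
esac
echo "$COMPONENTS_DIR"
```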
What happens when `dapr run` is executed?
@ -15,11 +15,12 @@ What happens when `dapr run` is executed?
dapr run --app-id nodeapp --app-port 3000 --port 3500 node app.js
```
First, the Dapr CLI creates the `\components` directory if it does not already exist, and writes two component files representing the default state store and the default message bus: `redis.yaml` and `redis_messagebus.yaml`, respectively. [Code](https://github.com/dapr/cli/blob/d585612185a4a525c05fb62b86e288ccad510006/pkg/standalone/run.go#L254-L288).
First, the Dapr CLI loads the components from the default directory (specified above) for the state store and pub/sub: `statestore.yaml` and `pubsub.yaml`, respectively. [Code](https://github.com/dapr/cli/blob/51b99a988c4d1545fdc04909d6308be121a7fe0c/pkg/standalone/run.go#L196-L266).
*Note as of this writing (Dec 2019) the names have been changed to `statestore.yaml` and `messagebus.yaml` in the master branch, but this change is not in the latest release, 0.3.0*.
You can either add components to the default directory or create your own `components` directory and provide the path to the CLI using the `--components-path` flag.
The YAML files in the components directory contain the configuration for various Dapr components (e.g. state store, pub/sub, bindings). The components themselves must be created prior to using them with Dapr; for example, Redis is launched as a container when running `dapr init`. If these component files already exist, they are not overwritten. This means you could replace the content of `statestore.yaml`, which by default uses Redis, with that of a different state store (e.g. Mongo), and the latter would be what gets used. If you did this and ran `dapr run` again, the Dapr runtime would use the specified Mongo state store.
In order to switch components, simply replace or add the YAML files in the components directory and run `dapr run` again.
For example, by default Dapr will use the Redis state store in the default components dir. You can either override it with a different YAML, or supply your own components path.
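As a hypothetical illustration of such an override (metadata fields abbreviated; consult the MongoDB state store spec for the full list):

```yaml
# statestore.yaml — replaces the default Redis state store with MongoDB
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.mongodb
  metadata:
  - name: host
    value: "localhost:27017"
```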
Then, the Dapr CLI will [launch](https://github.com/dapr/cli/blob/d585612185a4a525c05fb62b86e288ccad510006/pkg/standalone/run.go#L290) two processes: the Dapr runtime and your app (in this sample `node app.js`).