Move component specs out of the docs repo

Aaron Crawfis 2020-03-03 12:51:02 -08:00
parent c63784fc8d
commit e96de9d7cc
21 changed files with 1 addition and 691 deletions

@@ -16,26 +16,7 @@ Bindings are developed independently of Dapr runtime. You can view and contribut
Every binding has its own unique set of properties. Click the name link to see the component YAML for each binding.
| Name | Input Binding | Output Binding | Status |
| ------------- | -------------- | ------------- | ------------- |
| [HTTP](./specs/http.md) | | V | Experimental |
| [Kafka](./specs/kafka.md) | V | V | Experimental |
| [Kubernetes Events](./specs/kubernetes.md) | V | | Experimental |
| [MQTT](./specs/mqtt.md) | V | V | Experimental |
| [RabbitMQ](./specs/rabbitmq.md) | V | V | Experimental |
| [Redis](./specs/redis.md) | | V | Experimental |
| [Twilio SMS](./specs/twilio.md) | | V | Experimental |
| [AWS DynamoDB](./specs/dynamodb.md) | | V | Experimental |
| [AWS S3](./specs/s3.md) | | V | Experimental |
| [AWS SNS](./specs/sns.md) | | V | Experimental |
| [AWS SQS](./specs/sqs.md) | V | V | Experimental |
| [Azure Blob Storage](./specs/blobstorage.md) | | V | Experimental |
| [Azure CosmosDB](./specs/cosmosdb.md) | | V | Experimental |
| [Azure EventHubs](./specs/eventhubs.md) | V | V | Experimental |
| [Azure Service Bus Queues](./specs/servicebusqueues.md) | V | V | Experimental |
| [Azure SignalR](./specs/signalr.md) | | V | Experimental |
| [GCP Cloud Pub/Sub](./specs/gcppubsub.md) | V | V | Experimental |
| [GCP Storage Bucket](./specs/gcpbucket.md) | | V | Experimental |
View the full list in the [Components-Contrib Repo](https://github.com/dapr/components-contrib/tree/master/bindings).
## Input Bindings

@@ -1,21 +0,0 @@
# Azure Blob Storage Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.blobstorage
  metadata:
  - name: storageAccount
    value: myStorageAccountName
  - name: storageAccessKey
    value: ***********
  - name: container
    value: container1
```
`storageAccount` is the Blob Storage account name.
`storageAccessKey` is the Blob Storage access key.
`container` is the name of the Blob Storage container to write to.

@@ -1,27 +0,0 @@
# Azure CosmosDB Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.cosmosdb
  metadata:
  - name: url
    value: https://******.documents.azure.com:443/
  - name: masterKey
    value: *****
  - name: database
    value: db
  - name: collection
    value: collection
  - name: partitionKey
    value: message
```
`url` is the CosmosDB URL.
`masterKey` is the CosmosDB account master key.
`database` is the name of the CosmosDB database.
`collection` is the name of the collection inside the database.
`partitionKey` is the name of the partition key field to extract from the payload.

@@ -1,24 +0,0 @@
# AWS DynamoDB Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.aws.dynamodb
  metadata:
  - name: region
    value: us-west-2
  - name: accessKey
    value: *****************
  - name: secretKey
    value: *****************
  - name: table
    value: items
```
`region` is the AWS region.
`accessKey` is the AWS access key.
`secretKey` is the AWS secret key.
`table` is the DynamoDB table name.

@@ -1,21 +0,0 @@
# Azure EventHubs Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.eventhubs
  metadata:
  - name: connectionString # connectionString of EventHub, not namespace
    value: Endpoint=sb://*****************
  - name: consumerGroup # Optional
    value: group1
  - name: messageAge
    value: 5s # Optional. Golang duration
```
- `connectionString` is the [EventHubs connection string](https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature). Note that this is the EventHub itself and not the EventHubs namespace. Make sure to use the child EventHub shared access policy connection string.
- `consumerGroup` is the name of an [EventHubs consumerGroup](https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features#consumer-groups) to listen on.
- `messageAge` restricts delivery to messages no older than the specified age (a Go duration string, e.g. `5s`).

@@ -1,45 +0,0 @@
# GCP Storage Bucket Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.gcp.bucket
  metadata:
  - name: bucket
    value: mybucket
  - name: type
    value: service_account
  - name: project_id
    value: project_111
  - name: private_key_id
    value: *************
  - name: client_email
    value: name@domain.com
  - name: client_id
    value: '1111111111111111'
  - name: auth_uri
    value: https://accounts.google.com/o/oauth2/auth
  - name: token_uri
    value: https://oauth2.googleapis.com/token
  - name: auth_provider_x509_cert_url
    value: https://www.googleapis.com/oauth2/v1/certs
  - name: client_x509_cert_url
    value: https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com
  - name: private_key
    value: PRIVATE KEY
```
`bucket` is the bucket name.
`type` is the GCP credentials type.
`project_id` is the GCP project ID.
`private_key_id` is the GCP private key ID.
`client_email` is the GCP client email.
`client_id` is the GCP client ID.
`auth_uri` is the Google account OAuth endpoint.
`token_uri` is the Google account token URI.
`auth_provider_x509_cert_url` is the GCP credentials certificate URL.
`client_x509_cert_url` is the GCP credentials project x509 certificate URL.
`private_key` is the GCP credentials private key.

@@ -1,48 +0,0 @@
# GCP Cloud Pub/Sub Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.gcp.pubsub
  metadata:
  - name: topic
    value: topic1
  - name: subscription
    value: subscription1
  - name: type
    value: service_account
  - name: project_id
    value: project_111
  - name: private_key_id
    value: *************
  - name: client_email
    value: name@domain.com
  - name: client_id
    value: '1111111111111111'
  - name: auth_uri
    value: https://accounts.google.com/o/oauth2/auth
  - name: token_uri
    value: https://oauth2.googleapis.com/token
  - name: auth_provider_x509_cert_url
    value: https://www.googleapis.com/oauth2/v1/certs
  - name: client_x509_cert_url
    value: https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com
  - name: private_key
    value: PRIVATE KEY
```
`topic` is the Pub/Sub topic name.
`subscription` is the Pub/Sub subscription name.
`type` is the GCP credentials type.
`project_id` is the GCP project ID.
`private_key_id` is the GCP private key ID.
`client_email` is the GCP client email.
`client_id` is the GCP client ID.
`auth_uri` is the Google account OAuth endpoint.
`token_uri` is the Google account token URI.
`auth_provider_x509_cert_url` is the GCP credentials certificate URL.
`client_x509_cert_url` is the GCP credentials project x509 certificate URL.
`private_key` is the GCP credentials private key.

@@ -1,18 +0,0 @@
# HTTP Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.http
  metadata:
  - name: url
    value: http://something.com
  - name: method
    value: GET
```
`url` is the HTTP URL to invoke.
`method` is the HTTP verb to use for the request.
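Once deployed, an output binding like this one can be invoked through Dapr's bindings API (by default `POST http://localhost:3500/v1.0/bindings/<name>`). A minimal sketch of the request body, with a hypothetical JSON payload:
```json
{
  "data": {
    "message": "hello"
  }
}
```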

@@ -1,33 +0,0 @@
# Kafka Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.kafka
  metadata:
  - name: topics # Optional. Used for input bindings
    value: topic1,topic2
  - name: brokers
    value: localhost:9092,localhost:9093
  - name: consumerGroup
    value: group1
  - name: publishTopic # Optional. Used for output bindings
    value: topic3
  - name: authRequired # Required. Default: "true"
    value: "false"
  - name: saslUsername # Optional.
    value: "user"
  - name: saslPassword # Optional.
    value: "password"
```
`topics` is a comma-separated string of topics for an input binding.
`brokers` is a comma-separated string of Kafka brokers.
`consumerGroup` is the Kafka consumer group to listen on.
`publishTopic` is the topic to publish to for an output binding.
`authRequired` determines whether SASL authentication is used.
`saslUsername` is the SASL username for authentication. Only used if `authRequired` is set to `"true"`.
`saslPassword` is the SASL password for authentication. Only used if `authRequired` is set to `"true"`.

@@ -1,15 +0,0 @@
# Kubernetes Events Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.kubernetes
  metadata:
  - name: namespace
    value: default
```
`namespace` is the Kubernetes namespace to read events from. Default is `default`.

@@ -1,18 +0,0 @@
# MQTT Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.mqtt
  metadata:
  - name: url
    value: mqtt[s]://[username][:password]@host.domain[:port]
  - name: topic
    value: topic1
```
`url` is the MQTT broker URL.
`topic` is the topic to listen on or send events to.
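For example, a broker URL following this scheme with every optional part filled in might look like the following (hypothetical credentials and host):
```yml
- name: url
  value: mqtts://admin:secret@broker.example.com:8883
```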

@@ -1,24 +0,0 @@
# RabbitMQ Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.rabbitmq
  metadata:
  - name: queueName
    value: queue1
  - name: host
    value: amqp://guest:guest@localhost:5672
  - name: durable
    value: true
  - name: deleteWhenUnused
    value: false
```
`queueName` is the RabbitMQ queue name.
`host` is the RabbitMQ host address.
`durable` tells RabbitMQ to persist messages in storage.
`deleteWhenUnused` enables or disables auto-delete.

@@ -1,18 +0,0 @@
# Redis Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.redis
  metadata:
  - name: redisHost
    value: <address>:6379
  - name: redisPassword
    value: **************
```
`redisHost` is the Redis host address.
`redisPassword` is the Redis password.

@@ -1,24 +0,0 @@
# AWS S3 Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.aws.s3
  metadata:
  - name: region
    value: us-west-2
  - name: accessKey
    value: *****************
  - name: secretKey
    value: *****************
  - name: bucket
    value: mybucket
```
`region` is the AWS region.
`accessKey` is the AWS access key.
`secretKey` is the AWS secret key.
`bucket` is the name of the S3 bucket to write to.

@@ -1,18 +0,0 @@
# Azure Service Bus Queues Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.servicebusqueues
  metadata:
  - name: connectionString
    value: sb://************
  - name: queueName
    value: queue1
```
`connectionString` is the Service Bus connection string.
`queueName` is the Service Bus queue name.

@@ -1,46 +0,0 @@
# Azure SignalR Binding Spec
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.azure.signalr
  metadata:
  - name: connectionString
    value: Endpoint=https://<your-azure-signalr>.service.signalr.net;AccessKey=<your-access-key>;Version=1.0;
  - name: hub # Optional
    value: <hub name>
```
The metadata `connectionString` contains the Azure SignalR connection string.
The optional `hub` metadata value defines the hub to which the message will be sent. The hub can also be defined dynamically per message by setting the `hub` metadata key when publishing to the output binding.
## Additional information
By default the Azure SignalR output binding broadcasts messages to all connected users. To narrow the audience there are two options, both configurable in the `metadata` property of the message:
- group: will send the message to a specific Azure SignalR group
- user: will send the message to a specific Azure SignalR user
Applications publishing to an Azure SignalR output binding should send a message with the following contract:
```json
{
  "data": {
    "Target": "<enter message name>",
    "Arguments": [
      {
        "sender": "dapr",
        "text": "Message from dapr output binding"
      }
    ]
  },
  "metadata": {
    "group": "chat123"
  }
}
```
For more information on integrating Azure SignalR into a solution, see the [documentation](https://docs.microsoft.com/en-us/azure/azure-signalr/).

@@ -1,24 +0,0 @@
# AWS SNS Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.aws.sns
  metadata:
  - name: region
    value: us-west-2
  - name: accessKey
    value: *****************
  - name: secretKey
    value: *****************
  - name: topicArn
    value: mytopic
```
`region` is the AWS region.
`accessKey` is the AWS access key.
`secretKey` is the AWS secret key.
`topicArn` is the ARN of the SNS topic.

@@ -1,24 +0,0 @@
# AWS SQS Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.aws.sqs
  metadata:
  - name: region
    value: us-west-2
  - name: accessKey
    value: *****************
  - name: secretKey
    value: *****************
  - name: queueName
    value: items
```
`region` is the AWS region.
`accessKey` is the AWS access key.
`secretKey` is the AWS secret key.
`queueName` is the SQS queue name.

@@ -1,24 +0,0 @@
# Twilio SMS Binding Spec
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <name>
spec:
  type: bindings.twilio.sms
  metadata:
  - name: toNumber # required.
    value: 111-111-1111
  - name: fromNumber # required.
    value: 222-222-2222
  - name: accountSid # required.
    value: *****************
  - name: authToken # required.
    value: *****************
```
`toNumber` is the target number to send the SMS to.
`fromNumber` is the sender phone number.
`accountSid` is the Twilio account SID.
`authToken` is the Twilio auth token.

@@ -1,118 +0,0 @@
# Redis and Dapr
Dapr can use Redis in two ways:
1. For state persistence and restoration
2. For enabling pub/sub async style message delivery
## Creating a Redis Store
Dapr can use any Redis instance - containerized, running on your local dev machine, or a managed cloud service. If you already have a Redis store, move on to the [Configuration](#configuration) section.
### Creating a Redis Cache in your Kubernetes Cluster using Helm
We can use [Helm](https://helm.sh/) to quickly create a Redis instance in our Kubernetes cluster. This approach requires [installing Helm v3](https://github.com/helm/helm#install).
1. Install Redis into your cluster:
```bash
helm install redis stable/redis
```
> Note that Dapr's pub/sub functionality requires Redis version 5 or later. If you intend to use Redis only as a state store (and not for pub/sub), an earlier version can be used.
2. Run `kubectl get pods` to see the Redis containers now running in your cluster.
3. Add `redis-master:6379` as the `redisHost` in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisHost
  value: redis-master:6379
```
4. Next, we'll get our Redis password, which is slightly different depending on the OS we're using:
- **Windows**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" > encoded.b64`, which creates a file containing your encoded password. Next, run `certutil -decode encoded.b64 password.txt`, which puts your Redis password in a text file called `password.txt`. Copy the password and delete the two files.
- **Linux/macOS**: Run `kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode` and copy the printed password.
Add this password as the `redisPassword` value in your [redis.yaml](#configuration) file. For example:
```yaml
metadata:
- name: redisPassword
  value: lhDOkwTlp0
```
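The Linux/macOS decode step can be sanity-checked locally with a sample value; the encoded string below is illustrative, not a real secret:
```shell
# decode a sample base64-encoded password, exactly as kubectl returns it
encoded="bGhET2t3VGxwMA=="
echo "$encoded" | base64 --decode
# prints: lhDOkwTlp0
```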
### Creating an Azure Managed Redis Cache
**Note**: this approach requires an Azure subscription.
1. Open [this link](https://ms.portal.azure.com/#create/Microsoft.Cache) to start the Azure Redis Cache creation flow. Log in if necessary.
2. Fill out necessary information and **check the "Unblock port 6379" box**, which will allow us to persist state without SSL.
3. Click "Create" to kick off deployment of your Redis instance.
4. Once your instance is created, you'll need to grab your access key. Navigate to "Access Keys" under "Settings" and copy your key.
5. Run `kubectl get svc` and copy the cluster IP of your `redis-master`.
6. Finally, we need to add our key and our host to a `redis.yaml` file that Dapr can apply to our cluster. If you're running a sample, you'll add the host and key to the provided `redis.yaml`. If you're creating a project from the ground up, you'll create a `redis.yaml` file as specified in [Configuration](#configuration). Set the `redisHost` key to `[IP FROM PREVIOUS STEP]:6379` and the `redisPassword` key to the key you copied in step 4. **Note:** In a production-grade application, follow [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets.
> **NOTE:** Dapr pub/sub uses [Redis Streams](https://redis.io/topics/streams-intro) that was introduced by Redis 5.0, which isn't currently available on Azure Managed Redis Cache. Consequently, you can use Azure Managed Redis Cache only for state persistence.
### Other ways to Create a Redis Database
- [AWS Redis](https://aws.amazon.com/redis/)
- [GCP Cloud MemoryStore](https://cloud.google.com/memorystore/)
## Configuration
Dapr can use Redis as a `statestore` component (for state persistence and retrieval) or as a `messagebus` component (for pub/sub). The following YAML files demonstrate how to define each. **Note:** the YAML files below illustrate secrets in plain text. In a production-grade application, follow the [secret management](https://github.com/dapr/docs/blob/master/concepts/components/secrets.md) instructions to securely manage your secrets.
### Configuring Redis for State Persistence and Retrieval
Create a file called `redis-state.yaml` and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: <HOST>
  - name: redisPassword
    value: <PASSWORD>
```
### Configuring Redis for Pub/Sub
Create a file called `redis-pubsub.yaml` and paste the following:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus
spec:
  type: pubsub.redis
  metadata:
  - name: redisHost
    value: <HOST>
  - name: redisPassword
    value: <PASSWORD>
```
## Apply the configuration
### Kubernetes
```bash
kubectl apply -f redis-state.yaml
kubectl apply -f redis-pubsub.yaml
```
### Standalone
By default the Dapr CLI creates a local Redis instance when you run `dapr init`. However, if you want to configure a different Redis instance, create a directory named `components` in the root path of your Dapr binary and then copy your `redis.yaml` into that directory.
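As a sketch, assuming you run Dapr from the current directory and want it to use an external Redis instance (the host and password below are hypothetical):
```shell
# create the components directory that Dapr reads component definitions from
mkdir -p components

# write a redis.yaml pointing at the external instance
cat > components/redis.yaml <<'EOF'
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: my-redis.example.com:6379
  - name: redisPassword
    value: MyPassword
EOF
```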

@@ -1,81 +0,0 @@
# Secrets
Components can reference secrets for the `spec.metadata` section.
In order to reference a secret, you need to set the `auth.secretStore` field to specify the name of the secret store that holds the secrets.
When running in Kubernetes, if the `auth.secretStore` is empty, the Kubernetes secret store is assumed.
## Examples
Using plain text:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: MyPassword
```
Using a Kubernetes secret:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    secretKeyRef:
      name: redis-secret
      key: redis-password
auth:
  secretStore: kubernetes
```
The above example tells Dapr to use the `kubernetes` secret store, extract a secret named `redis-secret` and assign the value of the `redis-password` key in the secret to the `redisPassword` field in the Component.
### Creating a secret and referencing it in a Component
The following example shows you how to create a Kubernetes secret to hold the connection string for an Event Hubs binding.
First, create the Kubernetes secret:
```bash
kubectl create secret generic eventhubs-secret --from-literal=connectionString=*********
```
Next, reference the secret in your binding:
```yml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: eventhubs
spec:
  type: bindings.azure.eventhubs
  metadata:
  - name: connectionString
    secretKeyRef:
      name: eventhubs-secret
      key: connectionString
```
Finally, apply the component to the Kubernetes cluster:
```bash
kubectl apply -f ./eventhubs.yaml
```
All done!