Latest changes from v1.1

Ori Zohar 2021-04-22 13:38:06 -07:00
commit 98ab13da78
23 changed files with 215 additions and 71 deletions

.github/CODEOWNERS (new file)
View File

@ -0,0 +1,10 @@
# Documentation and examples for what this does:
#
# https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners
# This file is a list of rules; the last matching rule takes precedence
# All of the people (and only those people) from the matching rule will be notified
# Default rule: anything that doesn't match a more specific rule goes here
* @dapr/approvers-docs @dapr/maintainers-docs

View File

@ -80,7 +80,7 @@ All that's left now is to invoke the output bindings endpoint on a running Dapr
You can do so using HTTP:
```bash
curl -X POST -H http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
curl -X POST -H 'Content-Type: application/json' http://localhost:3500/v1.0/bindings/myevent -d '{ "data": { "message": "Hi!" }, "operation": "create" }'
```
As seen above, you invoked the `/bindings` endpoint with the name of the binding to invoke; in our case it's `myevent`.

View File

@ -480,7 +480,7 @@ dapr --app-id app2 run -- php app2.php
Dapr automatically takes the data sent on the publish request and wraps it in a CloudEvent 1.0 envelope.
If you want to use your own custom CloudEvent, make sure to specify the content type as `application/cloudevents+json`.
See info about content types [here](#Content-Types).
Read about content types [here](#content-types), and about the [Cloud Events message format]({{< ref "pubsub-overview.md#cloud-events-message-format" >}}).
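As a sketch, publishing a pre-formed CloudEvent over HTTP could look like the following; the pubsub component name `pubsub` and the topic `mytopic` are placeholders for your own configuration:
```bash
curl -X POST http://localhost:3500/v1.0/publish/pubsub/mytopic \
  -H 'Content-Type: application/cloudevents+json' \
  -d '{
        "specversion": "1.0",
        "type": "com.example.event",
        "source": "myapp",
        "id": "5929aaac-a5e2-4ca1-859c-edfe73f11565",
        "datacontenttype": "application/json",
        "data": { "message": "Hi!" }
      }'
```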
## Next steps

View File

@ -1,7 +1,8 @@
---
type: docs
title: "Autoscaling a Dapr app with KEDA"
linkTitle: "Autoscale"
linkTitle: "Autoscale with KEDA"
description: "How to configure your Dapr application to autoscale using KEDA"
weight: 2000
---
@ -9,7 +10,7 @@ Dapr, with its modular building-block approach, along with the 10+ different [pu
For Kubernetes, Dapr integrates with [KEDA](https://github.com/kedacore/keda), an event-driven autoscaler for Kubernetes. Many of Dapr's pub/sub components overlap with the scalers provided by KEDA, so it's easy to configure your Dapr deployment on Kubernetes to autoscale based on back pressure using KEDA.
This how-to walks through the configuration of a scalable Dapr application along with the back pressure on Kafka topic, however you can apply this approach to [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
This how-to walks through configuring a scalable Dapr application along with back pressure on a Kafka topic; however, you can apply this approach to any of the [pub/sub components]({{< ref pubsub >}}) offered by Dapr.
## Install KEDA
@ -138,4 +139,4 @@ All done!
Now that the `ScaledObject` KEDA object is configured, your deployment will scale based on the lag of the Kafka topic. More information on configuring KEDA for Kafka topics is available [here](https://keda.sh/docs/2.0/scalers/apache-kafka/).
You can now start publishing messages to your Kafka topic `demo-topic` and watch the pods autoscale when the lag threshold is higher than `5` topics, as we have defined in the KEDA scaler manifest. You can publish messages to the Kafka Dapr component by using the Dapr [Publish](https://github.com/dapr/CLI#publishsubscribe) CLI command
You can now start publishing messages to your Kafka topic `demo-topic` and watch the pods autoscale when the topic lag rises above the threshold of `5` that we defined in the KEDA scaler manifest. You can publish messages to the Kafka Dapr component by using the Dapr [Publish]({{< ref dapr-publish >}}) CLI command.
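A sketch of such a publish, assuming an already-running Dapr app with ID `myapp` and a pub/sub component named `kafka-pubsub` (both placeholder names):
```bash
dapr publish --publish-app-id myapp --pubsub kafka-pubsub --topic demo-topic --data '{"message": "test"}'
```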

View File

@ -0,0 +1,10 @@
---
type: docs
title: "Dapr extension for Azure Functions runtime"
linkTitle: "Azure Functions"
description: "Access Dapr capabilities from your Functions runtime application"
weight: 3000
---
Dapr integrates with the Azure Functions runtime via an extension that lets a function seamlessly interact with Dapr. Azure Functions provides an event-driven programming model and Dapr provides cloud-native building blocks. With this extension, you can bring both together for serverless and event-driven apps. For more information, read [Azure Functions extension for Dapr](https://cloudblogs.microsoft.com/opensource/2020/07/01/announcing-azure-functions-extension-for-dapr/) and visit the [Azure Functions extension](https://github.com/dapr/azure-functions-extension) repo to try out the samples.

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Authenticating to services"
linkTitle: "Authenticating to services"
weight: 3000
title: "Integrations with cloud providers"
linkTitle: "Cloud providers"
weight: 5000
description: "Information about authentication and configuration for various cloud providers"
---

View File

@ -4,6 +4,8 @@ title: "Authenticating to AWS"
linkTitle: "Authenticating to AWS"
weight: 10
description: "Information about authentication and configuration options for AWS"
aliases:
- /developing-applications/integrations/authenticating/authenticating-aws/
---
All Dapr components using various AWS services (DynamoDB, SQS, S3, etc.) use a standardized set of attributes for configuration. These are described below.
@ -12,11 +14,11 @@ All Dapr components using various AWS services (DynamoDB, SQS, S3, etc) use a st
None of the following attributes are required, since the AWS SDK may be configured using the default provider chain described in the link above. It's important to test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.
`region`: Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example) this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec.
`endpoint`: The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
`accessKey`: AWS Access key id.
`secretKey`: AWS Secret access key. Use together with `accessKey` to explicitly specify credentials.
`sessionToken`: AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required.
- `region`: Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example) this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec.
- `endpoint`: The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
- `accessKey`: AWS Access key id.
- `secretKey`: AWS Secret access key. Use together with `accessKey` to explicitly specify credentials.
- `sessionToken`: AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required.
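As an illustrative sketch, these attributes plug into a component manifest like any other metadata. Here is a hypothetical S3 output binding with explicit credentials; the component name, bucket, and credential values are placeholders:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mybucket
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: mybucket
  - name: region
    value: us-east-1
  - name: accessKey
    value: <AWS_ACCESS_KEY_ID>
  - name: secretKey
    value: <AWS_SECRET_ACCESS_KEY>
  - name: sessionToken
    value: <AWS_SESSION_TOKEN> # only needed with temporary credentials
```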
## Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as [Kiam](https://github.com/uswitch/kiam) or [Kube2iam](https://github.com/jtblin/kube2iam). If running on AWS EKS, you can [link an IAM role to a Kubernetes service account](https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html), which your pod can use.
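On EKS, that link is expressed as an annotation on the Kubernetes service account the pod runs under; a sketch with placeholder names and role ARN:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-dapr-app
  annotations:
    # IAM role assumed by pods using this service account
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-dapr-role
```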

View File

@ -1,7 +1,7 @@
---
type: docs
title: "Dapr's gRPC Interface"
linkTitle: "gRPC"
linkTitle: "gRPC interface"
weight: 1000
description: "Use the Dapr gRPC API in your application"
type: docs
@ -86,7 +86,7 @@ data := []byte("ping")
// create the client
client, err := dapr.NewClient()
if err != nil {
logger.Panic(err)
log.Panic(err)
}
defer client.Close()
```
@ -95,11 +95,11 @@ defer client.Close()
```go
// save state with the key key1
err = client.SaveStateData(ctx, "statestore", "key1", "1", data)
err = client.SaveState(ctx, "statestore", "key1", data)
if err != nil {
logger.Panic(err)
log.Panic(err)
}
logger.Println("data saved")
log.Println("data saved")
```
Hooray!
@ -135,6 +135,7 @@ import (
```go
// server is our user app
type server struct {
pb.UnimplementedAppCallbackServer
}
// EchoMethod is a simple demo method to invoke
@ -183,9 +184,9 @@ func (s *server) OnBindingEvent(ctx context.Context, in *pb.BindingEventRequest)
}
// This method is fired whenever a message is published to a topic that the app has subscribed to. Dapr sends published messages in a CloudEvents 0.3 envelope.
func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*empty.Empty, error) {
func (s *server) OnTopicEvent(ctx context.Context, in *pb.TopicEventRequest) (*pb.TopicEventResponse, error) {
fmt.Println("Topic message arrived")
return &empty.Empty{}, nil
return &pb.TopicEventResponse{}, nil
}
```
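To make this server callable by Dapr, it has to be registered with a gRPC server listening on the app's gRPC port. A minimal sketch, assuming the `pb` alias from the import block above plus `net`, `log`, `fmt`, and `google.golang.org/grpc` are imported, and `50001` as an illustrative port:
```go
func main() {
	// listen on the port Dapr will use to call back into the app
	lis, err := net.Listen("tcp", ":50001")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	// register our struct as the AppCallback implementation
	s := grpc.NewServer()
	pb.RegisterAppCallbackServer(s, &server{})

	fmt.Println("app callback server listening on :50001")
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```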

View File

@ -0,0 +1,9 @@
---
type: docs
title: "Build workflows with Logic Apps"
linkTitle: "Workflows"
description: "Learn how to build workflows using Dapr Workflows and Logic Apps"
weight: 4000
---
To enable developers to easily build workflow applications that use Dapr's capabilities, including diagnostics and multi-language support, you can use Dapr Workflows. Dapr integrates with workflow engines such as the Logic Apps runtime. For more information, read [cloud-native workflows using Dapr and Logic Apps](https://cloudblogs.microsoft.com/opensource/2020/05/26/announcing-cloud-native-workflows-dapr-logic-apps/) and visit the [Dapr workflow](https://github.com/dapr/workflows) repo to try out the samples.

View File

@ -53,7 +53,7 @@ dapr --version
Output should look like this:
```
CLI version: 1.1.0
Runtime version: 1.1.1
Runtime version: 1.1.2
```
### Step 4: Verify containers are running
@ -96,7 +96,7 @@ bin components config.yaml
{{% /codetab %}}
{{% codetab %}}
Open `%USERPROFILE%\.dapr\` in file explorer:
Using Command Prompt (not PowerShell), open `%USERPROFILE%\.dapr\` in file explorer:
```cmd
explorer "%USERPROFILE%\.dapr\"

View File

@ -122,7 +122,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
```bash
helm upgrade --install dapr dapr/dapr \
--version=1.1.1 \
--version=1.1.2 \
--namespace dapr-system \
--create-namespace \
--wait
@ -132,7 +132,7 @@ The latest Dapr helm chart no longer supports Helm v2. Please migrate from Helm
```bash
helm upgrade --install dapr dapr/dapr \
--version=1.1.1 \
--version=1.1.2 \
--namespace dapr-system \
--create-namespace \
--set global.ha.enabled=true \

View File

@ -59,7 +59,9 @@ The CPU and memory limits above account for the fact that Dapr is intended to a
## Highly-available mode
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace.
When deploying Dapr in a production-ready configuration, it's recommended to deploy with a highly available (HA) configuration of the control plane, which creates 3 replicas of each control plane pod in the dapr-system namespace. This configuration allows the Dapr control plane to survive node failures and other outages.
HA mode can be enabled with both the [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}) and with [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}}).
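For example, with the Dapr CLI this is a single flag at install time, and with Helm it's the `global.ha.enabled` chart value; a sketch:
```bash
# Dapr CLI: install the control plane in HA mode
dapr init -k --enable-ha=true

# Helm: enable HA via chart values
helm upgrade --install dapr dapr/dapr \
  --version=1.1.2 \
  --namespace dapr-system \
  --create-namespace \
  --set global.ha.enabled=true \
  --wait
```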
## Deploying Dapr with Helm

View File

@ -11,17 +11,27 @@ description: "Follow these steps to upgrade Dapr on Kubernetes and ensure a smoo
- [Dapr CLI]({{< ref install-dapr-cli.md >}})
- [Helm 3](https://github.com/helm/helm/releases) (if using Helm)
## Upgrade existing cluster to 1.1.1
## Upgrade existing cluster to 1.1.2
There are two ways to upgrade the Dapr control plane on a Kubernetes cluster: using the Dapr CLI or using Helm.
### Dapr CLI
The example below shows how to upgrade to version 1.1.1:
The example below shows how to upgrade to version 1.1.2:
```bash
dapr upgrade -k --runtime-version=1.1.1
dapr upgrade -k --runtime-version=1.1.2
```
{{% alert title="Note" color="warning" %}}
If you are using Dapr CLI v1.1.0, there is a known issue where mTLS will be enabled by default, even on clusters where it is disabled. If your cluster has mTLS disabled and you would like it to stay disabled, add `--set global.mtls.enabled=false` to your upgrade command:
```bash
dapr upgrade -k --runtime-version 1.1.1 --set global.mtls.enabled=false
```
You can track the issue here: [#664](https://github.com/dapr/cli/issues/664).
{{% /alert %}}
You can provide all the available Helm chart configurations using the Dapr CLI.
See [here](https://github.com/dapr/cli#supplying-helm-values) for more info.
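For example, a chart value such as `global.logAsJson` can be passed through the CLI's `--set` flag during the upgrade:
```bash
dapr upgrade -k --runtime-version 1.1.2 --set global.logAsJson=true
```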
@ -43,7 +53,7 @@ To resolve this issue please run the following command to upgrade the CustomResourc
kubectl replace -f https://raw.githubusercontent.com/dapr/dapr/5a15b3e0f093d2d0938b12f144c7047474a290fe/charts/dapr/crds/configuration.yaml
```
Then proceed with the `dapr upgrade --runtime-version 1.1.1 -k` command as above.
Then proceed with the `dapr upgrade --runtime-version 1.1.2 -k` command as above.
### Helm

View File

@ -25,11 +25,11 @@ description: "Follow these steps to upgrade Dapr in self-hosted mode and ensure
dapr init
```
1. Ensure you are using the latest version of Dapr (v1.1.1) with:
1. Ensure you are using the latest version of Dapr (v1.1.2) with:
```bash
$ dapr --version
CLI version: 1.1.0
Runtime version: 1.1.1
Runtime version: 1.1.2
```

View File

@ -35,6 +35,7 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Mar 4th 2021 | 1.0.1</br>| 1.0.1 | Java 1.0.2 </br>Go 1.0.0 </br>PHP 1.0.0 </br>Python 1.0.0 </br>.NET 1.0.0 | 0.6.0 | Supported |
| Apr 1st 2021 | 1.1.0</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Supported |
| Apr 6th 2021 | 1.1.1</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Supported |
| Apr 16th 2021 | 1.1.2</br> | 1.1.0 | Java 1.0.2 </br>Go 1.1.0 </br>PHP 1.0.0 </br>Python 1.1.0 </br>.NET 1.1.0 | 0.6.0 | Supported (current) |
## Upgrade paths
After the 1.0 release of the runtime, there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example, an upgrade from v1.0 to v1.2 may need to pass through v1.1.
@ -46,10 +47,10 @@ General guidance on upgrading can be found for [self hosted mode]({{<ref self-ho
| Current Runtime version | Must upgrade through | Target Runtime version |
|--------------------------|-----------------------|------------------------- |
| 0.11 | N/A | 1.0.1 |
| | 1.0.1 | 1.1.1 |
| | 1.0.1 | 1.1.2 |
| 1.0-rc1 to 1.0-rc4 | N/A | 1.0.1 |
| 1.0.0 or 1.0.1 | N/A | 1.1.1 |
| 1.1.0 | N/A | 1.1.1 |
| 1.0.0 or 1.0.1 | N/A | 1.1.2 |
| 1.1.0 or 1.1.1 | N/A | 1.1.2 |
## Features and deprecations
There is a process for announcing feature deprecations. Deprecations are applied two (2) releases after the release in which they were announced. For example, Feature X is announced as deprecated in the 1.0.0 release notes and will then be removed in 1.2.0.

View File

@ -9,9 +9,16 @@ description: "Common issues and problems faced when running Dapr applications"
## I don't see the Dapr sidecar injected to my pod
There could be several reasons why a sidecar will not be injected into a pod.
First, check your Deployment or Pod YAML file, and check that you have the following annotations in the right place:
First, check your deployment or pod YAML file, and check that you have the following annotations in the right place:
Sample deployment:
```yaml
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
```
### Sample deployment:
```yaml
apiVersion: apps/v1
@ -31,9 +38,9 @@ spec:
labels:
app: node
annotations:
<b>dapr.io/enabled: "true"</b>
<b>dapr.io/app-id: "nodeapp"</b>
<b>dapr.io/app-port: "3000"</b>
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
spec:
containers:
- name: node
@ -84,7 +91,7 @@ The most common cause of this failure is that a component (such as a state store
To diagnose the root cause:
- Significantly increase the liveness probe delay - [link]({{< ref "kubernetes-overview.md" >}})
- Significantly increase the liveness probe delay - [link]({{< ref "kubernetes-annotations.md" >}})
- Set the log level of the sidecar to debug - [link]({{< ref "logs-troubleshooting.md#setting-the-sidecar-log-level" >}})
- Watch the logs for meaningful information - [link]({{< ref "logs-troubleshooting.md#viewing-logs-on-kubernetes" >}})
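Both the probe delay and the log level are controlled through sidecar annotations on the deployment; a sketch, where the delay value is illustrative:
```yaml
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "nodeapp"
  dapr.io/app-port: "3000"
  # give slow-initializing components more time before the first probe
  dapr.io/sidecar-liveness-probe-delay-seconds: "30"
  # surface debug-level logs from the sidecar
  dapr.io/log-level: "debug"
```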
@ -162,9 +169,9 @@ In Kubernetes, make sure the `dapr.io/app-port` annotation is specified:
```yaml
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
dapr.io/enabled: "true"
dapr.io/app-id: "nodeapp"
dapr.io/app-port: "3000"
```
If using Dapr Standalone and the Dapr CLI, make sure you pass the `--app-port` flag to the `dapr run` command.
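A sketch of such a run command; the app ID, port, and launch command are placeholders for your own app:
```bash
dapr run --app-id nodeapp --app-port 3000 -- node app.js
```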

View File

@ -36,12 +36,12 @@ dapr upgrade -k
### Upgrade specified version of Dapr runtime in Kubernetes
```bash
dapr upgrade -k --runtime-version 1.1.1
dapr upgrade -k --runtime-version 1.1.2
```
### Upgrade specified version of Dapr runtime in Kubernetes with value set
```bash
dapr upgrade -k --runtime-version 1.1.1 --set global.logAsJson=true
dapr upgrade -k --runtime-version 1.1.2 --set global.logAsJson=true
```
# Related links

View File

@ -54,6 +54,7 @@ This component supports **output binding** with the following operations:
- `create` : [Create blob](#create-blob)
- `get` : [Get blob](#get-blob)
- `delete` : [Delete blob](#delete-blob)
### Create blob
@ -201,6 +202,82 @@ To perform a get blob operation, invoke the Azure Blob Storage binding with a `P
The response body contains the value stored in the blob object.
### Delete blob
To perform a delete blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
```json
{
"operation": "delete",
"metadata": {
"blobName": "myblob"
}
}
```
#### Examples
##### Delete blob
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
##### Delete blob snapshots only
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"DeleteSnapshotOptions\": \"only\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "DeleteSnapshotOptions": "only" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
##### Delete blob including snapshots
{{< tabs Windows Linux >}}
{{% codetab %}}
```bash
curl -d '{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"DeleteSnapshotOptions\": \"include\" }}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{% codetab %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "DeleteSnapshotOptions": "include" }}' \
http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /codetab %}}
{{< /tabs >}}
#### Response
An HTTP 204 (No Content) with an empty body will be returned if successful.
## Metadata information
By default, the Azure Blob Storage output binding auto-generates a UUID as the blob filename and does not assign any system or custom metadata to it. This behavior is configurable via the metadata property of the message (all fields are optional).
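For example, a create operation could set the blob name and a content type explicitly; `blobName` appears elsewhere on this page, while the `ContentType` entry here is an assumption for illustration:
```json
{
  "operation": "create",
  "data": "Hello World",
  "metadata": {
    "blobName": "myblob.txt",
    "ContentType": "text/plain"
  }
}
```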

View File

@ -77,7 +77,7 @@ To run without Docker, see the getting started guide [here](https://kafka.apache
{{% /codetab %}}
{{% codetab %}}
To run Kafka on Kubernetes, you can use the [Helm Chart](https://github.com/helm/charts/tree/master/incubator/kafka#installing-the-chart).
To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](https://strimzi.io/docs/operators/latest/quickstart.html#ref-install-prerequisites-str).
{{% /codetab %}}
{{< /tabs >}}

View File

@ -8,7 +8,7 @@ aliases:
---
## Component format
To setup Azure Event Hubs pubsub create a component of type `pubsub.azure.servicebus`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
To setup Azure Service Bus pubsub create a component of type `pubsub.azure.servicebus`. See [this guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pubsub configuration.
```yaml
apiVersion: dapr.io/v1alpha1
@ -50,6 +50,10 @@ spec:
value: 30
- name: connectionRecoveryInSec # Optional
value: 2
- name: publishMaxRetries # Optional
value: 5
- name: publishInitialRetryInternalInMs # Optional
value: 500
```
> __NOTE:__ The above settings are shared across all topics that use this component.
@ -62,7 +66,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| connectionString | Y | Connection-string for the Event Hubs | "`Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}`"
| connectionString | Y | Connection-string for the Service Bus | "`Endpoint=sb://{ServiceBusNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={ServiceBus}`"
| timeoutInSec | N | Timeout for sending messages and management operations. Default: `60` |`30`
| handlerTimeoutInSec| N | Timeout for invoking the app handler. Default: `60` | `30`
| disableEntityManagement | N | When set to true, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
@ -77,6 +81,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| autoDeleteOnIdleInSec | N |Time in seconds to wait before auto deleting messages. | `10`
| maxReconnectionAttempts | N |Defines the maximum number of reconnect attempts. Default: `30` | `30`
| connectionRecoveryInSec | N |Time in seconds to wait between connection recovery attempts. Defaults: `2` | `2`
| publishMaxRetries | N | The maximum number of retries when Azure Service Bus responds with "too busy" in order to throttle messages. Default: `5` | `5`
| publishInitialRetryInternalInMs | N | Time in milliseconds for the initial exponential backoff when Azure Service Bus throttles messages. Default: `500` | `500`
## Create an Azure Service Bus

View File

@ -24,23 +24,25 @@ spec:
metadata:
- name: type
value: service_account
- name: project_id
- name: projectId
value: <PROJECT_ID> # replace
- name: private_key_id
- name: identityProjectId
value: <IDENTITY_PROJECT_ID> # replace
- name: privateKeyId
value: <PRIVATE_KEY_ID> #replace
- name: client_email
- name: clientEmail
value: <CLIENT_EMAIL> #replace
- name: client_id
- name: clientId
value: <CLIENT_ID> # replace
- name: auth_uri
- name: authUri
value: https://accounts.google.com/o/oauth2/auth
- name: token_uri
- name: tokenUri
value: https://oauth2.googleapis.com/token
- name: auth_provider_x509_cert_url
- name: authProviderX509CertUrl
value: https://www.googleapis.com/oauth2/v1/certs
- name: client_x509_cert_url
- name: clientX509CertUrl
value: https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com #replace PROJECT_NAME
- name: private_key
- name: privateKey
value: <PRIVATE_KEY> # replace x509 cert
- name: disableEntityManagement
value: "false"
@ -53,19 +55,21 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| type | Y | GCP credentials type | `service_account`
| project_id | Y | GCP project id| `projectId`
| private_key_id | Y | GCP private key id | `"privateKeyId"`
| private_key | Y | GCP credentials private key. Replace with x509 cert | `12345-12345`
| client_email | Y | GCP client email | `"client@email.com"`
| client_id | Y | GCP client id | `0123456789-0123456789`
| auth_uri | Y | Google account OAuth endpoint | `https://accounts.google.com/o/oauth2/auth`
| token_uri | Y | Google account token uri | `https://oauth2.googleapis.com/token`
| auth_provider_x509_cert_url | Y | GCP credentials cert url | `https://www.googleapis.com/oauth2/v1/certs`
| client_x509_cert_url | Y | GCP credentials project x509 cert url | `https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com`
| type | N | GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`
| projectId | Y | GCP project id| `myproject-123`
| identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"`
| privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"`
| privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`
| clientEmail | N | If using explicit credentials, this field should contain the `client_email` field from the service account json | `"myservice@myproject-123.iam.gserviceaccount.com"`
| clientId | N | If using explicit credentials, this field should contain the `client_id` field from the service account json | `106234234234`
| authUri | N | If using explicit credentials, this field should contain the `auth_uri` field from the service account json | `https://accounts.google.com/o/oauth2/auth`
| tokenUri | N | If using explicit credentials, this field should contain the `token_uri` field from the service account json | `https://oauth2.googleapis.com/token`
| authProviderX509CertUrl | N | If using explicit credentials, this field should contain the `auth_provider_x509_cert_url` field from the service account json | `https://www.googleapis.com/oauth2/v1/certs`
| clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com`
| disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
## Create a GCP Pub/Sub
You can use either "explicit" or "implicit" credentials to configure access to your GCP pubsub instance. If using explicit credentials, most fields are required. Implicit relies on Dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) which has the necessary permissions to access pubsub. In implicit mode, only the `projectId` attribute is needed; all others are optional.
Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up Google Cloud Pub/Sub system.
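In implicit mode the component spec collapses to a sketch like the following; the component name and project ID are placeholders:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcp-pubsub
spec:
  type: pubsub.gcp.pubsub
  version: v1
  metadata:
  # credentials come from the KSA-to-GSA mapping, so only the project is needed
  - name: projectId
    value: myproject-123
```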

View File

@ -41,9 +41,12 @@ spec:
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}
{{% alert title="Note" color="primary" %}}
Currently this component does not support state management for actors
{{% /alert %}}
If you wish to use SQL Server as an [actor state store]({{< ref "state_api.md#configuring-state-store-for-actors" >}}), append the following to the yaml.
```yaml
- name: actorStateStore
value: "true"
```
## Spec metadata fields
@ -55,6 +58,7 @@ Currently this component does not support state management for actors
| keyLength | N | The max length of key. Used along with `"string"` keytype. Defaults to `"200"` | `"200"`
| schema | N | The schema to use. Defaults to `"dbo"` | `"dapr"`,`"dbo"`
| indexedProperties | N | List of IndexedProperties. | `"[{"ColumnName": "column", "Property": "property", "Type": "type"}]"`
| actorStateStore | N | Indicates that Dapr should configure this component for the actor state store ([more information]({{< ref "state_api.md#configuring-state-store-for-actors" >}})). | `"true"`
## Create Azure SQL instance

@ -1 +1 @@
Subproject commit 036fc63bf0a919843827e263ec287d55e3188b7b
Subproject commit e86abb5f4cecb77fb9e08ca9dd02832e312b04a8