From 3711926c5f98bc3d5d4ccfb6f602cd539bf3c5b9 Mon Sep 17 00:00:00 2001 From: Chris Gillum Date: Mon, 1 May 2023 18:20:51 -0700 Subject: [PATCH 01/20] Update internal actor type documentation Signed-off-by: Chris Gillum --- .../workflow/workflow-architecture.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md index 412257dba..da8bf0a44 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-architecture.md @@ -52,8 +52,10 @@ Each workflow instance managed by the engine is represented as one or more spans There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine: -- `dapr.internal.wfengine.workflow` -- `dapr.internal.wfengine.activity` +- `dapr.internal.{namespace}.{appID}.workflow` +- `dapr.internal.{namespace}.{appID}.activity` + +The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured. The `{appID}` value is the app's ID. For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`. The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario: @@ -61,11 +63,13 @@ The following diagram demonstrates how internal workflow actors operate in a Kub Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. 
However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist. -There are two types of actors registered by the Dapr sidecar for workflow: the _workflow_ actor and the _activity_ actor. The next sections will go into more details on each. +{{% alert title="Note" color="primary" %}} +The internal workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK. If an app never registers a workflow, then the internal workflow actors are never registered. +{{% /alert %}} ### Workflow actors -A new instance of the `dapr.internal.wfengine.workflow` actor is activated for every workflow instance that gets created. The ID of the _workflow_ actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service. +Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service. Each workflow actor saves its state using the following keys in the configured state store: @@ -94,17 +98,13 @@ To summarize: ### Activity actors -A new instance of the `dapr.internal.wfengine.activity` actor is activated for every activity task that gets scheduled by a workflow. The ID of the _activity_ actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371` and is the third activity to be scheduled by the workflow, it's ID will be `876bf371#2` where `2` is the sequence number. 
+Activity actors are responsible for managing the state and placement of all workflow activity invocations. A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow. The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371` and the activity is the third to be scheduled by the workflow, its ID will be `876bf371::2` where `2` is the sequence number. Each activity actor stores a single key into the state store: | Key | Description | | --- | ----------- | -| `activityreq-N` | The key contains the activity invocation payload, which includes the serialized activity input data. The `N` value is a 64-bit unsigned integer that represents the _generation_ of the workflow, a concept which is outside the scope of this documentation. | - -{{% alert title="Warning" color="warning" %}} -In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), activity actor state will remain in the state store even after the activity task has completed. Scheduling a large number of workflow activities could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of completed activity state. -{{% /alert %}} +| `activityState` | The key contains the activity invocation payload, which includes the serialized activity input data. This key is deleted automatically after the activity invocation has completed. | The following diagram illustrates the typical lifecycle of an activity actor.
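The naming scheme described in this patch can be sketched in a few lines. This is an illustrative sketch of the documented formats only — not Dapr source code — assuming the actor type format `dapr.internal.{namespace}.{appID}.workflow`/`.activity` and the activity actor ID format `{workflowID}::{sequenceNumber}`:

```python
# Illustrative sketch (not Dapr source code) of the internal actor naming
# scheme described in the patch above.

def workflow_actor_type(namespace: str, app_id: str) -> str:
    """Type of the internal workflow actor registered for a given app."""
    return f"dapr.internal.{namespace}.{app_id}.workflow"

def activity_actor_type(namespace: str, app_id: str) -> str:
    """Type of the internal activity actor registered for a given app."""
    return f"dapr.internal.{namespace}.{app_id}.activity"

def activity_actor_id(workflow_id: str, sequence_number: int) -> str:
    """Activity actor ID: the workflow ID plus a 0-based sequence number."""
    return f"{workflow_id}::{sequence_number}"

# An app named "wfapp" in the default namespace, whose workflow 876bf371
# schedules its third activity (sequence number 2):
print(workflow_actor_type("default", "wfapp"))  # dapr.internal.default.wfapp.workflow
print(activity_actor_id("876bf371", 2))         # 876bf371::2
```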
From 0ef572f3eb391ac35b7fd790eb7d68c00c420512 Mon Sep 17 00:00:00 2001 From: Chris Gillum Date: Mon, 1 May 2023 18:21:21 -0700 Subject: [PATCH 02/20] Changes to v1.10 HTTP APIs for v1.11 Signed-off-by: Chris Gillum --- .../workflow/howto-manage-workflow.md | 6 +- .../content/en/reference/api/workflow_api.md | 66 ++++++------------- 2 files changed, 25 insertions(+), 47 deletions(-) diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md index b876c5d9f..34aa10603 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/howto-manage-workflow.md @@ -45,9 +45,11 @@ Manage your workflow using HTTP calls. The example below plugs in the properties To start your workflow with an ID `12345678`, run: ```bash -POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678/start +POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678 ``` +Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes. + ### Terminate workflow To terminate your workflow with an ID `12345678`, run: @@ -61,7 +63,7 @@ POST http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate To fetch workflow information (outputs and inputs) with an ID `12345678`, run: ```bash -GET http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678 +GET http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678 ``` Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}). 
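The rules introduced by this patch — the new URL shapes and the instance-ID character restriction — can be captured in a small helper. This is a hedged sketch, not part of any Dapr SDK; the host and port are the defaults used in the examples above:

```python
import re

# Allowed characters for workflow instance IDs, per the note above:
# alphanumeric characters, underscores, and dashes.
VALID_INSTANCE_ID = re.compile(r"^[A-Za-z0-9_-]+$")

DAPR_HOST = "http://localhost:3500"  # default Dapr HTTP endpoint from the examples

def start_url(workflow_name, instance_id, host=DAPR_HOST):
    """Build the v1.0-alpha1 'start workflow' URL, validating the instance ID."""
    if not VALID_INSTANCE_ID.match(instance_id):
        raise ValueError(f"invalid workflow instance ID: {instance_id!r}")
    return f"{host}/v1.0-alpha1/workflows/dapr/{workflow_name}/start?instanceID={instance_id}"

def terminate_url(instance_id, host=DAPR_HOST):
    """Build the 'terminate workflow' URL for a running instance."""
    return f"{host}/v1.0-alpha1/workflows/dapr/{instance_id}/terminate"

def status_url(instance_id, host=DAPR_HOST):
    """Build the 'fetch workflow information' URL."""
    return f"{host}/v1.0-alpha1/workflows/dapr/{instance_id}"

print(start_url("OrderProcessingWorkflow", "12345678"))
# http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
```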
diff --git a/daprdocs/content/en/reference/api/workflow_api.md b/daprdocs/content/en/reference/api/workflow_api.md index ebf7ad2c7..442b206ad 100644 --- a/daprdocs/content/en/reference/api/workflow_api.md +++ b/daprdocs/content/en/reference/api/workflow_api.md @@ -10,29 +10,25 @@ Dapr provides users with the ability to interact with workflows and comes with a ## Start workflow request -Start a workflow instance with the given name and instance ID. +Start a workflow instance with the given name and optionally, an instance ID. ```bash -POST http://localhost:3500/v1.0-alpha1/workflows////start +POST http://localhost:3500/v1.0-alpha1/workflows///start[?instanceId=] ``` +Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes. + ### URL parameters Parameter | Description --------- | ----------- `workflowComponentName` | Current default is `dapr` for Dapr Workflows `workflowName` | Identify the workflow type -`instanceId` | Unique value created for each run of a specific workflow +`instanceId` | (Optional) Unique value created for each run of a specific workflow ### Request content -In the request you can pass along relevant input information that will be passed to the workflow: - -```json -{ - "input": // argument(s) to pass to the workflow which can be any valid JSON data type (such as objects, strings, numbers, arrays, etc.) -} -``` +Any request content will be passed to the workflow as input. The Dapr API passes the content as-is without attempting to interpret it. ### HTTP response codes @@ -48,9 +44,7 @@ The API call will provide a response similar to this: ```json { -  "WFInfo": { -    "instance_id": "SampleWorkflow" -  } + "instanceID": "12345678" } ``` @@ -59,7 +53,7 @@ The API call will provide a response similar to this: Terminate a running workflow instance with the given name and instance ID. 
```bash -POST http://localhost:3500/v1.0-alpha1/workflows///terminate +POST http://localhost:3500/v1.0-alpha1/workflows//terminate ``` ### URL parameters @@ -67,7 +61,6 @@ POST http://localhost:3500/v1.0-alpha1/workflows//// +GET http://localhost:3500/v1.0-alpha1/workflows// ``` ### URL parameters @@ -103,39 +88,30 @@ GET http://localhost:3500/v1.0-alpha1/workflows// Date: Tue, 2 May 2023 15:22:57 -0400 Subject: [PATCH 03/20] GCP PubSub Certification Tests updates Signed-off-by: Roberto J Rojas --- .../supported-pubsub/setup-gcp-pubsub.md | 68 ++++++++++++++++++- daprdocs/data/components/pubsub/gcp.yaml | 2 +- 2 files changed, 68 insertions(+), 2 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index 4f06517af..79b695a21 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -25,6 +25,10 @@ spec: value: service_account - name: projectId value: # replace + - name: endpoint # Optional. + value: "http://localhost:8085" + - name: consumerID # Optional - defaults to the app's own ID + value: - name: identityProjectId value: # replace - name: privateKeyId @@ -46,11 +50,17 @@ spec: - name: disableEntityManagement value: "false" - name: enableMessageOrdering - value: "false" + value: "false" + - name: orderingKey # Optional + value: - name: maxReconnectionAttempts # Optional value: 30 - name: connectionRecoveryInSec # Optional value: 2 + - name: deadLetterTopic # Optional + value: + - name: maxDeliveryAttempts # Optional + value: 5 ``` {{% alert title="Warning" color="warning" %}} The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). 
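With a component spec like the one above in place, an application publishes through the Dapr sidecar's pub/sub HTTP API (`POST /v1.0/publish/{pubsubname}/{topic}`). A minimal sketch follows; the component name `gcp-pubsub` and topic `orders` are illustrative assumptions, not values from the spec above:

```python
import json
import urllib.request

# Hedged sketch: building a publish request against the Dapr HTTP pub/sub
# API for a component configured as above. The component name ("gcp-pubsub")
# and topic ("orders") are assumed for illustration.

def build_publish_request(pubsub_name, topic, payload, dapr_port=3500):
    """Return an urllib Request that publishes `payload` to the given topic."""
    url = f"http://localhost:{dapr_port}/v1.0/publish/{pubsub_name}/{topic}"
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_publish_request("gcp-pubsub", "orders", {"orderId": 42})
print(req.full_url)  # http://localhost:3500/v1.0/publish/gcp-pubsub/orders
```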
@@ -62,6 +72,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr |--------------------|:--------:|---------|---------| | type | N | GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account` | projectId | Y | GCP project id| `myproject-123` +| endpoint | N | GCP endpoint for the component to use. Only used for local development with, for example, [GCP Pub/Sub Emaulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unncessary when running against the GCP production API. | `"http://localhost:8085"` +| `consumerID` | N | the Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer, i.e. a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the dapr runtime will set it to the dapr application ID. The `consumerID` along with the `topic` provided as part of the request are used to build the Pub/Sub subcription ID | | identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"` | privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"` | privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B` @@ -73,18 +85,72 @@ The above example uses secrets as plain strings. 
It is recommended to use a secr | clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com` | disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"` | enableMessageOrdering | N | When set to `"true"`, subscribed messages will be received in order, depending on publishing and permissions configuration. | `"true"`, `"false"` +| orderingKey |N | The key provided in the request. It's used when `enableMessageOrdering` is set to `true` to order messages based on such key. | "my-orderingkey" | maxReconnectionAttempts | N |Defines the maximum number of reconnect attempts. Default: `30` | `30` | connectionRecoveryInSec | N |Time in seconds to wait between connection recovery attempts. Default: `2` | `2` +| deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic **must** exist before using this component. | `"myapp-dlq"` +| maxDeliveryAttempts | N | Maximun number of attempts to deliver the message. If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximun number of attempts for failed processing of messages, once that number is reached, the message will be moved to the dead-letter Topic. Default: `5` | `5` + {{% alert title="Warning" color="warning" %}} If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly. 
{{% /alert %}} ## Create a GCP Pub/Sub + +{{< tabs "Self-Hosted" "GCP" >}} + +{{% codetab %}} +For local development, the [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator) is used to test the GCP Pub/Sub Component. Follow [these instructions](https://cloud.google.com/pubsub/docs/emulator#start) to run the GCP Pub/Sub Emulator. + +To run the GCP Pub/Sub Emulator locally using Docker, use the following `docker-compose.yaml`: + +```yaml +version: '3' +services: + pubsub: + image: gcr.io/google.com/cloudsdktool/cloud-sdk:422.0.0-emulators + ports: + - "8085:8085" + container_name: gcp-pubsub + entrypoint: gcloud beta emulators pubsub start --project local-test-prj --host-port 0.0.0.0:8085 + +``` + +In order to use the GCP Pub/Sub Emulator with your pub/sub binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against the GCP Production API. + +The **projectId** attribute must match the `--project` used in either the `docker-compose.yaml` or Docker command. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: gcp-pubsub +spec: + type: pubsub.gcp.pubsub + version: v1 + metadata: + - name: projectId + value: "local-test-prj" + - name: consumerID + value: "testConsumer" + - name: endpoint + value: "localhost:8085" +``` + +{{% /codetab %}} + + +{{% codetab %}} + You can use either "explicit" or "implicit" credentials to configure access to your GCP pubsub instance. If using explicit, most fields are required. Implicit relies on dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) which has the necessary permissions to access pubsub. In implicit mode, only the `projectId` attribute is needed, all other are optional. Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up Google Cloud Pub/Sub system. 
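A minimal sketch of the implicit-credentials case described above — the component name is illustrative, and only `projectId` is required:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcp-pubsub
spec:
  type: pubsub.gcp.pubsub
  version: v1
  metadata:
  - name: projectId
    value: "myproject-123"   # the only required field in implicit mode
```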
+{{% /codetab %}} + +{{< /tabs >}} + ## Related links - [Basic schema for a Dapr component]({{< ref component-schema >}}) - Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components diff --git a/daprdocs/data/components/pubsub/gcp.yaml b/daprdocs/data/components/pubsub/gcp.yaml index 815ced19b..ce654f136 100644 --- a/daprdocs/data/components/pubsub/gcp.yaml +++ b/daprdocs/data/components/pubsub/gcp.yaml @@ -1,6 +1,6 @@ - component: GCP Pub/Sub link: setup-gcp-pubsub - state: Alpha + state: Stable version: v1 since: "1.0" features: From 12a53d7f3bea13c748807b9cedad29ea5286cbc8 Mon Sep 17 00:00:00 2001 From: Roberto J Rojas Date: Tue, 2 May 2023 16:09:21 -0400 Subject: [PATCH 04/20] updates Signed-off-by: Roberto J Rojas --- .../supported-pubsub/setup-gcp-pubsub.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index 79b695a21..59f71d5d4 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -70,9 +70,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | Field | Required | Details | Example | |--------------------|:--------:|---------|---------| -| type | N | GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account` | projectId | Y | GCP project id| `myproject-123` -| endpoint | N | GCP endpoint for the component to use. Only used for local development with, for example, [GCP Pub/Sub Emaulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unncessary when running against the GCP production API. 
| `"http://localhost:8085"` +| endpoint | N | GCP endpoint for the component to use. Only used for local development with, for example, [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unncessary when running against the GCP production API. | `"http://localhost:8085"` | `consumerID` | N | the Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer, i.e. a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the dapr runtime will set it to the dapr application ID. The `consumerID` along with the `topic` provided as part of the request are used to build the Pub/Sub subcription ID | | identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"` | privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"` @@ -90,12 +89,18 @@ The above example uses secrets as plain strings. It is recommended to use a secr | connectionRecoveryInSec | N |Time in seconds to wait between connection recovery attempts. Default: `2` | `2` | deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic **must** exist before using this component. | `"myapp-dlq"` | maxDeliveryAttempts | N | Maximun number of attempts to deliver the message. If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximun number of attempts for failed processing of messages, once that number is reached, the message will be moved to the dead-letter Topic. Default: `5` | `5` +| type | N | **DEPRECATED** GCP credentials type. Only `service_account` is supported. 
Defaults to `service_account` | `service_account` + {{% alert title="Warning" color="warning" %}} If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly. {{% /alert %}} +## GCP Credentials + +The GCP Pub/Sub component uses the GCP Go Client Libraries and by default it authenticates using **Application Default Credentials** as explained in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) + ## Create a GCP Pub/Sub {{< tabs "Self-Hosted" "GCP" >}} From 3e33c52191312e20e99e8d85e28bcea92bbc4c13 Mon Sep 17 00:00:00 2001 From: Roberto J Rojas Date: Tue, 2 May 2023 16:43:21 -0400 Subject: [PATCH 05/20] AWS Bindings S3 Certification Tests 2247 Signed-off-by: Roberto J Rojas --- .../supported-bindings/s3.md | 69 +++++++++++++++++++ daprdocs/data/components/bindings/aws.yaml | 2 +- 2 files changed, 70 insertions(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index bb5e1f619..a783eef8c 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -70,6 +70,11 @@ The above example uses secrets as plain strings. 
It is recommended to use a secr When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using. {{% /alert %}} + +### S3 Bucket Creation +{{< tabs "Minio" "Localstack" "AWS" >}} + +{{% codetab %}} ### Using with Minio [Minio](https://min.io/) is a service that exposes local storage as S3-compatible block storage, and it's a popular alternative to S3 especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks: @@ -78,6 +83,70 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet 3. The value for `region` is not important; you can set it to `us-east-1`. 4. Depending on your environment, you may need to set `disableSSL` to `true` if you're connecting to Minio using a non-secure connection (using the `http://` protocol). If you are using a secure connection (`https://` protocol) but with a self-signed certificate, you may need to set `insecureSSL` to `true`. +{{% /codetab %}} + +{{% codetab %}} +For local development, the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run localstack. 
+ +To run localstack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following: + +```yaml +version: "3.8" + +services: + localstack: + container_name: "cont-aws-s3" + image: localstack/localstack:1.4.0 + ports: + - "127.0.0.1:4566:4566" + environment: + - DEBUG=1 + - DOCKER_HOST=unix:///var/run/docker.sock + volumes: + - "/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh" # init hook + - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack" + - "/var/run/docker.sock:/var/run/docker.sock" +``` + +To able to use the S3 component, the bucket to be used needs to existing prior to use, so the example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) Hook to setup the bucket. + +In order to use localstack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS. + + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: aws-s3 + namespace: default +spec: + type: bindings.aws.s3 + version: v1 + metadata: + - name: bucket + value: conformance-test-docker + - name: endpoint + value: "http://localhost:4566" + - name: accessKey + value: "my-access" + - name: secretKey + value: "my-secret" + - name: region + value: "us-east-1" +``` + +{{% /codetab %}} + +{{% codetab %}} + +To use the S3 component, the bucket must exist before use. Follow [these instructions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) to create an S3 bucket. + +{{% /codetab %}} + + + +{{< /tabs >}} + ## Binding support This component supports **output binding** with the following operations: diff --git a/daprdocs/data/components/bindings/aws.yaml b/daprdocs/data/components/bindings/aws.yaml index 50b5a3607..e75089445 100644 --- a/daprdocs/data/components/bindings/aws.yaml +++ b/daprdocs/data/components/bindings/aws.yaml @@ -8,7 +8,7 @@ output: true - component: AWS
S3 link: s3 - state: Alpha + state: Stable version: v1 since: "1.0" features: From 113875fb3af1464a02804d4eabb8dfcb7b4afb59 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:57:08 -0400 Subject: [PATCH 06/20] Update daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../components-reference/supported-pubsub/setup-gcp-pubsub.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index 59f71d5d4..643b69078 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -72,7 +72,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr |--------------------|:--------:|---------|---------| | projectId | Y | GCP project id| `myproject-123` | endpoint | N | GCP endpoint for the component to use. Only used for local development with, for example, [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unncessary when running against the GCP production API. | `"http://localhost:8085"` -| `consumerID` | N | the Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer, i.e. a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the dapr runtime will set it to the dapr application ID. The `consumerID` along with the `topic` provided as part of the request are used to build the Pub/Sub subcription ID | +| `consumerID` | N | The Consumer ID organizes one or more consumers into a group. 
Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID | | identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"` | privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"` | privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B` From fbfb0381d45ac0ec8c6e194251f15184e4116440 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:57:18 -0400 Subject: [PATCH 07/20] Update daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../components-reference/supported-pubsub/setup-gcp-pubsub.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index 643b69078..326c614c3 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -71,7 +71,7 @@ The above example uses secrets as plain strings. 
It is recommended to use a secr | Field | Required | Details | Example | |--------------------|:--------:|---------|---------| | projectId | Y | GCP project id| `myproject-123` -| endpoint | N | GCP endpoint for the component to use. Only used for local development with, for example, [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unncessary when running against the GCP production API. | `"http://localhost:8085"` +| endpoint | N | GCP endpoint for the component to use. Only used for local development (for example) with [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"` | `consumerID` | N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. 
The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID | | identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"` | privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"` From e6ef2605ab015399cb290bd553b07d3152df600d Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:57:25 -0400 Subject: [PATCH 08/20] Update daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../components-reference/supported-pubsub/setup-gcp-pubsub.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index 326c614c3..cc9a3e222 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -88,7 +88,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr | maxReconnectionAttempts | N |Defines the maximum number of reconnect attempts. Default: `30` | `30` | connectionRecoveryInSec | N |Time in seconds to wait between connection recovery attempts. Default: `2` | `2` | deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic **must** exist before using this component. | `"myapp-dlq"` -| maxDeliveryAttempts | N | Maximun number of attempts to deliver the message. 
If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximun number of attempts for failed processing of messages, once that number is reached, the message will be moved to the dead-letter Topic. Default: `5` | `5` +| maxDeliveryAttempts | N | Maximum number of attempts to deliver the message. If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximum number of attempts for failed processing of messages. Once that number is reached, the message will be moved to the dead-letter topic. Default: `5` | `5` | type | N | **DEPRECATED** GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account` From 786cf24982053b686c2c04b98217e0e364f94709 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:57:32 -0400 Subject: [PATCH 09/20] Update daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../components-reference/supported-pubsub/setup-gcp-pubsub.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md index cc9a3e222..69e98e64b 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-gcp-pubsub.md @@ -99,7 +99,7 @@ If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.vi ## GCP Credentials -The GCP Pub/Sub component uses the GCP Go Client Libraries and by default it authenticates using **Application Default Credentials** as explained in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) +Since the GCP Pub/Sub component uses the GCP Go 
Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide. ## Create a GCP Pub/Sub From aabbaea8d0ac6d9093bcddf2d3331d36133cbf7f Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:58:30 -0400 Subject: [PATCH 10/20] Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../en/reference/components-reference/supported-bindings/s3.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index a783eef8c..33759b91a 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -108,7 +108,7 @@ services: - "/var/run/docker.sock:/var/run/docker.sock" ``` -To able to use the S3 component, the bucket to be used needs to existing prior to use, so the example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) Hook to setup the bucket. +To use the S3 component, you need to use an existing bucket. The example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) to setup the bucket. In order to use localstack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS. 
From 1a18078605db82cdb24e399933bf8214e93eaa10 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:58:36 -0400 Subject: [PATCH 11/20] Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../en/reference/components-reference/supported-bindings/s3.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index 33759b91a..38986ec6e 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -139,7 +139,7 @@ spec: {{% codetab %}} -To able to use the S3 component, the bucket to be used needs to existing prior to use, [Follow](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) +To use the S3 component, you need to use an existing bucket. Follow the [AWS documentation for creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html). 
{{% /codetab %}} From f9170853d433f2db404d878b62fc32ed2720d627 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:58:43 -0400 Subject: [PATCH 12/20] Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../en/reference/components-reference/supported-bindings/s3.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index 38986ec6e..ef3fe03c7 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -72,7 +72,7 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet ### S3 Bucket Creation -{{< tabs "Minio" "Localstack" "AWS" >}} +{{< tabs "Minio" "LocalStack" "AWS" >}} {{% codetab %}} ### Using with Minio From 30d11effa5ca32c40458ddd7a2334f9e956ba4ee Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:58:50 -0400 Subject: [PATCH 13/20] Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../en/reference/components-reference/supported-bindings/s3.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index ef3fe03c7..211a4889a 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -88,7 +88,7 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS 
Kubernet {{% codetab %}} For local development, the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run localstack. -To run localstack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following: +To run LocalStack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following: ```yaml version: "3.8" From 069bf8714c4ff393b3364edbf783b74ffb337523 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:58:57 -0400 Subject: [PATCH 14/20] Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../en/reference/components-reference/supported-bindings/s3.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index 211a4889a..68d39ed55 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -86,7 +86,7 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet {{% /codetab %}} {{% codetab %}} -For local development, the [localstack project](https://github.com/localstack/localstack) is used to integrate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run localstack. +For local development, the [LocalStack project](https://github.com/localstack/localstack) is used to integrate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run LocalStack. 
To run LocalStack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following: From c27be5444e3e545dc30739500e2c5f1e06905c68 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Fri, 5 May 2023 08:59:03 -0400 Subject: [PATCH 15/20] Update daprdocs/content/en/reference/components-reference/supported-bindings/s3.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas --- .../en/reference/components-reference/supported-bindings/s3.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index 68d39ed55..4d50e3447 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -110,7 +110,7 @@ services: To use the S3 component, you need to use an existing bucket. The example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) to setup the bucket. -In order to use localstack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS. +To use LocalStack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS. 
```yaml From c0e9ba1e0dfd25c1e6d6dd6fab1e653432a06bd4 Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Mon, 8 May 2023 12:01:42 -0400 Subject: [PATCH 16/20] [AWS DynamoDB] Support for transaction 2712 (#3389) Signed-off-by: Roberto J Rojas --- daprdocs/data/components/state_stores/aws.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/data/components/state_stores/aws.yaml b/daprdocs/data/components/state_stores/aws.yaml index e8af47bc1..1d5be544f 100644 --- a/daprdocs/data/components/state_stores/aws.yaml +++ b/daprdocs/data/components/state_stores/aws.yaml @@ -5,7 +5,7 @@ since: "1.10" features: crud: true - transactions: false + transactions: true etag: true ttl: true query: false From 72b23f52240c054ed3bb6f72316e0bcbbecb58f5 Mon Sep 17 00:00:00 2001 From: Yaron Schneider Date: Mon, 8 May 2023 20:08:06 -0700 Subject: [PATCH 17/20] Attempt fix for build system (#3390) * attempt fix for build system Signed-off-by: yaron2 * change checkout version Signed-off-by: yaron2 * elevated Signed-off-by: yaron2 * change system to global Signed-off-by: yaron2 * custom build command Signed-off-by: yaron2 * fix app_build_command Signed-off-by: yaron2 * fix app_build_command Signed-off-by: yaron2 --------- Signed-off-by: yaron2 --- .github/workflows/website-v1-11.yml | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/.github/workflows/website-v1-11.yml b/.github/workflows/website-v1-11.yml index 5929133c0..f9df31cf3 100644 --- a/.github/workflows/website-v1-11.yml +++ b/.github/workflows/website-v1-11.yml @@ -18,7 +18,7 @@ jobs: - uses: actions/checkout@v2 with: submodules: recursive - fetch-depth: 0 + fetch-depth: 0 - name: Setup Docsy run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli - name: Build And Deploy @@ -37,7 +37,7 @@ jobs: app_location: "/daprdocs" # App source code path api_location: "api" # Api source code path - optional 
output_location: "public" # Built app content directory - optional - app_build_command: "hugo" + app_build_command: "git config --global --add safe.directory /github/workspace && hugo" ###### End of Repository/Build Configurations ###### close_pull_request_job: From e83ec17619194da5af3a38d0f4e87622584b2285 Mon Sep 17 00:00:00 2001 From: joshvanl Date: Tue, 9 May 2023 13:50:07 +0100 Subject: [PATCH 18/20] Fix link to podman installation Signed-off-by: joshvanl --- .../operations/hosting/self-hosted/self-hosted-with-podman.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md index 698026b4a..b53aaf923 100644 --- a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md +++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-podman.md @@ -11,7 +11,7 @@ This article provides guidance on running Dapr with Podman on a Windows/Linux/ma ## Prerequisites - [Dapr CLI]({{< ref install-dapr-cli.md >}}) -- [Podman](https://podman.io/getting-started/installation.html) +- [Podman](https://podman.io/docs/tutorials/installation) ## Initialize Dapr environment From 06252e102db04b9349e929c60b43d9f8af73135d Mon Sep 17 00:00:00 2001 From: Roberto Rojas Date: Tue, 9 May 2023 11:10:21 -0400 Subject: [PATCH 19/20] GCP Firestore Certification Tests (#3375) * GCP Firestore Certification Tests Signed-off-by: Roberto J Rojas * Update daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas * Update daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> Signed-off-by: Roberto Rojas * gcp firestore: adds privatekey doc 
Signed-off-by: Roberto J Rojas --------- Signed-off-by: Roberto J Rojas Signed-off-by: Roberto Rojas Co-authored-by: Hannah Hunter <94493363+hhunter-ms@users.noreply.github.com> --- .../supported-state-stores/setup-firestore.md | 46 +++++++++++-------- .../data/components/state_stores/gcp.yaml | 2 +- 2 files changed, 28 insertions(+), 20 deletions(-) diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md index 9c489bbf1..6062b05e2 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-firestore.md @@ -21,30 +21,32 @@ spec: type: state.gcp.firestore version: v1 metadata: - - name: type - value: # Required. Example: "serviceaccount" - name: project_id value: # Required. + - name: endpoint # Optional. + value: "http://localhost:8432" - name: private_key_id - value: # Required. + value: # Optional. - name: private_key - value: # Required. + value: # Optional, but Required if `private_key_id` is specified. - name: client_email - value: # Required. + value: # Optional, but Required if `private_key_id` is specified. - name: client_id - value: # Required. + value: # Optional, but Required if `private_key_id` is specified. - name: auth_uri - value: # Required. + value: # Optional. - name: token_uri - value: # Required. + value: # Optional. - name: auth_provider_x509_cert_url - value: # Required. + value: # Optional. - name: client_x509_cert_url - value: # Required. + value: # Optional. - name: entity_kind value: # Optional. default: "DaprState" - name: noindex value: # Optional. default: "false" + - name: type + value: # Deprecated. ``` {{% alert title="Warning" color="warning" %}} @@ -55,17 +57,23 @@ The above example uses secrets as plain strings. 
It is recommended to use a secr | Field | Required | Details | Example | |--------------------|:--------:|---------|---------| -| type | Y | The credentials type | `"serviceaccount"` | project_id | Y | The ID of the GCP project to use | `"project-id"` -| private_key_id | Y | The ID of the prvate key to use | `"private-key-id"` -| client_email | Y | The email address for the client | `"eample@example.com"` -| client_id | Y | The client id value to use for authentication | `"client-id"` -| auth_uri | Y | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"` -| token_uri | Y | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"` -| auth_provider_x509_cert_url | Y | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"` -| client_x509_cert_url | Y | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"` +| endpoint | N | GCP endpoint for the component to use. Only used for local development with (for example) [GCP Datastore Emulator](https://cloud.google.com/datastore/docs/tools/datastore-emulator). The `endpoint` is unnecessary when running against the GCP production API. 
| `"localhost:8432"` +| private_key_id | N | The ID of the private key to use | `"private-key-id"` +| private_key | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B` +| client_email | N | The email address for the client | `"example@example.com"` +| client_id | N | The client id value to use for authentication | `"client-id"` +| auth_uri | N | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"` +| token_uri | N | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"` +| auth_provider_x509_cert_url | N | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"` +| client_x509_cert_url | N | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"` | entity_kind | N | The entity name in Filestore. Defaults to `"DaprState"` | `"DaprState"` | noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to `"false"` | `"true"` +| type | N | **DEPRECATED** The credentials type | `"serviceaccount"` + + +## GCP Credentials +Since the GCP Firestore component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide. ## Setup GCP Firestore {{< tabs "Self-Hosted" "Google Cloud" >}} {{% codetab %}} You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator). -You can then interact with the server using `localhost:8081`.
{{% /codetab %}} {{% codetab %}} diff --git a/daprdocs/data/components/state_stores/gcp.yaml b/daprdocs/data/components/state_stores/gcp.yaml index bd8fdc9bd..c129ebbf7 100644 --- a/daprdocs/data/components/state_stores/gcp.yaml +++ b/daprdocs/data/components/state_stores/gcp.yaml @@ -1,6 +1,6 @@ - component: GCP Firestore link: setup-firestore - state: Alpha + state: Stable version: v1 since: "1.0" features: From 6874659e3e32c84f2a5d2e14fcd3532ba738778b Mon Sep 17 00:00:00 2001 From: Michael DePouw Date: Tue, 9 May 2023 12:34:17 -0400 Subject: [PATCH 20/20] fix previous pattern link (#3396) --- .../building-blocks/workflow/workflow-patterns.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md index b92f1981b..e82217874 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md @@ -78,7 +78,7 @@ In the fan-out/fan-in design pattern, you execute multiple tasks simultaneously Diagram showing how the fan-out/fan-in workflow pattern works -In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-overview.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually: +In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-patterns.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually: - How do you control the degree of parallelism? - How do you know when to trigger subsequent aggregation steps?