mirror of https://github.com/dapr/docs.git
Merge branch 'v1.11' into wasm-url
commit ec08c7e1ef

@@ -18,7 +18,7 @@ jobs:
       - uses: actions/checkout@v2
         with:
           submodules: recursive
-          fetch-depth: 0
+          fetch-depth: 0
       - name: Setup Docsy
         run: cd daprdocs && git submodule update --init --recursive && sudo npm install -D --save autoprefixer && sudo npm install -D --save postcss-cli
       - name: Build And Deploy

@@ -37,7 +37,7 @@ jobs:
           app_location: "/daprdocs" # App source code path
           api_location: "api" # Api source code path - optional
           output_location: "public" # Built app content directory - optional
-          app_build_command: "hugo"
+          app_build_command: "git config --global --add safe.directory /github/workspace && hugo"
           ###### End of Repository/Build Configurations ######

   close_pull_request_job:

@@ -45,9 +45,11 @@ Manage your workflow using HTTP calls. The example below plugs in the properties
 To start your workflow with an ID `12345678`, run:

 ```bash
-POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678/start
+POST http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678
 ```

+Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
+
 ### Terminate workflow

 To terminate your workflow with an ID `12345678`, run:

@@ -61,7 +63,7 @@ POST http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate
 To fetch workflow information (outputs and inputs) with an ID `12345678`, run:

 ```bash
-GET http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/12345678
+GET http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678
 ```

 Learn more about these HTTP calls in the [workflow API reference guide]({{< ref workflow_api.md >}}).

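As a quick end-to-end illustration of the calls above, the following sketch (assuming a sidecar listening on port 3500 and an app that registers `OrderProcessingWorkflow`) drives the full lifecycle with `curl`:

```bash
# Start a new instance with the ID 12345678
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceID=12345678"

# Check its current status
curl "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678"

# Terminate it
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678/terminate"
```
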
@@ -52,8 +52,10 @@ Each workflow instance managed by the engine is represented as one or more spans
 There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine:

-- `dapr.internal.wfengine.workflow`
-- `dapr.internal.wfengine.activity`
+- `dapr.internal.{namespace}.{appID}.workflow`
+- `dapr.internal.{namespace}.{appID}.activity`
+
+The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured. The `{appID}` value is the app's ID. For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`.

 The following diagram demonstrates how internal workflow actors operate in a Kubernetes scenario:

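To confirm these internal actor types on a running sidecar, one option (a sketch assuming the default Dapr HTTP port 3500 and an app that has already registered a workflow) is to query the metadata API, whose `actors` section lists every registered actor type:

```bash
# List registered actor types and filter for the internal workflow actors
curl -s http://localhost:3500/v1.0/metadata | grep -o 'dapr\.internal\.[^"]*'
```
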
@@ -61,11 +63,13 @@ The following diagram demonstrates how internal workflow actors operate in a Kub
 Just like user-defined actors, internal workflow actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.

-There are two types of actors registered by the Dapr sidecar for workflow: the _workflow_ actor and the _activity_ actor. The next sections will go into more details on each.
+{{% alert title="Note" color="primary" %}}
+The internal workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK. If an app never registers a workflow, then the internal workflow actors are never registered.
+{{% /alert %}}

 ### Workflow actors

-A new instance of the `dapr.internal.wfengine.workflow` actor is activated for every workflow instance that gets created. The ID of the _workflow_ actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
+Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.

 Each workflow actor saves its state using the following keys in the configured state store:

@@ -94,17 +98,13 @@ To summarize:
 ### Activity actors

-A new instance of the `dapr.internal.wfengine.activity` actor is activated for every activity task that gets scheduled by a workflow. The ID of the _activity_ actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371` and is the third activity to be scheduled by the workflow, it's ID will be `876bf371#2` where `2` is the sequence number.
+Activity actors are responsible for managing the state and placement of all workflow activity invocations. A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow. The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0). For example, if a workflow has an ID of `876bf371`, the third activity it schedules will have an ID of `876bf371::2`, where `2` is the sequence number.

 Each activity actor stores a single key in the state store:

 | Key | Description |
 | --- | ----------- |
-| `activityreq-N` | The key contains the activity invocation payload, which includes the serialized activity input data. The `N` value is a 64-bit unsigned integer that represents the _generation_ of the workflow, a concept which is outside the scope of this documentation. |
-
-{{% alert title="Warning" color="warning" %}}
-In the [Alpha release of the Dapr Workflow engine]({{< ref support-preview-features.md >}}), activity actor state will remain in the state store even after the activity task has completed. Scheduling a large number of workflow activities could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of completed activity state.
-{{% /alert %}}
+| `activityState` | The key contains the activity invocation payload, which includes the serialized activity input data. This key is deleted automatically after the activity invocation has completed. |

 The following diagram illustrates the typical lifecycle of an activity actor.

@@ -78,7 +78,7 @@ In the fan-out/fan-in design pattern, you execute multiple tasks simultaneously
 <img src="/images/workflow-overview/workflows-fanin-fanout.png" width=800 alt="Diagram showing how the fan-out/fan-in workflow pattern works">

-In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-overview.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:
+In addition to the challenges mentioned in [the previous pattern]({{< ref "workflow-patterns.md#task-chaining" >}}), there are several important questions to consider when implementing the fan-out/fan-in pattern manually:

 - How do you control the degree of parallelism?
 - How do you know when to trigger subsequent aggregation steps?

@@ -11,7 +11,7 @@ This article provides guidance on running Dapr with Podman on a Windows/Linux/ma
 ## Prerequisites

 - [Dapr CLI]({{< ref install-dapr-cli.md >}})
-- [Podman](https://podman.io/getting-started/installation.html)
+- [Podman](https://podman.io/docs/tutorials/installation)

 ## Initialize Dapr environment

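With Podman installed, initializing the local Dapr environment follows the usual `dapr init` flow; the sketch below assumes a Dapr CLI version that supports the `--container-runtime` flag:

```bash
# Tell the CLI to use Podman instead of Docker for the containers that dapr init creates
dapr init --container-runtime podman
```
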
@@ -10,29 +10,25 @@ Dapr provides users with the ability to interact with workflows and comes with a
 ## Start workflow request

-Start a workflow instance with the given name and instance ID.
+Start a workflow instance with the given name and, optionally, an instance ID.

 ```bash
-POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
+POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/start[?instanceId=<instanceId>]
 ```

+Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
+
 ### URL parameters

 Parameter | Description
 --------- | -----------
 `workflowComponentName` | Current default is `dapr` for Dapr Workflows
 `workflowName` | Identify the workflow type
-`instanceId` | Unique value created for each run of a specific workflow
+`instanceId` | (Optional) Unique value created for each run of a specific workflow

 ### Request content

-In the request you can pass along relevant input information that will be passed to the workflow:
-
-```json
-{
-  "input": // argument(s) to pass to the workflow which can be any valid JSON data type (such as objects, strings, numbers, arrays, etc.)
-}
-```
+Any request content will be passed to the workflow as input. The Dapr API passes the content as-is without attempting to interpret it.

 ### HTTP response codes

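For example, a start request that passes a JSON object as workflow input could look like the sketch below (the order payload and instance ID are hypothetical; the body is forwarded to the workflow unmodified):

```bash
curl -X POST "http://localhost:3500/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/start?instanceId=order-12345" \
  -H "Content-Type: application/json" \
  -d '{"Name": "Paperclips", "Quantity": 1, "TotalCost": 9.95}'
```
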
@@ -48,9 +44,7 @@ The API call will provide a response similar to this:
 ```json
 {
-  "WFInfo": {
-    "instance_id": "SampleWorkflow"
-  }
+  "instanceID": "12345678"
 }
 ```

@@ -59,7 +53,7 @@ The API call will provide a response similar to this:
 Terminate a running workflow instance with the given name and instance ID.

 ```bash
-POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
+POST http://localhost:3500/v1.0-alpha1/workflows/<instanceId>/terminate
 ```

 ### URL parameters

@@ -67,7 +61,6 @@ POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instan
 Parameter | Description
 --------- | -----------
 `workflowComponentName` | Current default is `dapr` for Dapr Workflows
-`workflowName` | Identify the workflow type
 `instanceId` | Unique value created for each run of a specific workflow

 ### HTTP response codes

@@ -80,22 +73,14 @@ Code | Description
 ### Response content

-The API call will provide a response similar to this:
-
-```bash
-HTTP/1.1 202 Accepted
-Server: fasthttp
-Date: Thu, 12 Jan 2023 21:31:16 GMT
-Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
-Connection: close
-```
+This API does not return any content.

 ### Get workflow request

 Get information about a given workflow instance.

 ```bash
-GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
+GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>
 ```

 ### URL parameters

@@ -103,39 +88,30 @@ GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflo
 Parameter | Description
 --------- | -----------
 `workflowComponentName` | Current default is `dapr` for Dapr Workflows
-`workflowName` | Identify the workflow type
 `instanceId` | Unique value created for each run of a specific workflow

 ### HTTP response codes

 Code | Description
 ---- | -----------
-`202` | Accepted
+`200` | OK
 `400` | Request was malformed
 `500` | Request formatted correctly, error in dapr code or underlying component

 ### Response content

-The API call will provide a response similar to this:
-
-```bash
-HTTP/1.1 202 Accepted
-Server: fasthttp
-Date: Thu, 12 Jan 2023 21:31:16 GMT
-Content-Type: application/json
-Content-Length: 139
-Traceparent: 00-e3dedffedbeb9efbde9fbed3f8e2d8-5f38960d43d24e98-01
-Connection: close
+The API call will provide a JSON response similar to this:
+
+```json
 {
-  "WFInfo": {
-    "instance_id": "SampleWorkflow"
-  },
-  "start_time": "2023-01-12T21:31:13Z",
-  "metadata": {
-    "status": "Running",
-    "task_queue": "WorkflowSampleQueue"
-  }
+  "createdAt": "2023-01-12T21:31:13Z",
+  "instanceID": "12345678",
+  "lastUpdatedAt": "2023-01-12T21:31:13Z",
+  "properties": {
+    "property1": "value1",
+    "property2": "value2",
+  },
+  "runtimeStatus": "RUNNING",
 }
 ```

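A small sketch of consuming this response (assuming `jq` is installed and an instance with the ID `12345678` exists):

```bash
# Fetch the workflow metadata and print only the runtime status, for example "RUNNING"
curl -s "http://localhost:3500/v1.0-alpha1/workflows/dapr/12345678" | jq -r '.runtimeStatus'
```
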
@@ -70,6 +70,11 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
 {{% /alert %}}

+### S3 Bucket Creation
+{{< tabs "Minio" "LocalStack" "AWS" >}}
+
+{{% codetab %}}
 ### Using with Minio

 [Minio](https://min.io/) is a service that exposes local storage as S3-compatible block storage, and it's a popular alternative to S3 especially in development environments. You can use the S3 binding with Minio too, with some configuration tweaks:

@@ -78,6 +83,70 @@ When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernet
 3. The value for `region` is not important; you can set it to `us-east-1`.
 4. Depending on your environment, you may need to set `disableSSL` to `true` if you're connecting to Minio using a non-secure connection (using the `http://` protocol). If you are using a secure connection (`https://` protocol) but with a self-signed certificate, you may need to set `insecureSSL` to `true`.

+{{% /codetab %}}
+
+{{% codetab %}}
+For local development, the [LocalStack project](https://github.com/localstack/localstack) is used to integrate AWS S3. Follow [these instructions](https://github.com/localstack/localstack#running) to run LocalStack.
+
+To run LocalStack locally from the command line using Docker, use a `docker-compose.yaml` similar to the following:
+
+```yaml
+version: "3.8"
+
+services:
+  localstack:
+    container_name: "cont-aws-s3"
+    image: localstack/localstack:1.4.0
+    ports:
+      - "127.0.0.1:4566:4566"
+    environment:
+      - DEBUG=1
+      - DOCKER_HOST=unix:///var/run/docker.sock
+    volumes:
+      - "<PATH>/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"  # init hook
+      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+      - "/var/run/docker.sock:/var/run/docker.sock"
+```
+
+To use the S3 component, you need to use an existing bucket. The example above uses a [LocalStack Initialization Hook](https://docs.localstack.cloud/references/init-hooks/) to set up the bucket.

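A minimal `init-aws.sh` for that hook might look like the sketch below; it assumes LocalStack's bundled `awslocal` CLI and uses the same bucket name as the component example that follows:

```bash
#!/bin/bash
# Create the bucket the Dapr S3 binding expects once LocalStack reports ready
awslocal s3 mb s3://conformance-test-docker
```
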
+To use LocalStack with your S3 binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against production AWS.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: aws-s3
+  namespace: default
+spec:
+  type: bindings.aws.s3
+  version: v1
+  metadata:
+  - name: bucket
+    value: conformance-test-docker
+  - name: endpoint
+    value: "http://localhost:4566"
+  - name: accessKey
+    value: "my-access"
+  - name: secretKey
+    value: "my-secret"
+  - name: region
+    value: "us-east-1"
+```

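With a component like the one above loaded, one way to exercise the binding (a sketch assuming a sidecar on the default HTTP port 3500 and the component name `aws-s3`) is through Dapr's output binding API:

```bash
# Upload a small object through the S3 output binding
curl -X POST http://localhost:3500/v1.0/bindings/aws-s3 \
  -H "Content-Type: application/json" \
  -d '{"operation": "create", "data": "Hello from Dapr", "metadata": {"key": "hello.txt"}}'
```
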
+{{% /codetab %}}
+
+{{% codetab %}}
+
+To use the S3 component, you need to use an existing bucket. Follow the [AWS documentation for creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).
+
+{{% /codetab %}}
+
+{{< /tabs >}}

 ## Binding support

 This component supports **output binding** with the following operations:

|
@ -25,6 +25,10 @@ spec:
|
|||
value: service_account
|
||||
- name: projectId
|
||||
value: <PROJECT_ID> # replace
|
||||
- name: endpoint # Optional.
|
||||
value: "http://localhost:8085"
|
||||
- name: consumerID # Optional - defaults to the app's own ID
|
||||
value: <CONSUMER_ID>
|
||||
- name: identityProjectId
|
||||
value: <IDENTITY_PROJECT_ID> # replace
|
||||
- name: privateKeyId
|
||||
|
@@ -46,11 +50,17 @@ spec:
   - name: disableEntityManagement
     value: "false"
   - name: enableMessageOrdering
-    value: "false"
+    value: "false"
+  - name: orderingKey # Optional
+    value: <ORDERING_KEY>
   - name: maxReconnectionAttempts # Optional
     value: 30
   - name: connectionRecoveryInSec # Optional
     value: 2
+  - name: deadLetterTopic # Optional
+    value: <EXISTING_PUBSUB_TOPIC>
+  - name: maxDeliveryAttempts # Optional
+    value: 5
 ```
 {{% alert title="Warning" color="warning" %}}
 The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}).

@@ -60,8 +70,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 | Field | Required | Details | Example |
 |--------------------|:--------:|---------|---------|
 | type | N | GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`
 | projectId | Y | GCP project id | `myproject-123`
 | endpoint | N | GCP endpoint for the component to use. Only used for local development (for example) with [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"`
+| `consumerID` | N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the consumer ID is not set, the Dapr runtime will set it to the Dapr application ID. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID |
 | identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"`
 | privateKeyId | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"my-private-key"`
 | privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`

@@ -73,18 +84,78 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 | clientX509CertUrl | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/myserviceaccount%40myproject.iam.gserviceaccount.com`
 | disableEntityManagement | N | When set to `"true"`, topics and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
 | enableMessageOrdering | N | When set to `"true"`, subscribed messages will be received in order, depending on publishing and permissions configuration. | `"true"`, `"false"`
+| orderingKey | N | The key provided in the request. It's used when `enableMessageOrdering` is set to `true` to order messages based on that key. | "my-orderingkey"
 | maxReconnectionAttempts | N | Defines the maximum number of reconnect attempts. Default: `30` | `30`
 | connectionRecoveryInSec | N | Time in seconds to wait between connection recovery attempts. Default: `2` | `2`
+| deadLetterTopic | N | Name of the GCP Pub/Sub Topic. This topic **must** exist before using this component. | `"myapp-dlq"`
+| maxDeliveryAttempts | N | Maximum number of attempts to deliver the message. If `deadLetterTopic` is specified, `maxDeliveryAttempts` is the maximum number of attempts for failed processing of messages. Once that number is reached, the message will be moved to the dead-letter topic. Default: `5` | `5`
+| type | N | **DEPRECATED** GCP credentials type. Only `service_account` is supported. Defaults to `service_account` | `service_account`

 {{% alert title="Warning" color="warning" %}}
 If `enableMessageOrdering` is set to "true", the roles/viewer or roles/pubsub.viewer role will be required on the service account in order to guarantee ordering in cases where order tokens are not embedded in the messages. If this role is not given, or the call to Subscription.Config() fails for any other reason, ordering by embedded order tokens will still function correctly.
 {{% /alert %}}

+## GCP Credentials
+
+Since the GCP Pub/Sub component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
+
+## Create a GCP Pub/Sub
+
+{{< tabs "Self-Hosted" "GCP" >}}
+
+{{% codetab %}}
+For local development, the [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator) is used to test the GCP Pub/Sub Component. Follow [these instructions](https://cloud.google.com/pubsub/docs/emulator#start) to run the GCP Pub/Sub Emulator.
+
+To run the GCP Pub/Sub Emulator locally using Docker, use the following `docker-compose.yaml`:
+
+```yaml
+version: '3'
+services:
+  pubsub:
+    image: gcr.io/google.com/cloudsdktool/cloud-sdk:422.0.0-emulators
+    ports:
+      - "8085:8085"
+    container_name: gcp-pubsub
+    entrypoint: gcloud beta emulators pubsub start --project local-test-prj --host-port 0.0.0.0:8085
+```
+
+In order to use the GCP Pub/Sub Emulator with your pub/sub binding, you need to provide the `endpoint` configuration in the component metadata. The `endpoint` is unnecessary when running against the GCP Production API.
+
+The **projectId** attribute must match the `--project` used in either the `docker-compose.yaml` or Docker command.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: gcp-pubsub
+spec:
+  type: pubsub.gcp.pubsub
+  version: v1
+  metadata:
+  - name: projectId
+    value: "local-test-prj"
+  - name: consumerID
+    value: "testConsumer"
+  - name: endpoint
+    value: "localhost:8085"
+```

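Once the emulator and the component above are running, a quick way to verify the wiring (a sketch assuming a sidecar on the default HTTP port 3500 and a hypothetical topic named `testTopic`) is to publish through Dapr's pub/sub API; topics are created automatically unless `disableEntityManagement` is set to `"true"`:

```bash
# Publish a test message to the gcp-pubsub component
curl -X POST http://localhost:3500/v1.0/publish/gcp-pubsub/testTopic \
  -H "Content-Type: application/json" \
  -d '{"status": "completed"}'
```
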
+{{% /codetab %}}
+
+{{% codetab %}}
+
 You can use either "explicit" or "implicit" credentials to configure access to your GCP pubsub instance. If using explicit, most fields are required. Implicit relies on Dapr running under a Kubernetes service account (KSA) mapped to a Google service account (GSA) which has the necessary permissions to access pubsub. In implicit mode, only the `projectId` attribute is needed; all others are optional.

 Follow the instructions [here](https://cloud.google.com/pubsub/docs/quickstart-console) on setting up a Google Cloud Pub/Sub system.

+{{% /codetab %}}
+
+{{< /tabs >}}

 ## Related links
 - [Basic schema for a Dapr component]({{< ref component-schema >}})
 - Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components

@@ -21,30 +21,32 @@ spec:
   type: state.gcp.firestore
   version: v1
   metadata:
-  - name: type
-    value: <REPLACE-WITH-CREDENTIALS-TYPE> # Required. Example: "serviceaccount"
   - name: project_id
     value: <REPLACE-WITH-PROJECT-ID> # Required.
+  - name: endpoint # Optional.
+    value: "http://localhost:8432"
   - name: private_key_id
-    value: <REPLACE-WITH-PRIVATE-KEY-ID> # Required.
+    value: <REPLACE-WITH-PRIVATE-KEY-ID> # Optional.
   - name: private_key
-    value: <REPLACE-WITH-PRIVATE-KEY> # Required.
+    value: <REPLACE-WITH-PRIVATE-KEY> # Optional, but Required if `private_key_id` is specified.
   - name: client_email
-    value: <REPLACE-WITH-CLIENT-EMAIL> # Required.
+    value: <REPLACE-WITH-CLIENT-EMAIL> # Optional, but Required if `private_key_id` is specified.
   - name: client_id
-    value: <REPLACE-WITH-CLIENT-ID> # Required.
+    value: <REPLACE-WITH-CLIENT-ID> # Optional, but Required if `private_key_id` is specified.
   - name: auth_uri
-    value: <REPLACE-WITH-AUTH-URI> # Required.
+    value: <REPLACE-WITH-AUTH-URI> # Optional.
   - name: token_uri
-    value: <REPLACE-WITH-TOKEN-URI> # Required.
+    value: <REPLACE-WITH-TOKEN-URI> # Optional.
   - name: auth_provider_x509_cert_url
-    value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Required.
+    value: <REPLACE-WITH-AUTH-X509-CERT-URL> # Optional.
   - name: client_x509_cert_url
-    value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Required.
+    value: <REPLACE-WITH-CLIENT-x509-CERT-URL> # Optional.
   - name: entity_kind
     value: <REPLACE-WITH-ENTITY-KIND> # Optional. default: "DaprState"
+  - name: noindex
+    value: <REPLACE-WITH-BOOLEAN> # Optional. default: "false"
+  - name: type
+    value: <REPLACE-WITH-CREDENTIALS-TYPE> # Deprecated.
 ```

 {{% alert title="Warning" color="warning" %}}

@@ -55,17 +57,23 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 | Field | Required | Details | Example |
 |--------------------|:--------:|---------|---------|
-| type | Y | The credentials type | `"serviceaccount"`
 | project_id | Y | The ID of the GCP project to use | `"project-id"`
-| private_key_id | Y | The ID of the prvate key to use | `"private-key-id"`
-| client_email | Y | The email address for the client | `"eample@example.com"`
-| client_id | Y | The client id value to use for authentication | `"client-id"`
-| auth_uri | Y | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"`
-| token_uri | Y | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"`
-| auth_provider_x509_cert_url | Y | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
-| client_x509_cert_url | Y | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
+| endpoint | N | GCP endpoint for the component to use. Only used for local development with (for example) [GCP Datastore Emulator](https://cloud.google.com/datastore/docs/tools/datastore-emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"localhost:8432"`
+| private_key_id | N | The ID of the private key to use | `"private-key-id"`
+| privateKey | N | If using explicit credentials, this field should contain the `private_key` field from the service account json | `-----BEGIN PRIVATE KEY-----MIIBVgIBADANBgkqhkiG9w0B`
+| client_email | N | The email address for the client | `"example@example.com"`
+| client_id | N | The client id value to use for authentication | `"client-id"`
+| auth_uri | N | The authentication URI to use | `"https://accounts.google.com/o/oauth2/auth"`
+| token_uri | N | The token URI to query for Auth token | `"https://oauth2.googleapis.com/token"`
+| auth_provider_x509_cert_url | N | The auth provider certificate URL | `"https://www.googleapis.com/oauth2/v1/certs"`
+| client_x509_cert_url | N | The client certificate URL | `"https://www.googleapis.com/robot/v1/metadata/x509/x"`
 | entity_kind | N | The entity name in Firestore. Defaults to `"DaprState"` | `"DaprState"`
+| noindex | N | Whether to disable indexing of state entities. Use this setting if you encounter Firestore index size limitations. Defaults to `"false"` | `"true"`
+| type | N | **DEPRECATED** The credentials type | `"serviceaccount"`

+## GCP Credentials
+
+Since the GCP Firestore component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.

 ## Setup GCP Firestore

@@ -74,7 +82,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
 {{% codetab %}}
 You can use the GCP Datastore emulator to run locally using the instructions [here](https://cloud.google.com/datastore/docs/tools/datastore-emulator).

-You can then interact with the server using `localhost:8081`.
+You can then interact with the server using `http://localhost:8432`.
 {{% /codetab %}}

 {{% codetab %}}

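If you prefer to start the emulator directly from the gcloud CLI, a sketch (assuming the Google Cloud SDK with its Datastore emulator component installed, and the port used in the component example above) is:

```bash
# Run the Datastore emulator locally on the port the Firestore component example points at
gcloud beta emulators datastore start --host-port localhost:8432
```
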
@@ -8,7 +8,7 @@
   output: true
 - component: AWS S3
   link: s3
-  state: Alpha
+  state: Stable
   version: v1
   since: "1.0"
   features:

@@ -1,6 +1,6 @@
 - component: GCP Pub/Sub
   link: setup-gcp-pubsub
-  state: Alpha
+  state: Stable
   version: v1
   since: "1.0"
   features:

@@ -5,7 +5,7 @@
   since: "1.10"
   features:
     crud: true
-    transactions: false
+    transactions: true
     etag: true
     ttl: true
     query: false

@@ -1,6 +1,6 @@
 - component: GCP Firestore
   link: setup-firestore
-  state: Alpha
+  state: Stable
   version: v1
   since: "1.0"
   features: