Merge branch 'v1.15' into issue_4374-2

This commit is contained in:
Mark Fussell 2024-12-18 21:33:29 -08:00 committed by GitHub
commit 97f5b15670
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
23 changed files with 330 additions and 94 deletions

View File

@ -13,7 +13,7 @@ jobs:
validate:
runs-on: ubuntu-latest
env:
PYTHON_VER: 3.7
PYTHON_VER: 3.12
steps:
- uses: actions/checkout@v2
- name: Check Microsoft URLs do not pin localized versions
@ -27,7 +27,7 @@ jobs:
exit 1
fi
- name: Set up Python ${{ env.PYTHON_VER }}
uses: actions/setup-python@v2
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VER }}
- name: Install dependencies

View File

@ -13,7 +13,9 @@ The Placement service Docker container is started automatically as part of [`dap
## Kubernetes mode
The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Placement in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
## Placement tables

View File

@ -11,13 +11,21 @@ The diagram below shows how the Scheduler service is used via the jobs API when
<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
## Actor reminders
Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
When you deploy Dapr v1.15, any _existing_ actor reminders are migrated from the Placement service to the Scheduler service as a one-time operation for each actor type. You can prevent this migration by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
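For example, a minimal Configuration sketch that opts an actor application out of Scheduler reminders (the resource name is illustrative):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myactorapp-config   # illustrative name
spec:
  features:
    - name: SchedulerReminders
      enabled: false        # keep reminders on the Placement service
```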
## Self-hosted mode
The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
## Kubernetes mode
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
## Related links

View File

@ -107,6 +107,10 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
## Actor reminders
{{% alert title="Note" color="primary" %}}
In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}).
{{% /alert %}}
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers, but unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr as shown below, or via the Dapr SDK.
@ -148,7 +152,9 @@ If an invocation of the method fails, the timer is not removed. Timers are only
## Reminder data serialization format
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for reminders data which, depending on throughput and size of the payload, can result in significant performance improvements, giving developers a higher throughput and lower latency. Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for internal workflow reminder data via both the Placement and Scheduler services. Depending on throughput and size of the payload, this can result in significant performance improvements, giving developers a higher throughput and lower latency.
Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
{{% alert title="Note" color="primary" %}}
Protobuf serialization will become the default format in Dapr 1.14

View File

@ -59,10 +59,6 @@ The jobs API provides several features to make it easy for you to schedule jobs.
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 scheduler service instance.
### Actor reminders
Actors have actor reminders, but the Placement service implementation presents some scalability limitations. You can make reminders more scalable by using [`SchedulerReminders`]({{< ref support-preview-features.md >}}). This is set in the configuration for your actor application, as shown in the sketch below.
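A minimal sketch of that configuration, assuming an illustrative resource name; the `SchedulerReminders` feature is enabled in the Configuration applied to the actor application:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: jobsapp-config      # illustrative name
spec:
  features:
    - name: SchedulerReminders
      enabled: true         # run actor reminders on the Scheduler service
```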
## Try out the jobs API
You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-and-handle-triggered-jobs.md >}}).

View File

@ -8,24 +8,70 @@ aliases:
- /developing-applications/integrations/authenticating/authenticating-aws/
---
All Dapr components using various AWS services (DynamoDB, SQS, S3, etc) use a standardized set of attributes for configuration via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
Dapr components leveraging AWS services (for example, DynamoDB, SQS, S3) utilize standardized configuration attributes via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
Since you can configure the AWS SDK using the default provider chain, all of the following attributes are optional. Test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.
You can configure authentication using the AWS SDK's default provider chain or one of the predefined AWS authentication profiles outlined below. Verify your component configuration by testing it and inspecting the Dapr runtime logs to confirm proper initialization.
| Attribute | Description |
| --------- | ----------- |
| `region` | Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example), this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec. |
| `endpoint` | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). |
| `accessKey` | AWS Access key id. |
| `secretKey` | AWS Secret access key. Use together with `accessKey` to explicitly specify credentials. |
| `sessionToken` | AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required. |
### Terminology
- **ARN (Amazon Resource Name):** A unique identifier used to specify AWS resources. Format: `arn:partition:service:region:account-id:resource`. Example: `arn:aws:iam::123456789012:role/example-role`.
- **IAM (Identity and Access Management):** AWS's service for managing access to AWS resources securely.
### Authentication Profiles
#### Access Key ID and Secret Access Key
Use static Access Key and Secret Key credentials, either through component metadata fields or via [default AWS configuration](https://docs.aws.amazon.com/sdkref/latest/guide/creds-config-files.html).
{{% alert title="Important" color="warning" %}}
You **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using:
- When running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes)
- If using a node/pod that has already been attached to an IAM policy defining access to AWS resources
Prefer loading credentials via the default AWS configuration in scenarios such as:
- Running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes).
- Using nodes or pods attached to IAM policies that define AWS resource access.
{{% /alert %}}
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `region` | Y | AWS region to connect to. | "us-east-1" |
| `accessKey` | N | AWS Access key id. Will be required in Dapr v1.17. | "AKIAIOSFODNN7EXAMPLE" |
| `secretKey` | N | AWS Secret access key, used alongside `accessKey`. Will be required in Dapr v1.17. | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
| `sessionToken` | N | AWS Session token, used with `accessKey` and `secretKey`. Often unnecessary for IAM user keys. | |
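A minimal component sketch using static credentials, assuming the DynamoDB state store; the component name and table value are illustrative, and hard-coding keys is shown only for brevity (prefer a secret store):
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: aws-dynamodb-state      # illustrative name
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
    - name: table
      value: "mytable"          # illustrative table name
    - name: region
      value: "us-east-1"
    - name: accessKey
      value: "AKIAIOSFODNN7EXAMPLE"
    - name: secretKey
      value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```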
#### Assume IAM Role
This profile allows Dapr to assume a specific IAM Role. Typically used when the Dapr sidecar runs on EKS or nodes/pods linked to IAM policies. Currently supported by Kafka and PostgreSQL components.
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `region` | Y | AWS region to connect to. | "us-east-1" |
| `assumeRoleArn` | N | ARN of the IAM role with AWS resource access. Will be required in Dapr v1.17. | "arn:aws:iam::123456789:role/mskRole" |
| `sessionName` | N | Session name for role assumption. Default is `"DaprDefaultSession"`. | "MyAppSession" |
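A minimal sketch of this profile on the Kafka pub/sub component (one of the components noted above); broker details are omitted and the values are illustrative:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub            # illustrative name
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    # brokers, consumerGroup, and other Kafka fields omitted for brevity
    - name: authType
      value: "awsiam"
    - name: region
      value: "us-east-1"
    - name: assumeRoleArn
      value: "arn:aws:iam::123456789:role/mskRole"
    - name: sessionName
      value: "MyAppSession"
```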
#### Credentials from Environment Variables
Authenticate using [environment variables](https://docs.aws.amazon.com/sdkref/latest/guide/environment-variables.html). This is especially useful for Dapr in self-hosted mode where sidecar injectors don't configure environment variables.
There are no metadata fields required for this authentication profile.
#### IAM Roles Anywhere
[IAM Roles Anywhere](https://aws.amazon.com/iam/roles-anywhere/) extends IAM role-based authentication to external workloads. It eliminates the need for long-term credentials by using cryptographically signed certificates, anchored in a trust relationship using Dapr PKI. Dapr SPIFFE identity X.509 certificates are used to authenticate to AWS services, and Dapr handles credential rotation at half the session lifespan.
To configure this authentication profile:
1. Create a Trust Anchor in the trusting AWS account using the Dapr certificate bundle as an `External certificate bundle`.
2. Create an IAM role with the resource permissions policy necessary, as well as a trust entity for the Roles Anywhere AWS service. Here, you specify SPIFFE identities allowed.
3. Create an IAM Profile under the Roles Anywhere service, linking the IAM Role.
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `trustAnchorArn` | Y | ARN of the Trust Anchor in the AWS account granting trust to the Dapr Certificate Authority. | arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901 |
| `trustProfileArn` | Y | ARN of the AWS IAM Profile in the trusting AWS account. | arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901 |
| `assumeRoleArn` | Y | ARN of the AWS IAM role to assume in the trusting AWS account. | arn:aws:iam::012345678910:role/exampleIAMRoleName |
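A minimal component sketch for this profile; the component name and type are illustrative (check your component's documentation to confirm it supports IAM Roles Anywhere), and the ARNs are the example values from the table above:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: aws-snssqs-pubsub       # illustrative name
spec:
  type: pubsub.aws.snssqs       # illustrative component type
  version: v1
  metadata:
    - name: trustAnchorArn
      value: "arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901"
    - name: trustProfileArn
      value: "arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901"
    - name: assumeRoleArn
      value: "arn:aws:iam::012345678910:role/exampleIAMRoleName"
```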
### Additional Fields
Some AWS components include additional optional fields:
| Attribute | Required | Description | Example |
| --------- | ----------- | ----------- | ----------- |
| `endpoint` | N | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). | |
Furthermore, non-native AWS components such as Kafka and PostgreSQL that support AWS authentication profiles have metadata fields to trigger the AWS authentication logic. Be sure to check specific component documentation.
## Alternatives to explicitly specifying credentials in component manifest files
In production scenarios, it is recommended to use a solution such as:

View File

@ -8,14 +8,14 @@ description: "Learn how to control how many requests and events can invoke your
Typically, in distributed computing, you may only want to allow for a given number of requests to execute concurrently. Using Dapr's `app-max-concurrency`, you can control how many requests and events can invoke your application simultaneously.
Default `app-max-concurrency` is set to `-1`, meaning no concurrency.
Default `app-max-concurrency` is set to `-1`, meaning no concurrency limit is enforced.
## Different approaches
While this guide focuses on `app-max-concurrency`, you can also limit request rate per second using the **`middleware.http.ratelimit`** middleware. However, it's important to understand the difference between the two approaches:
- `middleware.http.ratelimit`: Time bound and limits the number of requests per second
- `app-max-concurrency`: Specifies the number of concurrent requests (and events) at any point of time.
- `app-max-concurrency`: Specifies the max number of concurrent requests (and events) at any point in time.
See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}) for more information about that approach.
@ -46,7 +46,7 @@ To set concurrency limits with the Dapr CLI for running on your local dev machin
dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py
```
The above example effectively turns your app into a single concurrent service.
The above example effectively turns your app into a sequential processing service.
{{% /codetab %}}

View File

@ -231,6 +231,19 @@ You can install Dapr on Kubernetes using a Helm v3 chart.
--wait
```
To install in **high availability** mode and scale select services independently of the global setting:
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set global.ha.enabled=false \
--set dapr_scheduler.ha=true \
--set dapr_placement.ha=true \
--wait
```
See [Guidelines for production ready deployments on Kubernetes]({{< ref kubernetes-production.md >}}) for more information on installing and upgrading Dapr using Helm.
### (optional) Install the Dapr dashboard as part of the control plane

View File

@ -172,8 +172,8 @@ helm upgrade --install dapr dapr/dapr \
## Ephemeral Storage
Scheduler can be optionally made to use Ephemeral storage, which is in-memory storage which is **not** resilient to restarts, i.e. all Job data will be lost after a Scheduler restart.
This is useful for deployments where storage is not available or required, or for testing purposes.
When running in non-HA mode, the Scheduler can optionally be made to use ephemeral storage, which is in-memory storage that is **not** resilient to restarts. That is, all job data is lost after a Scheduler restart.
This is useful in non-production deployments or for testing where storage is not available or required.
{{% alert title="Note" color="primary" %}}
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated without the persistent volume.

View File

@ -95,6 +95,25 @@ For a new Dapr deployment, HA mode can be set with both:
For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}).
### Individual service HA Helm configuration
You can configure HA mode via Helm across all services by setting the `global.ha.enabled` flag to `true`. By default, `--set global.ha.enabled=true` applies to all services and cannot be overridden per service, so you cannot run either the Placement or Scheduler service as a single instance while the rest of the control plane is in HA mode.
> **Note:** HA for scheduler and placement services is not the default setting.
To scale scheduler and placement to three instances independently of the `global.ha.enabled` flag, set `global.ha.enabled` to `false` and `dapr_scheduler.ha` and `dapr_placement.ha` to `true`. For example:
```bash
helm upgrade --install dapr dapr/dapr \
--version={{% dapr-latest-version short="true" %}} \
--namespace dapr-system \
--create-namespace \
--set global.ha.enabled=false \
--set dapr_scheduler.ha=true \
--set dapr_placement.ha=true \
--wait
```
## Setting cluster critical priority class name for control plane services
In some scenarios, nodes may have memory and/or cpu pressure and the Dapr control plane pods might get selected

View File

@ -70,6 +70,38 @@ spec:
enabled: false
```
## Configuring metrics for error codes
You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. As described in the [Dapr development docs](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md), a new metric called `error_code_total` is recorded, which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories.
Example configuration:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: tracing
namespace: default
spec:
metrics:
enabled: true
recordErrorCodes: true
```
Example metric:
```json
{
"app_id": "publisher-app",
"category": "state",
"dapr_io_enabled": "true",
"error_code": "ERR_STATE_STORE_NOT_CONFIGURED",
"instance": "10.244.1.64:9090",
"job": "kubernetes-service-endpoints",
"namespace": "my-app",
"node": "my-node",
"service": "publisher-app-dapr"
}
```
## Optimizing HTTP metrics reporting with path matching
When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.

View File

@ -96,7 +96,7 @@ spec:
policy: constant
duration: 5s
maxRetries: 3
matches:
matching:
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
```

View File

@ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 |
| **Scheduler Actor Reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, Scheduler actor reminders (actor reminders stored in the Scheduler control plane service as opposed to the Placement control plane service actor reminder system) are enabled through a preview feature and needs a feature flag. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 |
| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "scheduler.md#actor-reminders" >}}) | v1.14 |

View File

@ -20,7 +20,7 @@ This endpoint lets you encrypt a value provided as a byte array using a specifie
### HTTP Request
```
PUT http://localhost:<daprPort>/v1.0/crypto/<crypto-store-name>/encrypt
PUT http://localhost:<daprPort>/v1.0-alpha1/crypto/<crypto-store-name>/encrypt
```
#### URL Parameters
@ -59,7 +59,7 @@ returns an array of bytes with the encrypted payload.
### Examples
```shell
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/encrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "dapr-key-wrap-algorithm: aes-gcm" \
@ -81,7 +81,7 @@ This endpoint lets you decrypt a value provided as a byte array using a specifie
#### HTTP Request
```
PUT http://localhost:3500/v1.0/crypto/<crypto-store-name>/decrypt
PUT http://localhost:3500/v1.0-alpha1/crypto/<crypto-store-name>/decrypt
```
#### URL Parameters
@ -116,7 +116,7 @@ returns an array of bytes representing the decrypted payload.
### Examples
```bash
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/decrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "Content-Type: application/octet-stream" \

View File

@ -7,6 +7,7 @@ weight: 1400
---
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, for example:
```
{
"errorCode": "ERR_STATE_GET",
@ -14,36 +15,142 @@ For http calls made to Dapr runtime, when an error is encountered, an error json
}
```
Following table lists the error codes returned by Dapr runtime:
The following tables list the error codes returned by Dapr runtime:
| Error Code | Description |
|-----------------------------------|-------------|
| ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means that actor is now hosted in some other service replica.
| ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance.
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor.
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor.
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor.
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor.
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor.
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor.
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor.
| ERR_ACTOR_STATE_GET | Error getting the state for an actor.
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally.
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime.
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message.
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls.
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope.
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found.
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured.
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support.
| ERR_STATE_GET | Error getting a state for state store.
| ERR_STATE_DELETE | Error deleting a state from state store.
| ERR_STATE_SAVE | Error saving a state in state store.
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding.
| ERR_MALFORMED_REQUEST | Error with a malformed request.
| ERR_DIRECT_INVOKE | Error in direct invocation.
| ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body.
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured.
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found.
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready.
| ERR_METADATA_GET | Error parsing the Metadata information.
### Actors API
| Error Code | Description |
| -------------------------------- | ------------------------------------------ |
| ERR_ACTOR_INSTANCE_MISSING | Error when an actor instance is missing. |
| ERR_ACTOR_RUNTIME_NOT_FOUND      | Error getting the actor runtime.            |
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. |
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. |
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. |
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. |
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. |
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. |
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. |
| ERR_ACTOR_STATE_GET | Error getting the state for an actor. |
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. |
| ERR_ACTOR_REMINDER_NON_HOSTED | Error setting reminder for an actor. |
### Workflows API
| Error Code | Description |
| -------------------------------- | ----------------------------------------------------------- |
| ERR_GET_WORKFLOW | Error getting workflow. |
| ERR_START_WORKFLOW | Error starting the workflow. |
| ERR_PAUSE_WORKFLOW | Error pausing the workflow. |
| ERR_RESUME_WORKFLOW | Error resuming the workflow. |
| ERR_TERMINATE_WORKFLOW | Error terminating the workflow. |
| ERR_PURGE_WORKFLOW | Error purging workflow. |
| ERR_RAISE_EVENT_WORKFLOW | Error raising an event within the workflow. |
| ERR_WORKFLOW_COMPONENT_MISSING | Error when a workflow component is missing a configuration. |
| ERR_WORKFLOW_COMPONENT_NOT_FOUND | Error when a workflow component is not found. |
| ERR_WORKFLOW_EVENT_NAME_MISSING | Error when the event name for a workflow is missing. |
| ERR_WORKFLOW_NAME_MISSING | Error when the workflow name is missing. |
| ERR_INSTANCE_ID_INVALID | Error invalid workflow instance ID provided. |
| ERR_INSTANCE_ID_NOT_FOUND | Error workflow instance ID not found. |
| ERR_INSTANCE_ID_PROVIDED_MISSING | Error workflow instance ID was not provided.                 |
| ERR_INSTANCE_ID_TOO_LONG | Error workflow instance ID exceeds allowable length. |
### State Management API
| Error Code | Description |
| ------------------------------------- | ------------------------------------------------------------------------- |
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. |
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. |
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. |
| ERR_STATE_GET | Error getting a state for state store. |
| ERR_STATE_DELETE | Error deleting a state from state store. |
| ERR_STATE_SAVE | Error saving a state in state store. |
| ERR_STATE_TRANSACTION | Error encountered during state transaction. |
| ERR_STATE_BULK_GET | Error performing bulk retrieval of state entries. |
| ERR_STATE_QUERY | Error querying the state store. |
| ERR_STATE_STORE_NOT_CONFIGURED | Error state store is not configured. |
| ERR_STATE_STORE_NOT_SUPPORTED | Error state store is not supported. |
| ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | Error exceeded maximum allowable transactions. |
### Configuration API
| Error Code | Description |
| -------------------------------------- | -------------------------------------------- |
| ERR_CONFIGURATION_GET | Error retrieving configuration. |
| ERR_CONFIGURATION_STORE_NOT_CONFIGURED | Error configuration store is not configured. |
| ERR_CONFIGURATION_STORE_NOT_FOUND | Error configuration store not found. |
| ERR_CONFIGURATION_SUBSCRIBE | Error subscribing to a configuration. |
| ERR_CONFIGURATION_UNSUBSCRIBE | Error unsubscribing from a configuration. |
### Crypto API
| Error Code | Description |
| ----------------------------------- | ------------------------------------------ |
| ERR_CRYPTO | General crypto building block error. |
| ERR_CRYPTO_KEY | Error related to a crypto key. |
| ERR_CRYPTO_PROVIDER_NOT_FOUND | Error specified crypto provider not found. |
| ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | Error no crypto providers configured. |
### Secrets API
| Error Code | Description |
| -------------------------------- | ---------------------------------------------------- |
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. |
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. |
| ERR_SECRET_GET | Error retrieving the specified secret. |
| ERR_PERMISSION_DENIED | Error access denied due to insufficient permissions. |
### Pub/Sub API
| Error Code | Description |
| --------------------------- | -------------------------------------------------------- |
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. |
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. |
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. |
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. |
| ERR_PUBSUB_EMPTY            | Error Pub/Sub name is empty.                              |
| ERR_PUBSUB_NOT_CONFIGURED | Error Pub/Sub component is not configured. |
| ERR_PUBSUB_REQUEST_METADATA | Error with metadata in Pub/Sub request. |
| ERR_PUBSUB_EVENTS_SER | Error serializing Pub/Sub events. |
| ERR_PUBLISH_OUTBOX | Error publishing message to the outbox. |
| ERR_TOPIC_NAME_EMPTY | Error topic name for Pub/Sub message is empty. |
### Conversation API
| Error Code | Description |
| ------------------------------- | ----------------------------------------------- |
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. |
| ERR_DIRECT_INVOKE | Error in direct invocation. |
| ERR_CONVERSATION_INVALID_PARMS | Error invalid parameters for conversation. |
| ERR_CONVERSATION_INVOKE | Error invoking the conversation. |
| ERR_CONVERSATION_MISSING_INPUTS | Error missing required inputs for conversation. |
| ERR_CONVERSATION_NOT_FOUND | Error conversation not found. |
### Distributed Lock API
| Error Code | Description |
| ----------------------------- | ----------------------------------- |
| ERR_TRY_LOCK | Error attempting to acquire a lock. |
| ERR_UNLOCK | Error attempting to release a lock. |
| ERR_LOCK_STORE_NOT_CONFIGURED | Error lock store is not configured. |
| ERR_LOCK_STORE_NOT_FOUND | Error lock store not found. |
### Healthz
| Error Code | Description |
| ----------------------------- | --------------------------------------------------------------- |
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. |
| ERR_HEALTH_APPID_NOT_MATCH | Error the app-id does not match expected value in health check. |
| ERR_OUTBOUND_HEALTH_NOT_READY | Error outbound connection health is not ready. |
### Common
| Error Code | Description |
| -------------------------- | ------------------------------------------------ |
| ERR_API_UNIMPLEMENTED | Error API is not implemented. |
| ERR_APP_CHANNEL_NIL | Error application channel is nil. |
| ERR_BAD_REQUEST | Error client request is badly formed or invalid. |
| ERR_BODY_READ | Error reading body. |
| ERR_INTERNAL | Internal server error encountered. |
| ERR_MALFORMED_REQUEST | Error with a malformed request. |
| ERR_MALFORMED_REQUEST_DATA | Error request data is malformed. |
| ERR_MALFORMED_RESPONSE | Error response data is malformed. |

View File

@ -64,10 +64,10 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
### Other metadata options

View File

@ -90,10 +90,10 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
### Other metadata options

View File

@ -104,12 +104,12 @@ spec:
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` |
| awsRegion | N | The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
| awsAccessKey | N | AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"`
| awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
| awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionName' instead. Represents the session name for assuming a role. | `"DaprDefaultSession"`
| schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` |
| schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` |
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` |
@ -332,7 +332,7 @@ spec:
Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses AWS SDK to generate auth tokens to authenticate.
{{% alert title="Note" color="primary" %}}
The only required metadata field is `awsRegion`. If no `awsAccessKey` and `awsSecretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
The only required metadata field is `region`. If no `accessKey` and `secretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
{{% /alert %}}
```yaml
@ -352,18 +352,18 @@ spec:
value: "my-dapr-app-id"
- name: authType # Required.
value: "awsiam"
- name: awsRegion # Required.
- name: region # Required.
value: "us-west-1"
- name: awsAccessKey # Optional.
- name: accessKey # Optional.
value: <AWS_ACCESS_KEY>
- name: awsSecretKey # Optional.
- name: secretKey # Optional.
value: <AWS_SECRET_KEY>
- name: awsSessionToken # Optional.
- name: sessionToken # Optional.
value: <AWS_SESSION_KEY>
- name: awsIamRoleArn # Optional.
- name: assumeRoleArn # Optional.
value: "arn:aws:iam::123456789:role/mskRole"
- name: awsStsSessionName # Optional.
value: "MSKSASLDefaultSession"
- name: sessionName # Optional.
value: "DaprDefaultSession"
```
### Communication using TLS
@ -540,6 +540,8 @@ app.include_router(router)
```
{{% /codetab %}}
{{< /tabs >}}
## Receiving message headers with special characters
The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors.

View File

@ -68,7 +68,8 @@ spec:
# value: 5
# - name: concurrencyMode # Optional
# value: "single"
# - name: concurrencyLimit # Optional
# value: "0"
```
@ -98,6 +99,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec
| disableDeleteOnRetryLimit | N | When set to true, after retrying and failing `messageRetryLimit` times to process a message, reset the message visibility timeout so that other consumers can try processing, instead of deleting the message from SQS (the default behavior). Default: `"false"` | `"true"`, `"false"`
| assetsManagementTimeoutSeconds | N | Amount of time in seconds, for an AWS asset management operation, before it times out and is cancelled. Asset management operations are any operations performed on STS, SNS and SQS, except message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: `5` | `0.5`, `10`
| concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially (“single” message at a time), or concurrently (in “parallel”). Default: `"parallel"` | `"single"`, `"parallel"`
| concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when concurrencyMode is set to `"single"`. To avoid limiting the number of concurrent workers, set this to `0`. Default: `0` | `100`
### Additional info

View File

@ -94,9 +94,9 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsRegion` | N | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
### Other metadata options

View File

@ -94,10 +94,10 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
### Other metadata options

View File

@ -32,6 +32,9 @@ spec:
duration: <REPLACE-WITH-VALUE>
maxInterval: <REPLACE-WITH-VALUE>
maxRetries: <REPLACE-WITH-VALUE>
matching:
httpStatusCodes: <REPLACE-WITH-VALUE>
gRPCStatusCodes: <REPLACE-WITH-VALUE>
circuitBreakers:
circuitBreakerName: # Replace with any unique name
maxRequests: <REPLACE-WITH-VALUE>

Binary file not shown (image changed; size before: 37 KiB, after: 145 KiB)