diff --git a/daprdocs/content/en/operations/resiliency/policies.md b/daprdocs/content/en/operations/resiliency/policies.md
index db72dd78c..99a71eaef 100644
--- a/daprdocs/content/en/operations/resiliency/policies.md
+++ b/daprdocs/content/en/operations/resiliency/policies.md
@@ -35,7 +35,13 @@ If you don't specify a timeout value, the policy does not enforce a time and def
## Retries
-With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable:
+With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy.
+
+{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
+Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages.
+{{% /alert %}}
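+
+For example, a resiliency policy that explicitly governs inbound retries for messages delivered from a pub/sub component might look like the following sketch, where the component name `my-pubsub` and the policy name `pubsubRetry` are illustrative:
+
+```yaml
+spec:
+  policies:
+    retries:
+      pubsubRetry:
+        policy: constant
+        duration: 5s
+        maxRetries: 3
+  targets:
+    components:
+      my-pubsub:
+        inbound:
+          retry: pubsubRetry
+```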
+
+The following retry options are configurable:
| Retry option | Description |
| ------------ | ----------- |
@@ -43,6 +49,15 @@ With `retries`, you can define a retry strategy for failed operations, including
| `duration` | Determines the time interval between retries. Only applies to the `constant` policy.
Valid values are of the form `200ms`, `15s`, `2m`, etc.
Defaults to `5s`.|
| `maxInterval` | Determines the maximum interval between retries to which the `exponential` back-off policy can grow.
Additional retries always occur after a duration of `maxInterval`. Defaults to `60s`. Valid values are of the form `5s`, `1m`, `1m30s`, etc |
| `maxRetries` | The maximum number of retries to attempt.
`-1` denotes an unlimited number of retries, while `0` means the request will not be retried (essentially behaving as if the retry policy were not set).
Defaults to `-1`. |
+| `matching.httpStatusCodes` | Optional: a comma-separated string of HTTP status codes or code ranges to retry. Status codes not listed are not retried.<br>Valid values: 100-599, [Reference](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)<br>Format: `<code>` or range `<start>-<end>`<br>Example: "429,501-503"<br>Default: empty string `""` or field is not set. Retries on all HTTP errors. |
+| `matching.gRPCStatusCodes` | Optional: a comma-separated string of gRPC status codes or code ranges to retry. Status codes not listed are not retried.<br>Valid values: 0-16, [Reference](https://grpc.io/docs/guides/status-codes/)<br>Format: `<code>` or range `<start>-<end>`<br>Example: "1,8-11,13"<br>Default: empty string `""` or field is not set. Retries on all gRPC errors. |
+
+
+{{% alert title="httpStatusCodes and gRPCStatusCodes format" color="warning" %}}
+The field values should follow the format specified in the field descriptions above, or in "Example 2" below.
+An incorrectly formatted value produces an error log ("Could not read resiliency policy"), but the `daprd` startup sequence still proceeds.
+{{% /alert %}}
+
The exponential back-off window uses the following formula:
@@ -71,7 +86,20 @@ spec:
maxRetries: -1 # Retry indefinitely
```
+Example 2:
+```yaml
+spec:
+ policies:
+ retries:
+ retry5xxOnly:
+ policy: constant
+ duration: 5s
+ maxRetries: 3
+ matching:
+ httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
+      gRPCStatusCodes: "1-4,8-11,13,14" # retry the gRPC status codes in these ranges and the listed individual codes. All others are not retried.
+```
## Circuit Breakers
@@ -82,7 +110,7 @@ Circuit Breaker (CB) policies are used when other applications/services/componen
| `maxRequests` | The maximum number of requests allowed to pass through when the CB is half-open (recovering from failure). Defaults to `1`. |
| `interval` | The cyclical period of time used by the CB to clear its internal counts. If set to 0 seconds, this never clears. Defaults to `0s`. |
| `timeout` | The period of the open state (directly after failure) until the CB switches to half-open. Defaults to `60s`. |
-| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. |
+| `trip` | A [Common Expression Language (CEL)](https://github.com/google/cel-spec) statement that is evaluated by the CB. When the statement evaluates to true, the CB trips and becomes open. Defaults to `consecutiveFailures > 5`. Other variables you can use in the statement are `requests` and `totalFailures`, where `requests` represents the number of either successful or failed calls before the circuit opens and `totalFailures` represents the total (not necessarily consecutive) number of failed attempts before the circuit opens. Examples: `requests > 5` and `totalFailures > 3`.|
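+
+For instance, a circuit breaker that trips on the total, rather than consecutive, number of failures might look like the following sketch; the policy name and thresholds are illustrative:
+
+```yaml
+spec:
+  policies:
+    circuitBreakers:
+      totalFailuresCB:
+        maxRequests: 1
+        interval: 60s
+        timeout: 30s
+        trip: totalFailures > 3
+```
+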
Example:
diff --git a/daprdocs/content/en/operations/support/alpha-beta-apis.md b/daprdocs/content/en/operations/support/alpha-beta-apis.md
index 66d9470ff..7516bfc47 100644
--- a/daprdocs/content/en/operations/support/alpha-beta-apis.md
+++ b/daprdocs/content/en/operations/support/alpha-beta-apis.md
@@ -15,13 +15,13 @@ description: "List of current alpha and beta APIs"
| Bulk Publish | [Bulk publish proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L59) | `v1.0-alpha1/publish/bulk` | The bulk publish API allows you to publish multiple messages to a topic in a single request. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Bulk Subscribe | [Bulk subscribe proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/appcallback.proto#L57) | N/A | The bulk subscribe application callback receives multiple messages from a topic in a single call. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
| Cryptography | [Crypto proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L118) | `v1.0-alpha1/crypto` | The cryptography API enables you to perform **high level** cryptography operations for encrypting and decrypting messages. | [Cryptography API]({{< ref "cryptography-overview.md" >}}) | v1.11 |
-| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L198-L204) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
+| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L212-219) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
+| Conversation | [Conversation proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L221-222) | `v1.0-alpha1/conversation` | The conversation API enables you to converse with different large language models (LLMs). | [Conversation API]({{< ref "conversation-overview.md" >}}) | v1.15 |
+
## Beta APIs
-| Building block/API | gRPC | HTTP | Description | Documentation | Version introduced |
-| ------------------ | ---- | ---- | ----------- | ------------- | ------------------ |
-| Workflow | [Workflow proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L151) | `/v1.0-beta1/workflow` | The workflow API enables you to define long running, persistent processes or data flows. | [Workflow API]({{< ref "workflow-overview.md" >}}) | v1.10 |
+No current beta APIs.
## Related links
diff --git a/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md b/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md
index 50dc764c8..995f62287 100644
--- a/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md
+++ b/daprdocs/content/en/operations/support/breaking-changes-and-deprecations.md
@@ -68,6 +68,7 @@ After announcing a future breaking change, the change will happen in 2 releases
| Hazelcast PubSub Component | 1.9.0 | 1.11.0 |
| Twitter Binding Component | 1.10.0 | 1.11.0 |
| NATS Streaming PubSub Component | 1.11.0 | 1.13.0 |
+| Workflows API Alpha1 `/v1.0-alpha1/workflows` being deprecated in favor of Workflow Client | 1.15.0 | 1.17.0 |
## Related links
diff --git a/daprdocs/content/en/operations/support/support-preview-features.md b/daprdocs/content/en/operations/support/support-preview-features.md
index 943b35d0e..07ae1b9a6 100644
--- a/daprdocs/content/en/operations/support/support-preview-features.md
+++ b/daprdocs/content/en/operations/support/support-preview-features.md
@@ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 |
-| **Job actor reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, job actor reminders (used for scheduling actor reminders) are enabled through a preview feature and needs a feature flag. | `SchedulerReminders`| [Job actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 |
+| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "scheduler.md#actor-reminders" >}}) | v1.14 |
\ No newline at end of file
diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md
index 76a1a5730..fbba03b5f 100644
--- a/daprdocs/content/en/operations/support/support-release-policy.md
+++ b/daprdocs/content/en/operations/support/support-release-policy.md
@@ -24,7 +24,7 @@ A supported release means:
From the 1.8.0 release onwards three (3) versions of Dapr are supported; the current and previous two (2) versions. Typically these are `MINOR`release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr you may have to do intermediate upgrades to get to a supported version.
-There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading.
+There will be at least 13 weeks (3 months) between major.minor version releases, giving users at least a 9-month rolling window for upgrading from a non-supported version. For more details on the release process, read [release cycle and cadence](https://github.com/dapr/community/blob/master/release-process.md).
Patch support is for supported versions (current and previous).
@@ -45,6 +45,10 @@ The table below shows the versions of Dapr releases that have been tested togeth
| Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes |
|--------------------|:--------:|:--------|---------|---------|---------|------------|
+| September 16th 2024 | 1.14.4 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) |
+| September 13th 2024 | 1.14.3 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) |
+| September 6th 2024 | 1.14.2 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) |
+| August 14th 2024 | 1.14.1 | 1.14.1 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) |
| August 14th 2024 | 1.14.0 | 1.14.0 | Java 1.12.0 Go 1.11.0 PHP 1.2.0 Python 1.14.0 .NET 1.14.0 JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) |
| May 29th 2024 | 1.13.4 | 1.13.0 | Java 1.11.0 Go 1.10.0 PHP 1.2.0 Python 1.13.0 .NET 1.13.0 JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) |
| May 21st 2024 | 1.13.3 | 1.13.0 | Java 1.11.0 Go 1.10.0 PHP 1.2.0 Python 1.13.0 .NET 1.13.0 JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) |
@@ -134,13 +138,12 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h
| | 1.8.6 | 1.9.6 |
| | 1.9.6 | 1.10.7 |
| 1.8.0 to 1.8.6 | N/A | 1.9.6 |
-| 1.9.0 | N/A | 1.9.6 |
-| 1.10.0 | N/A | 1.10.8 |
-| 1.11.0 | N/A | 1.11.4 |
-| 1.12.0 | N/A | 1.12.4 |
-| 1.12.0 to 1.13.0 | N/A | 1.13.4 |
-| 1.13.0 | N/A | 1.13.4 |
-| 1.13.0 to 1.14.0 | N/A | 1.14.0 |
+| 1.9.0 to 1.9.6 | N/A | 1.10.8 |
+| 1.10.0 to 1.10.8 | N/A | 1.11.4 |
+| 1.11.0 to 1.11.4 | N/A | 1.12.4 |
+| 1.12.0 to 1.12.4 | N/A | 1.13.5 |
+| 1.13.0 to 1.13.5 | N/A | 1.14.0 |
+| 1.14.0 to 1.14.2 | N/A | 1.14.2 |
## Upgrade on Hosting platforms
diff --git a/daprdocs/content/en/operations/support/support-security-issues.md b/daprdocs/content/en/operations/support/support-security-issues.md
index 1ae3fce27..6e7b24a2d 100644
--- a/daprdocs/content/en/operations/support/support-security-issues.md
+++ b/daprdocs/content/en/operations/support/support-security-issues.md
@@ -52,7 +52,7 @@ The people who should have access to read your security report are listed in [`m
code which allows the issue to be reproduced. Explain why you believe this
to be a security issue in Dapr.
2. Put that information into an email. Use a descriptive title.
-3. Send the email to [Dapr Maintainers (dapr@dapr.io)](mailto:dapr@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE)
+3. Send an email to [Security (security@dapr.io)](mailto:security@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE)
## Response
diff --git a/daprdocs/content/en/reference/api/conversation_api.md b/daprdocs/content/en/reference/api/conversation_api.md
new file mode 100644
index 000000000..366625006
--- /dev/null
+++ b/daprdocs/content/en/reference/api/conversation_api.md
@@ -0,0 +1,74 @@
+---
+type: docs
+title: "Conversation API reference"
+linkTitle: "Conversation API"
+description: "Detailed documentation on the conversation API"
+weight: 1400
+---
+
+{{% alert title="Alpha" color="primary" %}}
+The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
+{{% /alert %}}
+
+Dapr provides an API to interact with Large Language Models (LLMs) and enables critical performance and security functionality with features like prompt caching and PII data obfuscation.
+
+## Converse
+
+This endpoint lets you converse with LLMs.
+
+```
+POST /v1.0-alpha1/conversation/<llm-name>/converse
+```
+
+### URL parameters
+
+| Parameter | Description |
+| --------- | ----------- |
+| `llm-name` | The name of the LLM component. [See a list of all available conversation components.]({{< ref supported-conversation >}}) |
+
+### Request body
+
+| Field | Description |
+| --------- | ----------- |
+| `conversationContext` | Optional: the ID of an existing conversation, used to continue an earlier exchange. |
+| `inputs` | The list of input strings to send to the LLM. |
+| `parameters` | Optional parameters to pass to the LLM. |
+
+
+### Request content
+
+```json
+REQUEST = {
+ "inputs": ["what is Dapr", "Why use Dapr"],
+  "parameters": {}
+}
+```
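+
+For example, assuming a conversation component named `openai` and the default Dapr HTTP port `3500`, the request above could be sent as follows:
+
+```bash
+curl -X POST http://localhost:3500/v1.0-alpha1/conversation/openai/converse \
+  -H "Content-Type: application/json" \
+  -d '{
+        "inputs": ["What is Dapr?"],
+        "parameters": {}
+      }'
+```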
+
+### HTTP response codes
+
+Code | Description
+---- | -----------
+`202` | Accepted
+`400` | Request was malformed
+`500` | Request formatted correctly, error in dapr code or underlying component
+
+### Response content
+
+```json
+RESPONSE = {
+  "outputs": [
+    {
+      "result": "Dapr is a distributed application runtime ...",
+      "parameters": {}
+    },
+    {
+      "result": "Dapr can help developers ...",
+      "parameters": {}
+    }
+  ]
+}
+```
+
+## Next steps
+
+[Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/api/cryptography_api.md b/daprdocs/content/en/reference/api/cryptography_api.md
index 336088f23..c0c482427 100644
--- a/daprdocs/content/en/reference/api/cryptography_api.md
+++ b/daprdocs/content/en/reference/api/cryptography_api.md
@@ -20,7 +20,7 @@ This endpoint lets you encrypt a value provided as a byte array using a specifie
### HTTP Request
```
-PUT http://localhost:/v1.0/crypto//encrypt
+PUT http://localhost:/v1.0-alpha1/crypto//encrypt
```
#### URL Parameters
@@ -59,7 +59,7 @@ returns an array of bytes with the encrypted payload.
### Examples
```shell
-curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
+curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/encrypt \
-X PUT \
-H "dapr-key-name: myCryptoKey" \
-H "dapr-key-wrap-algorithm: aes-gcm" \
@@ -81,7 +81,7 @@ This endpoint lets you decrypt a value provided as a byte array using a specifie
#### HTTP Request
```
-PUT curl http://localhost:3500/v1.0/crypto//decrypt
+PUT curl http://localhost:3500/v1.0-alpha1/crypto//decrypt
```
#### URL Parameters
@@ -116,7 +116,7 @@ returns an array of bytes representing the decrypted payload.
### Examples
```bash
-curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
+curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/decrypt \
-X PUT
-H "dapr-key-name: myCryptoKey"\
-H "Content-Type: application/octet-stream" \
diff --git a/daprdocs/content/en/reference/api/error_codes.md b/daprdocs/content/en/reference/api/error_codes.md
deleted file mode 100644
index 19d3b8cc3..000000000
--- a/daprdocs/content/en/reference/api/error_codes.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-type: docs
-title: "Error codes returned by APIs"
-linkTitle: "Error codes"
-description: "Detailed reference of the Dapr API error codes"
-weight: 1400
----
-
-For http calls made to Dapr runtime, when an error is encountered, an error json is returned in http response body. The json contains an error code and an descriptive error message, e.g.
-```
-{
- "errorCode": "ERR_STATE_GET",
- "message": "Requested state key does not exist in state store."
-}
-```
-
-Following table lists the error codes returned by Dapr runtime:
-
-| Error Code | Description |
-|-----------------------------------|-------------|
-| ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means that actor is now hosted in some other service replica.
-| ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance.
-| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor.
-| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor.
-| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor.
-| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor.
-| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor.
-| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor.
-| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor.
-| ERR_ACTOR_STATE_GET | Error getting the state for an actor.
-| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally.
-| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime.
-| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message.
-| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls.
-| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope.
-| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found.
-| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured.
-| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support.
-| ERR_STATE_GET | Error getting a state for state store.
-| ERR_STATE_DELETE | Error deleting a state from state store.
-| ERR_STATE_SAVE | Error saving a state in state store.
-| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding.
-| ERR_MALFORMED_REQUEST | Error with a malformed request.
-| ERR_DIRECT_INVOKE | Error in direct invocation.
-| ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body.
-| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured.
-| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found.
-| ERR_HEALTH_NOT_READY | Error that Dapr is not ready.
-| ERR_METADATA_GET | Error parsing the Metadata information.
diff --git a/daprdocs/content/en/reference/api/jobs_api.md b/daprdocs/content/en/reference/api/jobs_api.md
index 3a04ed1a9..454598676 100644
--- a/daprdocs/content/en/reference/api/jobs_api.md
+++ b/daprdocs/content/en/reference/api/jobs_api.md
@@ -32,7 +32,7 @@ At least one of `schedule` or `dueTime` must be provided, but they can also be p
Parameter | Description
--------- | -----------
`name` | Name of the job you're scheduling
-`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data.
+`data` | A JSON serialized value or object.
`schedule` | An optional schedule at which the job is to be run. Details of the format are below.
`dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601.
`repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration.
@@ -43,9 +43,13 @@ Parameter | Description
Systemd timer style cron accepts 6 fields:
seconds | minutes | hours | day of month | month | day of week
-0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat
+--- | --- | --- | --- | --- | ---
+0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat
+##### Example 1
"0 30 * * * *" - every hour on the half hour
+
+##### Example 2
"0 15 3 * * *" - every day at 03:15
Period string expressions:
@@ -63,13 +67,8 @@ Entry | Description | Equivalent
```json
{
- "job": {
- "data": {
- "@type": "type.googleapis.com/google.protobuf.StringValue",
- "value": "\"someData\""
- },
- "dueTime": "30s"
- }
+ "data": "some data",
+ "dueTime": "30s"
}
```
@@ -88,20 +87,14 @@ The following example curl command creates a job, naming the job `jobforjabba` a
```bash
$ curl -X POST \
http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \
- -H "Content-Type: application/json"
+ -H "Content-Type: application/json" \
-d '{
- "job": {
- "data": {
- "@type": "type.googleapis.com/google.protobuf.StringValue",
- "value": "Running spice"
- },
- "schedule": "@every 1m",
- "repeats": 5
- }
+ "data": "{\"value\":\"Running spice\"}",
+ "schedule": "@every 1m",
+ "repeats": 5
}'
```
-
## Get job data
Get a job from its name.
@@ -137,10 +130,7 @@ $ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Typ
"name": "jobforjabba",
"schedule": "@every 1m",
"repeats": 5,
- "data": {
- "@type": "type.googleapis.com/google.protobuf.StringValue",
- "value": "Running spice"
- }
+    "data": "{\"value\":\"Running spice\"}"
}
```
## Delete a job
diff --git a/daprdocs/content/en/reference/api/pubsub_api.md b/daprdocs/content/en/reference/api/pubsub_api.md
index f53677108..8fbf0f615 100644
--- a/daprdocs/content/en/reference/api/pubsub_api.md
+++ b/daprdocs/content/en/reference/api/pubsub_api.md
@@ -302,7 +302,7 @@ other | warning is logged and all messages to be retried
## Message envelope
-Dapr pub/sub adheres to version 1.0 of CloudEvents.
+Dapr pub/sub adheres to [version 1.0 of CloudEvents](https://github.com/cloudevents/spec/blob/v1.0/spec.md).
## Related links
diff --git a/daprdocs/content/en/reference/api/workflow_api.md b/daprdocs/content/en/reference/api/workflow_api.md
index 91a19d864..c9dddaa61 100644
--- a/daprdocs/content/en/reference/api/workflow_api.md
+++ b/daprdocs/content/en/reference/api/workflow_api.md
@@ -6,10 +6,6 @@ description: "Detailed documentation on the workflow API"
weight: 300
---
-{{% alert title="Note" color="primary" %}}
-Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
-{{% /alert %}}
-
Dapr provides users with the ability to interact with workflows and comes with a built-in `dapr` component.
## Start workflow request
@@ -17,7 +13,7 @@ Dapr provides users with the ability to interact with workflows and comes with a
Start a workflow instance with the given name and optionally, an instance ID.
```
-POST http://localhost:3500/v1.0-beta1/workflows///start[?instanceID=]
+POST http://localhost:3500/v1.0/workflows///start[?instanceID=]
```
Note that workflow instance IDs can only contain alphanumeric characters, underscores, and dashes.
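+
+For example, assuming the built-in `dapr` workflow component, a workflow named `OrderProcessingWorkflow`, and an illustrative instance ID and input payload, a start request might look like:
+
+```bash
+curl -X POST "http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=order12345" \
+  -H "Content-Type: application/json" \
+  -d '{"item": "paperclips", "quantity": 100}'
+```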
@@ -57,7 +53,7 @@ The API call will provide a response similar to this:
Terminate a running workflow instance with the given name and instance ID.
```
-POST http://localhost:3500/v1.0-beta1/workflows///terminate
+POST http://localhost:3500/v1.0/workflows///terminate
```
{{% alert title="Note" color="primary" %}}
@@ -91,7 +87,7 @@ This API does not return any content.
For workflow components that support subscribing to external events, such as the Dapr Workflow engine, you can use the following "raise event" API to deliver a named event to a specific workflow instance.
```
-POST http://localhost:3500/v1.0-beta1/workflows///raiseEvent/
+POST http://localhost:3500/v1.0/workflows///raiseEvent/
```
{{% alert title="Note" color="primary" %}}
@@ -124,7 +120,7 @@ None.
Pause a running workflow instance.
```
-POST http://localhost:3500/v1.0-beta1/workflows///pause
+POST http://localhost:3500/v1.0/workflows///pause
```
### URL parameters
@@ -151,7 +147,7 @@ None.
Resume a paused workflow instance.
```
-POST http://localhost:3500/v1.0-beta1/workflows///resume
+POST http://localhost:3500/v1.0/workflows///resume
```
### URL parameters
@@ -178,7 +174,7 @@ None.
Purge the workflow state from your state store with the workflow's instance ID.
```
-POST http://localhost:3500/v1.0-beta1/workflows///purge
+POST http://localhost:3500/v1.0/workflows///purge
```
{{% alert title="Note" color="primary" %}}
@@ -209,7 +205,7 @@ None.
Get information about a given workflow instance.
```
-GET http://localhost:3500/v1.0-beta1/workflows//
+GET http://localhost:3500/v1.0/workflows//
```
### URL parameters
diff --git a/daprdocs/content/en/reference/arguments-annotations-overview.md b/daprdocs/content/en/reference/arguments-annotations-overview.md
index 6a0c2f60b..72d50b01c 100644
--- a/daprdocs/content/en/reference/arguments-annotations-overview.md
+++ b/daprdocs/content/en/reference/arguments-annotations-overview.md
@@ -16,15 +16,17 @@ This table is meant to help users understand the equivalent options for running
| `--app-id` | `--app-id` | `-i` | `dapr.io/app-id` | The unique ID of the application. Used for service discovery, state encapsulation and the pub/sub consumer ID |
| `--app-port` | `--app-port` | `-p` | `dapr.io/app-port` | This parameter tells Dapr which port your application is listening on |
| `--components-path` | `--components-path` | `-d` | not supported | **Deprecated** in favor of `--resources-path` |
-| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded. |
+| `--resources-path` | `--resources-path` | `-d` | not supported | Path for components directory. If empty, components will not be loaded |
| `--config` | `--config` | `-c` | `dapr.io/config` | Tells Dapr which Configuration resource to use |
| `--control-plane-address` | not supported | | not supported | Address for a Dapr control plane |
-| `--dapr-grpc-port` | `--dapr-grpc-port` | | not supported | gRPC port for the Dapr API to listen on (default "50001") |
-| `--dapr-http-port` | `--dapr-http-port` | | not supported | The HTTP port for the Dapr API |
-| `--dapr-http-max-request-size` | --dapr-http-max-request-size | | `dapr.io/http-max-request-size` | Increasing max size of request body http and grpc servers parameter in MB to handle uploading of big files. Default is `4` MB |
-| `--dapr-http-read-buffer-size` | --dapr-http-read-buffer-size | | `dapr.io/http-read-buffer-size` | Increasing max size of http header read buffer in KB to handle when sending multi-KB headers. The default 4 KB. When sending bigger than default 4KB http headers, you should set this to a larger value, for example 16 (for 16KB) |
+| `--dapr-grpc-port` | `--dapr-grpc-port` | | `dapr.io/grpc-port` | Sets the Dapr API gRPC port (default `50001`); all cluster services must use the same port for communication |
+| `--dapr-http-port` | `--dapr-http-port` | | not supported | HTTP port for the Dapr API to listen on (default `3500`) |
+| `--dapr-http-max-request-size` | `--dapr-http-max-request-size` | | `dapr.io/http-max-request-size` | **Deprecated** in favor of `--max-body-size`. Increasing the request max body size to handle large file uploads when using the http and grpc protocols. Default is `4` MB |
+| `--max-body-size` | not supported | | `dapr.io/max-body-size` | Increasing the request max body size to handle large file uploads when using the http and grpc protocols. Set the value using size units (for example, `16Mi` for 16MB). The default is `4Mi` |
+| `--dapr-http-read-buffer-size` | `--dapr-http-read-buffer-size` | | `dapr.io/http-read-buffer-size` | **Deprecated** in favor of `--read-buffer-size`. Increasing the max size of the http header read buffer in KB to support larger header values, for example `16` to support headers up to 16KB. Default is `16` for 16KB |
+| `--read-buffer-size` | not supported | | `dapr.io/read-buffer-size` | Increasing the max size of the http header read buffer to support larger header values. Set the value using size units, for example `32Ki` to support headers up to 32KB. Default is `4` for 4KB |
| not supported | `--image` | | `dapr.io/sidecar-image` | Dapr sidecar image. Default is daprio/daprd:latest. The Dapr sidecar uses this image instead of the latest default image. Use this when building your own custom image of Dapr and or [using an alternative stable Dapr image]({{< ref "support-release-policy.md#build-variations" >}}) |
-| `--internal-grpc-port` | not supported | | not supported | gRPC port for the Dapr Internal API to listen on |
+| `--internal-grpc-port` | not supported | | `dapr.io/internal-grpc-port` | Sets the internal Dapr gRPC port (default `50002`); all cluster services must use the same port for communication |
| `--enable-metrics` | not supported | | configuration spec | Enable [prometheus metric]({{< ref prometheus >}}) (default true) |
| `--enable-mtls` | not supported | | configuration spec | Enables automatic mTLS for daprd to daprd communication channels |
| `--enable-profiling` | `--enable-profiling` | | `dapr.io/enable-profiling` | [Enable profiling]({{< ref profiling-debugging >}}) |
@@ -32,11 +34,11 @@ This table is meant to help users understand the equivalent options for running
| `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs [logs in JSON format]({{< ref logs >}}). Default is `false` |
| `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the [log level]({{< ref logs-troubleshooting >}}) for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` |
| `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | [Enables API logging]({{< ref "api-logs-troubleshooting.md#configuring-api-logging-in-kubernetes" >}}) for the Dapr sidecar |
-| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`|
+| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`. Default value: `-1`, meaning no concurrency limit. |
| `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` |
| `--mode` | not supported | | not supported | Runtime hosting option mode for Dapr, either `"standalone"` or `"kubernetes"` (default `"standalone"`). [Learn more.]({{< ref hosting >}}) |
-| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` |
-| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Scheduler server. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` |
+| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers.<br>When no annotation is set, the default value is set by the Sidecar Injector.<br>When the annotation is set and the value is a single space (`' '`), or "empty", the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar.<br>When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` |
+| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers.<br>When no annotation is set, the default value is set by the Sidecar Injector.<br>When the annotation is set and the value is a single space (`' '`), or "empty", the sidecar does not connect to Scheduler server.<br>When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` |
| `--actors-service` | not supported | | not supported | Configuration for the service that offers actor placement information. The format is `:`. For example, setting this value to `placement:127.0.0.1:50057,127.0.0.1:50058` is an alternative to using the `--placement-host-address` flag. |
| `--reminders-service` | not supported | | not supported | Configuration for the service that enables actor reminders. The format is `[:]`. Currently, the only supported value is `"default"` (which is also the default value), which uses the built-in reminders subsystem in the Dapr sidecar. |
| `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default `7777`) |
@@ -67,6 +69,7 @@ This table is meant to help users understand the equivalent options for running
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-period-seconds` | How often (in seconds) to perform the sidecar readiness probe. Read more [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `6`|
| not supported | not supported | | `dapr.io/sidecar-readiness-probe-threshold` | When the sidecar readiness probe fails, Kubernetes will try N times before giving up. In this case, the Pod will be marked Unready. Read more about `failureThreshold` [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes). Default is `3`|
| not supported | not supported | | `dapr.io/env` | List of environment variable to be injected into the sidecar. Strings consisting of key=value pairs separated by a comma.|
+| not supported | not supported | | `dapr.io/env-from-secret` | List of environment variables to be injected into the sidecar from secrets. Strings consisting of `"key=secret-name:secret-key"` pairs separated by a comma. |
| not supported | not supported | | `dapr.io/volume-mounts` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-only mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
| not supported | not supported | | `dapr.io/volume-mounts-rw` | List of [pod volumes to be mounted to the sidecar container]({{< ref "kubernetes-volume-mounts" >}}) in read-write mode. Strings consisting of `volume:path` pairs separated by a comma. Example, `"volume-1:/tmp/mount1,volume-2:/home/root/mount2"`. |
| `--disable-builtin-k8s-secret-store` | not supported | | `dapr.io/disable-builtin-k8s-secret-store` | Disables BuiltIn Kubernetes secret store. Default value is false. See [Kubernetes secret store component]({{< ref "kubernetes-secret-store.md" >}}) for details. |
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md b/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md
index 4baea225c..008d10a7e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/blobstorage.md
@@ -63,6 +63,10 @@ This component supports **output binding** with the following operations:
- `delete` : [Delete blob](#delete-blob)
- `list`: [List blobs](#list-blobs)
+The Blob storage component's **input binding** triggers and pushes events using [Azure Event Grid]({{< ref eventgrid.md >}}).
+
+Refer to the [Reacting to Blob storage events](https://learn.microsoft.com/azure/storage/blobs/storage-blob-event-overview) guide for set up instructions and more information.
+
### Create blob
To perform a create blob operation, invoke the Azure Blob Storage binding with a `POST` method and the following JSON body:
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md b/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md
index 9e66107b5..f970e8cce 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/eventgrid.md
@@ -90,6 +90,21 @@ This component supports **output binding** with the following operations:
- `create`: publishes a message on the Event Grid topic
+## Receiving events
+
+You can use the Event Grid binding to receive events from a variety of sources and actions. [Learn more about all of the available event sources and handlers that work with Event Grid.](https://learn.microsoft.com/azure/event-grid/overview)
+
+In the following table, you can find the list of Dapr components that can raise events.
+
+| Event sources | Dapr components |
+| ------------- | --------------- |
+| [Azure Blob Storage](https://learn.microsoft.com/azure/storage/blobs/) | [Azure Blob Storage binding]({{< ref blobstorage.md >}})<br>[Azure Blob Storage state store]({{< ref setup-azure-blobstorage.md >}}) |
+| [Azure Cache for Redis](https://learn.microsoft.com/azure/azure-cache-for-redis/cache-overview) | [Redis binding]({{< ref redis.md >}})<br>[Redis pub/sub]({{< ref setup-redis-pubsub.md >}}) |
+| [Azure Event Hubs](https://learn.microsoft.com/azure/event-hubs/event-hubs-about) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}})<br>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
+| [Azure IoT Hub](https://learn.microsoft.com/azure/iot-hub/iot-concepts-and-iot-hub) | [Azure Event Hubs pub/sub]({{< ref setup-azure-eventhubs.md >}})<br>[Azure Event Hubs binding]({{< ref eventhubs.md >}}) |
+| [Azure Service Bus](https://learn.microsoft.com/azure/service-bus-messaging/service-bus-messaging-overview) | [Azure Service Bus binding]({{< ref servicebusqueues.md >}})<br>[Azure Service Bus pub/sub topics]({{< ref setup-azure-servicebus-topics.md >}}) and [queues]({{< ref setup-azure-servicebus-queues.md >}}) |
+| [Azure SignalR Service](https://learn.microsoft.com/azure/azure-signalr/signalr-overview) | [SignalR binding]({{< ref signalr.md >}}) |
+
## Microsoft Entra ID credentials
The Azure Event Grid binding requires an Microsoft Entra ID application and service principal for two reasons:
@@ -142,7 +157,7 @@ Connect-MgGraph -Scopes "Application.Read.All","Application.ReadWrite.All"
> Note: if your directory does not have a Service Principal for the application "Microsoft.EventGrid", you may need to run the command `Connect-MgGraph` and sign in as an admin for the Microsoft Entra ID tenant (this is related to permissions on the Microsoft Entra ID directory, and not the Azure subscription). Otherwise, please ask your tenant's admin to sign in and run this PowerShell command: `New-MgServicePrincipal -AppId "4962773b-9cdb-44cf-a8bf-237846a00ab7"` (the UUID is a constant)
-### Testing locally
+## Testing locally
- Install [ngrok](https://ngrok.com/download)
- Run locally using a custom port, for example `9000`, for handshakes
@@ -160,7 +175,7 @@ ngrok http --host-header=localhost 9000
dapr run --app-id dotnetwebapi --app-port 5000 --dapr-http-port 3500 dotnet run
```
-### Testing on Kubernetes
+## Testing on Kubernetes
Azure Event Grid requires a valid HTTPS endpoint for custom webhooks; self-signed certificates aren't accepted. In order to enable traffic from the public internet to your app's Dapr sidecar you need an ingress controller enabled with Dapr. There's a good article on this topic: [Kubernetes NGINX ingress controller with Dapr](https://carlos.mendible.com/2020/04/05/kubernetes-nginx-ingress-controller-with-dapr/).
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md b/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
index ee005b4dd..be8536f72 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/eventhubs.md
@@ -36,6 +36,8 @@ spec:
value: "namespace"
- name: enableEntityManagement
value: "false"
+ - name: enableInOrderMessageDelivery
+ value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
@@ -71,7 +73,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `eventHub` | Y* | Input/Output | The name of the Event Hubs hub ("topic"). Required if using Microsoft Entra ID authentication or if the connection string doesn't contain an `EntityPath` value | `mytopic` |
| `connectionString` | Y* | Input/Output | Connection string for the Event Hub or the Event Hub namespace.
* Mutally exclusive with `eventHubNamespace` field.
* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | Input/Output | The Event Hub Namespace name.
* Mutally exclusive with `connectionString` field.
* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
-| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
+| `enableEntityManagement` | N | Input/Output | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true"`, `"false"`
+| `enableInOrderMessageDelivery` | N | Input/Output | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `resourceGroupName` | N | Input/Output | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Input/Output | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Input/Output | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md
index addfba98a..413e1893f 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md
@@ -63,6 +63,8 @@ spec:
value: true
- name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
value: 5m
+ - name: escapeHeaders # Optional.
+ value: false
```
## Spec metadata fields
@@ -99,6 +101,7 @@ spec:
| `consumerFetchDefault` | N | Input/Output | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` |
| `heartbeatInterval` | N | Input | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to `"3s"`. | `"5s"` |
| `sessionTimeout` | N | Input | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to `"10s"`. | `"20s"` |
+| `escapeHeaders` | N | Input | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
#### Note
The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka.
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md b/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md
index 7d8f4104b..a77814b8e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md
@@ -56,23 +56,27 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
### Authenticate using AWS IAM
Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
-The user specified in the connection string must be an AWS IAM enabled user granted the `rds_iam` database role.
+The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the `rds_iam` database role.
Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided.
The AWS authentication token will be dynamically rotated before it's expiration time with AWS.
| Field | Required | Details | Example |
|--------|:--------:|---------|---------|
-| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"`
-| `accessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"`
-| `secretKey` | Y | The secret key associated with the access key. | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"`
+| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
+| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
+| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
+| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
+| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
+| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
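+
+A binding component configured for AWS IAM authentication might look like the following sketch; the component name, host, and user are illustrative:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: my-postgres-binding
+spec:
+  type: bindings.postgresql
+  version: v1
+  metadata:
+  - name: useAWSIAM
+    value: "true"
+  - name: awsRegion # maintained for backwards compatibility; see the note on 'region' above
+    value: "us-east-1"
+  - name: connectionString
+    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
+```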
### Other metadata options
-| Field | Required | Binding support |Details | Example |
+| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|-----|---|---------|
-| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"`
-| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
-| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferrable to use `exec` or `simple_protocol`. | `"simple_protocol"`
+| `timeout` | N | Output | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
+| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` |
+| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` |
+| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferrable to use `exec` or `simple_protocol`. | `"simple_protocol"` |
### URL format
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md b/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md
index 3a9093666..4fc8dbb1b 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md
@@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `redisUsername` | N | Output | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `"username"` |
| `useEntraID` | N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this: - The `redisHost` name must be specified in the form of `"server:port"`
- TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#create-a-redis-instance" >}}) | `"true"`, `"false"` |
| `enableTLS` | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
+| `clientCert` | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` |
+| `clientKey` | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` |
| `failover` | N | Output | Property to enabled failover configuration. Needs sentinalMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| `sentinelMasterName` | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| `redeliverInterval` | N | Output | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
index 2de9b95a7..d91818a1d 100644
--- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md
@@ -44,6 +44,8 @@ spec:
value: ""
- name: insecureSSL
value: ""
+ - name: storageClass
+ value: ""
```
{{% alert title="Warning" color="warning" %}}
@@ -65,6 +67,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `encodeBase64` | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `"true"` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `"false"` | `"true"`, `"false"` |
| `disableSSL` | N | Output | Allows to connect to non `https://` endpoints. Defaults to `"false"` | `"true"`, `"false"` |
| `insecureSSL` | N | Output | When connecting to `https://` endpoints, accepts invalid or self-signed certificates. Defaults to `"false"` | `"true"`, `"false"` |
+| `storageClass` | N | Output | The desired storage class for objects during the create operation. [Valid aws storage class types can be found here](https://aws.amazon.com/s3/storage-classes/) | `STANDARD_IA` |
{{% alert title="Important" color="warning" %}}
When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using.
@@ -165,10 +168,20 @@ To perform a create operation, invoke the AWS S3 binding with a `POST` method an
```json
{
"operation": "create",
- "data": "YOUR_CONTENT"
+ "data": "YOUR_CONTENT",
+ "metadata": {
+ "storageClass": "STANDARD_IA"
+ }
}
```
+For example, you can provide a storage class when using the `create` operation with a Linux curl command:
+
+```bash
+curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA" } }' /
+http://localhost:/v1.0/bindings/
+```
+
#### Share object with a presigned URL
To presign an object with a specified time-to-live, use the `presignTTL` metadata key on a `create` request.
diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md b/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
new file mode 100644
index 000000000..a0e356e54
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-bindings/sftp.md
@@ -0,0 +1,231 @@
+---
+type: docs
+title: "SFTP binding spec"
+linkTitle: "SFTP"
+description: "Detailed documentation on the Secure File Transfer Protocol (SFTP) binding component"
+aliases:
+ - "/operations/components/setup-bindings/supported-bindings/sftp/"
+---
+
+## Component format
+
+To set up the SFTP binding, create a component of type `bindings.sftp`. See [this guide]({{< ref bindings-overview.md >}}) on how to create and apply a binding configuration.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name:
+spec:
+ type: bindings.sftp
+ version: v1
+ metadata:
+ - name: rootPath
+ value: ""
+ - name: address
+ value: ""
+ - name: username
+ value: ""
+ - name: password
+ value: "*****************"
+ - name: privateKey
+ value: "*****************"
+ - name: privateKeyPassphrase
+ value: "*****************"
+ - name: hostPublicKey
+ value: "*****************"
+ - name: knownHostsFile
+ value: ""
+ - name: insecureIgnoreHostKey
+ value: ""
+```
+
+## Spec metadata fields
+
+| Field | Required | Binding support | Details | Example |
+|--------------------|:--------:|------------|-----|---------|
+| `rootPath` | Y | Output | Root path for default working directory | `"/path"` |
+| `address` | Y | Output | Address of SFTP server | `"localhost:22"` |
+| `username` | Y | Output | Username for authentication | `"username"` |
+| `password` | N | Output | Password for username/password authentication | `"password"` |
+| `privateKey` | N | Output | Private key for public key authentication | "\|-<br>-----BEGIN OPENSSH PRIVATE KEY-----<br>*****************<br>-----END OPENSSH PRIVATE KEY-----" |
+| `privateKeyPassphrase` | N | Output | Private key passphrase for public key authentication | `"passphrase"` |
+| `hostPublicKey` | N | Output | Host public key for host validation | `"ecdsa-sha2-nistp256 *** root@openssh-server"` |
+| `knownHostsFile` | N | Output | Known hosts file for host validation | `"/path/file"` |
+| `insecureIgnoreHostKey` | N | Output | Allows skipping host validation. Defaults to `"false"` | `"true"`, `"false"` |
+
+## Binding support
+
+This component supports **output binding** with the following operations:
+
+- `create` : [Create file](#create-file)
+- `get` : [Get file](#get-file)
+- `list` : [List files](#list-files)
+- `delete` : [Delete file](#delete-file)
+
+### Create file
+
+To perform a create file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+ "operation": "create",
+ "data": "",
+ "metadata": {
+    "fileName": ""
+ }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+ {{% codetab %}}
+ ```bash
+  curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+ {{% codetab %}}
+ ```bash
+ curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
+  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+The response body contains the following JSON:
+
+```json
+{
+ "fileName": ""
+}
+
+```
+
+### Get file
+
+To perform a get file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+ "operation": "get",
+ "metadata": {
+ "fileName": ""
+ }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+ {{% codetab %}}
+ ```bash
+  curl -d "{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+ {{% codetab %}}
+ ```bash
+ curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
+  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+The response body contains the value stored in the file.
+
+### List files
+
+To perform a list files operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+ "operation": "list"
+}
+```
+
+If you only want to list the files beneath a particular directory below the `rootPath`, specify the relative directory name as the `fileName` in the metadata.
+
+```json
+{
+ "operation": "list",
+ "metadata": {
+ "fileName": "my/cool/directory"
+ }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+ {{% codetab %}}
+ ```bash
+  curl -d "{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+ {{% codetab %}}
+ ```bash
+ curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
+  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+The response is a JSON array of file names.
+
+### Delete file
+
+To perform a delete file operation, invoke the SFTP binding with a `POST` method and the following JSON body:
+
+```json
+{
+ "operation": "delete",
+ "metadata": {
+ "fileName": "myfile"
+ }
+}
+```
+
+#### Example
+
+{{< tabs Windows Linux >}}
+
+ {{% codetab %}}
+ ```bash
+  curl -d "{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+ {{% codetab %}}
+ ```bash
+ curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
+  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+ ```
+ {{% /codetab %}}
+
+{{< /tabs >}}
+
+#### Response
+
+An HTTP 204 (No Content) response with an empty body is returned if successful.
+
+## Related links
+
+- [Basic schema for a Dapr component]({{< ref component-schema >}})
+- [Bindings building block]({{< ref bindings >}})
+- [How-To: Use bindings to interface with external resources]({{< ref howto-bindings.md >}})
+- [Bindings API reference]({{< ref bindings_api.md >}})
diff --git a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md
index a846b6a23..ea4868fe3 100644
--- a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md
+++ b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md
@@ -79,11 +79,28 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` |
| `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` |
+### Authenticate using AWS IAM
+
+Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
+The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the `rds_iam` database role.
+Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey.
+The AWS authentication token is dynamically rotated before its expiration time with AWS.
+
+| Field | Required | Details | Example |
+|--------|:--------:|---------|---------|
+| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
+| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
+| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
+| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
+| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
+| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
+
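+For example, a PostgreSQL configuration store component using AWS IAM authentication could look like the following sketch (host, user, and table values are illustrative placeholders):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: configstore
+spec:
+  type: configuration.postgresql
+  version: v1
+  metadata:
+  - name: useAWSIAM
+    value: "true"
+  - name: connectionString
+    # No password in the connection string: the AWS IAM auth token is generated and rotated automatically.
+    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
+  - name: table
+    value: "configtable"
+```
+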
### Other metadata options
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `table` | Y | Table name for configuration information, must be lowercased. | `configtable`
+| `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
| `maxConns` | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"`
| `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"`
| `queryExecMode` | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferrable to use `exec` or `simple_protocol`. | `"simple_protocol"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md
index caf9d8a44..28965cb0e 100644
--- a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md
+++ b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md
@@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisPassword | N | Output | The Redis password | `"password"` |
| redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and have created acl rule correctly. | `"username"` |
| enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
+| clientCert | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used together with `clientKey`, and `enableTLS` must be set to `true`. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"-----BEGIN CERTIFICATE-----\nMIIC..."` |
+| clientKey | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"-----BEGIN PRIVATE KEY-----\nMIIE..."` |
| failover | N | Output | Property to enabled failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"`
| sentinelMasterName | N | Output | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"`
| redisType | N | Output | The type of Redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/_index.md b/daprdocs/content/en/reference/components-reference/supported-conversation/_index.md
new file mode 100644
index 000000000..179162b3b
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/_index.md
@@ -0,0 +1,12 @@
+---
+type: docs
+title: "Conversation component specs"
+linkTitle: "Conversation"
+weight: 9000
+description: The supported conversation components that interface with Dapr
+no_list: true
+---
+
+{{< partial "components/description.html" >}}
+
+{{< partial "components/conversation.html" >}}
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/anthropic.md b/daprdocs/content/en/reference/components-reference/supported-conversation/anthropic.md
new file mode 100644
index 000000000..334b7cb99
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/anthropic.md
@@ -0,0 +1,42 @@
+---
+type: docs
+title: "Anthropic"
+linkTitle: "Anthropic"
+description: Detailed information on the Anthropic conversation component
+---
+
+## Component format
+
+A Dapr `conversation.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: anthropic
+spec:
+ type: conversation.anthropic
+ metadata:
+ - name: key
+ value: "mykey"
+ - name: model
+ value: claude-3-5-sonnet-20240620
+ - name: cacheTTL
+ value: 10m
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Spec metadata fields
+
+| Field | Required | Details | Example |
+|--------------------|:--------:|---------|---------|
+| `key` | Y | API key for Anthropic. | `"mykey"` |
+| `model` | N | The Anthropic LLM to use. Defaults to `claude-3-5-sonnet-20240620` | `claude-3-5-sonnet-20240620` |
+| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+
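+Once the component is applied, you can try it out against the alpha conversation API. The endpoint and request shape below are shown for illustration only; see the [Conversation API overview]({{< ref conversation-overview.md >}}) for the exact request format.
+
+```bash
+curl -X POST http://localhost:3500/v1.0-alpha1/conversation/anthropic/converse \
+  -H "Content-Type: application/json" \
+  -d '{ "inputs": [ { "content": "What is Dapr?" } ] }'
+```
+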
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md b/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md
new file mode 100644
index 000000000..759e37013
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/aws-bedrock.md
@@ -0,0 +1,42 @@
+---
+type: docs
+title: "AWS Bedrock"
+linkTitle: "AWS Bedrock"
+description: Detailed information on the AWS Bedrock conversation component
+---
+
+## Component format
+
+A Dapr `conversation.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: awsbedrock
+spec:
+ type: conversation.aws.bedrock
+ metadata:
+ - name: endpoint
+ value: "http://localhost:4566"
+ - name: model
+ value: amazon.titan-text-express-v1
+ - name: cacheTTL
+ value: 10m
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Spec metadata fields
+
+| Field | Required | Details | Example |
+|--------------------|:--------:|---------|---------|
+| `endpoint` | N | AWS endpoint for the component to use and connect to emulators. Not recommended for production AWS use. | `http://localhost:4566` |
+| `model` | N | The LLM to use. Defaults to Bedrock's default provider model from Amazon. | `amazon.titan-text-express-v1` |
+| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/hugging-face.md b/daprdocs/content/en/reference/components-reference/supported-conversation/hugging-face.md
new file mode 100644
index 000000000..6429c84e8
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/hugging-face.md
@@ -0,0 +1,42 @@
+---
+type: docs
+title: "Huggingface"
+linkTitle: "Huggingface"
+description: Detailed information on the Huggingface conversation component
+---
+
+## Component format
+
+A Dapr `conversation.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: huggingface
+spec:
+ type: conversation.huggingface
+ metadata:
+ - name: key
+ value: mykey
+ - name: model
+ value: meta-llama/Meta-Llama-3-8B
+ - name: cacheTTL
+ value: 10m
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Spec metadata fields
+
+| Field | Required | Details | Example |
+|--------------------|:--------:|---------|---------|
+| `key` | Y | API key for Huggingface. | `mykey` |
+| `model` | N | The Huggingface LLM to use. Defaults to `meta-llama/Meta-Llama-3-8B`. | `meta-llama/Meta-Llama-3-8B` |
+| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/mistral.md b/daprdocs/content/en/reference/components-reference/supported-conversation/mistral.md
new file mode 100644
index 000000000..57504e56b
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/mistral.md
@@ -0,0 +1,42 @@
+---
+type: docs
+title: "Mistral"
+linkTitle: "Mistral"
+description: Detailed information on the Mistral conversation component
+---
+
+## Component format
+
+A Dapr `conversation.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: mistral
+spec:
+ type: conversation.mistral
+ metadata:
+ - name: key
+ value: mykey
+ - name: model
+ value: open-mistral-7b
+ - name: cacheTTL
+ value: 10m
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Spec metadata fields
+
+| Field | Required | Details | Example |
+|--------------------|:--------:|---------|---------|
+| `key` | Y | API key for Mistral. | `mykey` |
+| `model` | N | The Mistral LLM to use. Defaults to `open-mistral-7b`. | `open-mistral-7b` |
+| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md b/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md
new file mode 100644
index 000000000..7148685b1
--- /dev/null
+++ b/daprdocs/content/en/reference/components-reference/supported-conversation/openai.md
@@ -0,0 +1,42 @@
+---
+type: docs
+title: "OpenAI"
+linkTitle: "OpenAI"
+description: Detailed information on the OpenAI conversation component
+---
+
+## Component format
+
+A Dapr `conversation.yaml` component file has the following structure:
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: openai
+spec:
+ type: conversation.openai
+ metadata:
+ - name: key
+ value: mykey
+ - name: model
+ value: gpt-4-turbo
+ - name: cacheTTL
+ value: 10m
+```
+
+{{% alert title="Warning" color="warning" %}}
+The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
+{{% /alert %}}
+
+## Spec metadata fields
+
+| Field | Required | Details | Example |
+|--------------------|:--------:|---------|---------|
+| `key` | Y | API key for OpenAI. | `mykey` |
+| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` |
+| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+
+## Related links
+
+- [Conversation API overview]({{< ref conversation-overview.md >}})
\ No newline at end of file
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
index ff00c0137..2e2962d68 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md
@@ -11,6 +11,11 @@ no_list: true
The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. [Learn how to set up different brokers for Dapr publish and subscribe.]({{< ref setup-pubsub.md >}})
+{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}}
+Each pub/sub component has its own built-in retry behaviors. Before explicitly applying a [Dapr resiliency policy]({{< ref "policies.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages.
+{{% /alert %}}
+
+
{{< partial "components/description.html" >}}
{{< partial "components/pubsub.html" >}}
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
index cafcee537..503500ca8 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md
@@ -53,6 +53,12 @@ spec:
value: 2.0.0
- name: disableTls # Optional. Disable TLS. This is not safe for production!! You should read the `Mutual TLS` section for how to use TLS.
value: "true"
+ - name: consumerFetchMin # Optional. Advanced setting. The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available.
+ value: 1
+ - name: consumerFetchDefault # Optional. Advanced setting. The default number of message bytes to fetch from the broker in each request.
+ value: 2097152
+ - name: channelBufferSize # Optional. Advanced setting. The number of events to buffer in internal and external channels.
+ value: 512
- name: schemaRegistryURL # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry URL.
value: http://localhost:8081
- name: schemaRegistryAPIKey # Optional. When using Schema Registry Avro serialization/deserialization. The Schema Registry API Key.
@@ -63,6 +69,8 @@ spec:
value: true
- name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available.
value: 5m
+  - name: escapeHeaders # Optional. Enables URL escaping of the message header values received by the consumer. Default is false.
+ value: false
```
@@ -96,12 +104,12 @@ spec:
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` |
-| awsRegion | N | The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
-| awsAccessKey | N | AWS access key associated with an IAM account. | `"accessKey"`
-| awsSecretKey | N | The secret key associated with the access key. | `"secretKey"`
-| awsSessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
-| awsIamRoleArn | N | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
-| awsStsSessionName | N | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"`
+| awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
+| awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account. | `"accessKey"`
+| awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key. | `"secretKey"`
+| awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
+| awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
+| awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionName' instead. Represents the session name for assuming a role. | `"DaprDefaultSession"`
| schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` |
| schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` |
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` |
@@ -109,9 +117,12 @@ spec:
| schemaLatestVersionCacheTTL | N | When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. Default is 5 min | `5m` |
| clientConnectionTopicMetadataRefreshInterval | N | The interval for the client connection's topic metadata to be refreshed with the broker as a Go duration. Defaults to `9m`. | `"4m"` |
| clientConnectionKeepAliveInterval | N | The maximum time for the client connection to be kept alive with the broker, as a Go duration, before closing the connection. A zero value (default) means keeping alive indefinitely. | `"4m"` |
+| consumerFetchMin | N | The minimum number of message bytes to fetch in a request - the broker will wait until at least this many are available. The default is `1`, as `0` causes the consumer to spin when no messages are available. Equivalent to the JVM's `fetch.min.bytes`. | `"2"` |
| consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` |
+| channelBufferSize | N | The number of events to buffer in internal and external channels. This permits the producer and consumer to continue processing some messages in the background while user code is working, greatly improving throughput. Defaults to `256`. | `"512"` |
| heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` |
| sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` |
+| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` |
The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the tls information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component.
@@ -321,7 +332,7 @@ spec:
Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses AWS SDK to generate auth tokens to authenticate.
{{% alert title="Note" color="primary" %}}
-The only required metadata field is `awsRegion`. If no `awsAccessKey` and `awsSecretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
+The only required metadata field is `region`. If no `accessKey` and `secretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
{{% /alert %}}
```yaml
@@ -341,18 +352,18 @@ spec:
value: "my-dapr-app-id"
- name: authType # Required.
value: "awsiam"
- - name: awsRegion # Required.
+ - name: region # Required.
value: "us-west-1"
- - name: awsAccessKey # Optional.
+ - name: accessKey # Optional.
value:
- - name: awsSecretKey # Optional.
+ - name: secretKey # Optional.
value:
- - name: awsSessionToken # Optional.
+ - name: sessionToken # Optional.
value:
- - name: awsIamRoleArn # Optional.
+ - name: assumeRoleArn # Optional.
value: "arn:aws:iam::123456789:role/mskRole"
- - name: awsStsSessionName # Optional.
- value: "MSKSASLDefaultSession"
+ - name: sessionName # Optional.
+ value: "DaprDefaultSession"
```
### Communication using TLS
@@ -457,7 +468,7 @@ Apache Kafka supports the following bulk metadata options:
When invoking the Kafka pub/sub, its possible to provide an optional partition key by using the `metadata` query param in the request url.
-The param name is `partitionKey`.
+The param name can be either `partitionKey` or `__key`.
Example:
@@ -473,7 +484,7 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.partiti
### Message headers
-All other metadata key/value pairs (that are not `partitionKey`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.
+All other metadata key/value pairs (that are not `partitionKey` or `__key`) are set as headers in the Kafka message. Here is an example setting a `correlationId` for the message.
```shell
curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correlationId=myCorrelationID&metadata.partitionKey=key1 \
@@ -484,6 +495,85 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla
}
}'
```
+### Kafka pub/sub special message headers received on the consumer side
+
+When consuming messages, special message metadata is automatically passed as headers. These are:
+- `__key`: the message key if available
+- `__topic`: the topic for the message
+- `__partition`: the partition number for the message
+- `__offset`: the offset of the message in the partition
+- `__timestamp`: the timestamp for the message
+
+You can access them within the consumer endpoint as follows:
+{{< tabs "Python (FastAPI)" >}}
+
+{{% codetab %}}
+
+```python
+from typing import Annotated
+
+from fastapi import FastAPI, APIRouter, Body, Header, Response, status
+
+app = FastAPI()
+
+router = APIRouter()
+
+
+@router.get('/dapr/subscribe')
+def subscribe():
+ subscriptions = [{'pubsubname': 'pubsub',
+ 'topic': 'my-topic',
+ 'route': 'my_topic_subscriber',
+ }]
+ return subscriptions
+
+@router.post('/my_topic_subscriber')
+def my_topic_subscriber(
+ key: Annotated[str, Header(alias="__key")],
+ offset: Annotated[int, Header(alias="__offset")],
+ event_data=Body()):
+ print(f"key={key} - offset={offset} - data={event_data}", flush=True)
+ return Response(status_code=status.HTTP_200_OK)
+
+app.include_router(router)
+
+```
+
+{{% /codetab %}}
+{{< /tabs >}}
+
+## Receiving message headers with special characters
+
+The consumer application may need to receive message headers that include special characters, which can cause HTTP protocol validation errors.
+HTTP header values must follow specifications, so some characters are not allowed. [Learn more about the protocols](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2).
+In this case, you can enable the `escapeHeaders` configuration setting, which uses URL escaping to encode header values on the consumer side.
+
+{{% alert title="Note" color="primary" %}}
+When using this setting, the received message headers are URL escaped, and you need to URL-unescape them to get the original values.
+{{% /alert %}}
+
+Set `escapeHeaders` to `true` to URL escape.
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+ name: kafka-pubsub-escape-headers
+spec:
+ type: pubsub.kafka
+ version: v1
+ metadata:
+ - name: brokers # Required. Kafka broker connection setting
+ value: "dapr-kafka.myapp.svc.cluster.local:9092"
+ - name: consumerGroup # Optional. Used for input bindings.
+ value: "group1"
+ - name: clientID # Optional. Used as client tracing ID by Kafka brokers.
+ value: "my-dapr-app-id"
+ - name: authType # Required.
+ value: "none"
+ - name: escapeHeaders
+ value: "true"
+```
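+
+Your consumer can then decode an escaped header value with standard URL-unescaping. For example, in Python (the header value shown is illustrative):
+
+```python
+from urllib.parse import unquote
+
+# A header value as received when escapeHeaders is enabled
+escaped_value = "cust%7C1234%20east"
+
+# URL-unescape it to recover the original value: "cust|1234 east"
+original_value = unquote(escaped_value)
+print(original_value)
+```
+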
## Avro Schema Registry serialization/deserialization
You can configure pub/sub to publish or consume data encoded using [Avro binary serialization](https://avro.apache.org/docs/), leveraging an [Apache Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/) (for example, [Confluent Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/), [Apicurio](https://www.apicur.io/registry/)).
@@ -597,6 +687,7 @@ To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](ht
{{< /tabs >}}
+
## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-publish-subscribe.md##step-1-setup-the-pubsub-component" >}}) for instructions on configuring pub/sub components
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md
index 360bd6ef3..86865de5b 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-aws-snssqs.md
@@ -68,7 +68,8 @@ spec:
# value: 5
# - name: concurrencyMode # Optional
# value: "single"
-
+ # - name: concurrencyLimit # Optional
+ # value: "0"
```
@@ -98,6 +99,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec
| disableDeleteOnRetryLimit | N | When set to true, after retrying and failing of `messageRetryLimit` times processing a message, reset the message visibility timeout so that other consumers can try processing, instead of deleting the message from SQS (the default behvior). Default: `"false"` | `"true"`, `"false"`
| assetsManagementTimeoutSeconds | N | Amount of time in seconds, for an AWS asset management operation, before it times out and cancelled. Asset management operations are any operations performed on STS, SNS and SQS, except message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: `5` | `0.5`, `10`
| concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially (“single” message at a time), or concurrently (in “parallel”). Default: `"parallel"` | `"single"`, `"parallel"`
+| concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when concurrencyMode is set to `"single"`. To avoid limiting the number of concurrent workers, set this to `0`. Default: `0` | `100`
### Additional info
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md
index 713bdb1cb..73db174a0 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-eventhubs.md
@@ -33,6 +33,8 @@ spec:
value: "channel1"
- name: enableEntityManagement
value: "false"
+ - name: enableInOrderMessageDelivery
+ value: "false"
# The following four properties are needed only if enableEntityManagement is set to true
- name: resourceGroupName
value: "test-rg"
@@ -65,11 +67,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `connectionString`    | Y*  | Connection string for the Event Hub or the Event Hub namespace.<br>* Mutually exclusive with `eventHubNamespace` field.<br>* Required when not using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key};EntityPath={EventHub}"` or `"Endpoint=sb://{EventHubNamespace}.servicebus.windows.net/;SharedAccessKeyName={PolicyName};SharedAccessKey={Key}"`
| `eventHubNamespace` | Y* | The Event Hub Namespace name.<br>* Mutually exclusive with `connectionString` field.<br>* Required when using [Microsoft Entra ID Authentication]({{< ref "authenticating-azure.md" >}}) | `"namespace"`
| `consumerID` | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
+| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
+| `enableInOrderMessageDelivery` | N | Boolean value to allow messages to be delivered in the order in which they were posted. This assumes `partitionKey` is set when publishing or posting to ensure ordering across partitions. Default: `false` | `"true"`, `"false"`
| `storageAccountName` | Y | Storage account name to use for the checkpoint store. |`"myeventhubstorage"`
| `storageAccountKey` | Y* | Storage account key for the checkpoint store account.<br>* When using Microsoft Entra ID, it's possible to omit this if the service principal has access to the storage account too. | `"112233445566778899"`
| `storageConnectionString` | Y* | Connection string for the checkpoint store, alternative to specifying `storageAccountKey` | `"DefaultEndpointsProtocol=https;AccountName=myeventhubstorage;AccountKey="`
| `storageContainerName` | Y | Storage container name for the storage account name. | `"myeventhubstoragecontainer"`
-| `enableEntityManagement` | N | Boolean value to allow management of the EventHub namespace and storage account. Default: `false` | `"true", "false"`
| `resourceGroupName` | N | Name of the resource group the Event Hub namespace is part of. Required when entity management is enabled | `"test-rg"`
| `subscriptionID` | N | Azure subscription ID value. Required when entity management is enabled | `"azure subscription id"`
| `partitionCount` | N | Number of partitions for the new Event Hub namespace. Used only when entity management is enabled. Default: `"1"` | `"2"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md
index 831f6aa72..cc357b5bc 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md
@@ -83,8 +83,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10`
| `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"`
| `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. | `10`
-| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600`
-| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Must be 300s or greater. Default set by server. | `10`
+| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600`
+| `maxDeliveryCount` | N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | `10`
| `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30`
| `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5`
| `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600`
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
index 5686211ff..3e94c2fc7 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-pulsar.md
@@ -38,6 +38,8 @@ spec:
value: "true"
- name: disableBatching
value: "false"
+ - name: receiverQueueSize
+ value: "1000"
- name: .jsonschema # sets a json schema validation for the configured topic
value: |
{
@@ -78,6 +80,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec
| namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: `"default"` | `"default"`
| persistent | N | Pulsar supports two kinds of topics: [persistent](https://pulsar.apache.org/docs/en/concepts-architecture-overview#persistent-storage) and [non-persistent](https://pulsar.apache.org/docs/en/concepts-messaging/#non-persistent-topics). With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
| disableBatching | N | disable batching.When batching enabled default batch delay is set to 10 ms and default batch size is 1000 messages,Setting `disableBatching: true` will make the producer to send messages individually. Default: `"false"` | `"true"`, `"false"`|
+| receiverQueueSize | N | Sets the size of the consumer receiver queue. Controls how many messages can be accumulated by the consumer before Dapr explicitly calls it to read messages. Default: `"1000"` | `"1000"` |
| batchingMaxPublishDelay | N | batchingMaxPublishDelay set the time period within which the messages sent will be batched,if batch messages are enabled. If set to a non zero value, messages will be queued until this time interval or batchingMaxMessages (see below) or batchingMaxSize (see below). There are two valid formats, one is the fraction with a unit suffix format, and the other is the pure digital format that is processed as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default: `"10ms"` | `"10ms"`, `"10"`|
| batchingMaxMessages | N | batchingMaxMessages set the maximum number of messages permitted in a batch.If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxSize (see below) has been reached or the batch interval has elapsed. Default: `"1000"` | `"1000"`|
| batchingMaxSize | N | batchingMaxSize sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages will be queued until this threshold is reached or batchingMaxMessages (see above) has been reached or the batch interval has elapsed. Default: `"128KB"` | `"131072"`|
diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md
index 1da2cb8b3..387920e7a 100644
--- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md
+++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md
@@ -45,7 +45,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
| consumerID | N | The consumer group ID. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}})
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <br> - The `redisHost` name must be specified in the form of `"server:port"` <br> - TLS must be enabled <br> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |
-| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
+| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` |
+| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used together with `clientKey`, and `enableTLS` must be set to `true`. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"-----BEGIN CERTIFICATE-----\nMIIC..."` |
+| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"-----BEGIN PRIVATE KEY-----\nMIIE..."` |
| redeliverInterval | N | The interval between checking for pending messages to redeliver. Can use either be Go duration string (for example "ms", "s", "m") or milliseconds number. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"`
| processingTimeout | N | The amount time that a message must be pending before attempting to redeliver it. Can use either be Go duration string ( for example "ms", "s", "m") or milliseconds number. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"`
| queueDepth | N | The size of the message queue for processing. Defaults to `"100"`. | `"1000"`
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md
index 7026dcc92..53e4c0e75 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md
@@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` |
| `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` |
+### Authenticate using AWS IAM
+
+Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
+The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the `rds_iam` database role.
+Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey.
+The AWS authentication token is dynamically rotated before its expiration time with AWS.
+
+| Field | Required | Details | Example |
+|--------|:--------:|---------|---------|
+| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
+| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
+| `awsRegion` | N | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
+| `awsAccessKey` | N | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
+| `awsSecretKey` | N | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
+| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
+
### Other metadata options
| Field | Required | Details | Example |
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md
index bcda2558b..d4e21f17b 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md
@@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post
| `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` |
| `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` |
+### Authenticate using AWS IAM
+
+Authenticating with AWS IAM is supported with all versions of PostgreSQL type components.
+The user specified in the connection string must already exist in the database and must be an AWS IAM-enabled user granted the `rds_iam` database role.
+Authentication is based on the AWS authentication configuration file or the provided AccessKey/SecretKey.
+The AWS authentication token is dynamically rotated before its expiration time with AWS.
+
+| Field | Required | Details | Example |
+|--------|:--------:|---------|---------|
+| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
+| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
+| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
+| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
+| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
+| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
+
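+For example, a PostgreSQL v2 state store component using AWS IAM authentication could look like the following sketch (host and user values are illustrative placeholders):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: statestore
+spec:
+  type: state.postgresql
+  version: v2
+  metadata:
+  - name: useAWSIAM
+    value: "true"
+  - name: connectionString
+    # No password in the connection string: the AWS IAM auth token is generated and rotated automatically.
+    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
+```
+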
### Other metadata options
| Field | Required | Details | Example |
diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
index ed6d4118e..9b672c6a6 100644
--- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
+++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md
@@ -32,6 +32,10 @@ spec:
value: # Optional. Allowed: true, false.
- name: enableTLS
value: # Optional. Allowed: true, false.
+ - name: clientCert
+ value: # Optional
+ - name: clientKey
+ value: # Optional
- name: maxRetries
value: # Optional
- name: maxRetryBackoff
@@ -102,6 +106,8 @@ If you wish to use Redis as an actor store, append the following to the yaml.
| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <br> - The `redisHost` name must be specified in the form of `"server:port"` <br> - TLS must be enabled <br> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` |
| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
+| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used together with `clientKey`, and `enableTLS` must be set to `true`. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"-----BEGIN CERTIFICATE-----\nMIIC..."` |
+| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"-----BEGIN PRIVATE KEY-----\nMIIE..."` |
| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
| failover | N | Property to enabled failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
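+
+As an illustrative sketch, `clientCert` and `clientKey` can be referenced from a secret store instead of being inlined in the manifest. The component name `statestore`, the Redis host, and the secret name `redis-tls` below are assumptions for this example, and it relies on the built-in Kubernetes secret store (the default when running on Kubernetes):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: statestore
+spec:
+  type: state.redis
+  version: v1
+  metadata:
+  - name: redisHost
+    value: "my-redis:6380"
+  - name: enableTLS
+    value: "true"
+  # Pull the client certificate and key from the secret instead of embedding PEM content
+  - name: clientCert
+    secretKeyRef:
+      name: redis-tls
+      key: tls.crt
+  - name: clientKey
+    secretKeyRef:
+      name: redis-tls
+      key: tls.key
+```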
diff --git a/daprdocs/content/en/reference/resource-specs/component-schema.md b/daprdocs/content/en/reference/resource-specs/component-schema.md
index 349ff4923..875744c28 100644
--- a/daprdocs/content/en/reference/resource-specs/component-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/component-schema.md
@@ -8,27 +8,33 @@ description: "The basic spec for a Dapr component"
Dapr defines and registers components using a [resource specifications](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes.
+Typically, components are restricted to a particular [namespace]({{< ref isolation-concept.md >}}) and have their access restricted through scopes to a particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace from context when applying to Kubernetes.
+
+{{% alert title="Note" color="primary" %}}
+The exception to this rule is in self-hosted mode, where daprd ingests component resources even when the namespace field is omitted. However, the security profile is moot, as daprd has access to the manifest anyway, unlike in Kubernetes.
+{{% /alert %}}
+
## Format
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
auth:
- secretstore: [SECRET-STORE-NAME]
+ secretstore:
metadata:
- name: [COMPONENT-NAME]
- namespace: [COMPONENT-NAMESPACE]
+ name:
+ namespace:
spec:
- type: [COMPONENT-TYPE]
+ type:
version: v1
- initTimeout: [TIMEOUT-DURATION]
- ignoreErrors: [BOOLEAN]
+ initTimeout:
+ ignoreErrors:
metadata:
- - name: [METADATA-NAME]
- value: [METADATA-VALUE]
+ - name:
+ value:
scopes:
- - [APPID]
- - [APPID]
+ -
+ -
```
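+
+As an illustrative sketch only, a filled-in manifest for a hypothetical Redis state store scoped to a single application could look like the following (all names and values are examples):
+
+```yaml
+apiVersion: dapr.io/v1alpha1
+kind: Component
+metadata:
+  name: statestore
+  namespace: production
+spec:
+  type: state.redis
+  version: v1
+  initTimeout: 1m
+  ignoreErrors: false
+  metadata:
+  - name: redisHost
+    value: localhost:6379
+scopes:
+  - checkout
+```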
## Spec fields
diff --git a/daprdocs/content/en/reference/resource-specs/configuration-schema.md b/daprdocs/content/en/reference/resource-specs/configuration-schema.md
index b52228c16..e5caac792 100644
--- a/daprdocs/content/en/reference/resource-specs/configuration-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/configuration-schema.md
@@ -36,6 +36,7 @@ spec:
labels:
- name:
regex: {}
+ recordErrorCodes:
latencyDistributionBuckets:
-
-
diff --git a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md
index a85a25315..5e2b8f45d 100644
--- a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md
@@ -10,6 +10,10 @@ aliases:
The `HTTPEndpoint` is a Dapr resource that is used to enable the invocation of non-Dapr endpoints from a Dapr application.
+{{% alert title="Note" color="primary" %}}
+Any `HTTPEndpoint` resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}) and have its access restricted through scopes to a particular set of applications.
+{{% /alert %}}
+
## Format
```yaml
diff --git a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md
index 32888adc7..d307b70b4 100644
--- a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md
@@ -8,6 +8,10 @@ description: "The basic spec for a Dapr resiliency resource"
The `Resiliency` Dapr resource allows you to define and apply fault tolerance resiliency policies. Resiliency specs are applied when the Dapr sidecar starts.
+{{% alert title="Note" color="primary" %}}
+Any resiliency resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}) and have its access restricted through scopes to a particular set of applications.
+{{% /alert %}}
+
## Format
```yml
@@ -28,6 +32,9 @@ spec:
duration:
maxInterval:
maxRetries:
+ matching:
+ httpStatusCodes:
+ gRPCStatusCodes:
circuitBreakers:
circuitBreakerName: # Replace with any unique name
maxRequests:
diff --git a/daprdocs/content/en/reference/resource-specs/subscription-schema.md b/daprdocs/content/en/reference/resource-specs/subscription-schema.md
index bd5fc8263..c047fd40f 100644
--- a/daprdocs/content/en/reference/resource-specs/subscription-schema.md
+++ b/daprdocs/content/en/reference/resource-specs/subscription-schema.md
@@ -6,7 +6,13 @@ weight: 2000
description: "The basic spec for a Dapr subscription"
---
-The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. This guide demonstrates two subscription API versions:
+The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file.
+
+{{% alert title="Note" color="primary" %}}
+Any subscription can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}) and have its access restricted through scopes to a particular set of applications.
+{{% /alert %}}
+
+This guide demonstrates two subscription API versions:
- `v2alpha` (default spec)
- `v1alpha1` (deprecated)
@@ -23,15 +29,15 @@ metadata:
spec:
topic: # Required
routes: # Required
- - rules:
- - match:
- path:
+ rules:
+ - match:
+ path:
pubsubname: # Required
deadLetterTopic: # Optional
bulkSubscribe: # Optional
- - enabled:
- - maxMessagesCount:
- - maxAwaitDurationMs:
+ enabled:
+ maxMessagesCount:
+ maxAwaitDurationMs:
scopes:
-
```
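+
+For example, a minimal sketch of a declarative subscription using this spec; the names `order-sub`, `orders`, `pubsub`, and the `checkout` app ID are placeholders for illustration:
+
+```yaml
+apiVersion: dapr.io/v2alpha1
+kind: Subscription
+metadata:
+  name: order-sub
+spec:
+  topic: orders
+  routes:
+    rules:
+    - match: event.type == "order.created"
+      path: /orders
+  pubsubname: pubsub
+  deadLetterTopic: poisonMessages
+scopes:
+  - checkout
+```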
diff --git a/daprdocs/data/components/bindings/generic.yaml b/daprdocs/data/components/bindings/generic.yaml
index 4f63295bd..250eb4d88 100644
--- a/daprdocs/data/components/bindings/generic.yaml
+++ b/daprdocs/data/components/bindings/generic.yaml
@@ -134,6 +134,14 @@
features:
input: true
output: false
+- component: SFTP
+ link: sftp
+ state: Alpha
+ version: v1
+ since: "1.15"
+ features:
+ input: false
+ output: true
- component: SMTP
link: smtp
state: Alpha
diff --git a/daprdocs/data/components/conversation/aws.yaml b/daprdocs/data/components/conversation/aws.yaml
new file mode 100644
index 000000000..6f5b33d20
--- /dev/null
+++ b/daprdocs/data/components/conversation/aws.yaml
@@ -0,0 +1,5 @@
+- component: AWS Bedrock
+ link: aws-bedrock
+ state: Alpha
+ version: v1
+ since: "1.15"
\ No newline at end of file
diff --git a/daprdocs/data/components/conversation/generic.yaml b/daprdocs/data/components/conversation/generic.yaml
new file mode 100644
index 000000000..26cf8431c
--- /dev/null
+++ b/daprdocs/data/components/conversation/generic.yaml
@@ -0,0 +1,20 @@
+- component: Anthropic
+ link: anthropic
+ state: Alpha
+ version: v1
+ since: "1.15"
+- component: Huggingface
+ link: hugging-face
+ state: Alpha
+ version: v1
+ since: "1.15"
+- component: Mistral
+ link: mistral
+ state: Alpha
+ version: v1
+ since: "1.15"
+- component: OpenAI
+ link: openai
+ state: Alpha
+ version: v1
+ since: "1.15"
diff --git a/daprdocs/data/components/secret_stores/aws.yaml b/daprdocs/data/components/secret_stores/aws.yaml
index f1e6d77ec..522b7f64e 100644
--- a/daprdocs/data/components/secret_stores/aws.yaml
+++ b/daprdocs/data/components/secret_stores/aws.yaml
@@ -1,8 +1,8 @@
- component: AWS Secrets Manager
link: aws-secret-manager
- state: Alpha
+ state: Beta
version: v1
- since: "1.0"
+ since: "1.15"
- component: AWS SSM Parameter Store
link: aws-parameter-store
state: Alpha
diff --git a/daprdocs/layouts/partials/components/conversation.html b/daprdocs/layouts/partials/components/conversation.html
new file mode 100644
index 000000000..266217073
--- /dev/null
+++ b/daprdocs/layouts/partials/components/conversation.html
@@ -0,0 +1,28 @@
+{{- $groups := dict
+ "Generic" $.Site.Data.components.conversation.generic
+ "Amazon Web Services (AWS)" $.Site.Data.components.conversation.aws
+
+ }}
+
+ {{ range $group, $components := $groups }}
+ <h3>{{ $group }}</h3>
+ <table width="100%">
+   <tr>
+     <th>Component</th>
+     <th>Status</th>
+     <th>Component version</th>
+     <th>Since runtime version</th>
+   </tr>
+ {{ range sort $components "component" }}
+   <tr>
+     <td><a href="/reference/components-reference/supported-conversation/{{ .link }}/">{{ .component }}</a>
+     </td>
+     <td>{{ .state }}</td>
+     <td>{{ .version }}</td>
+     <td>{{ .since }}</td>
+   </tr>
+ {{ end }}
+ </table>
+ {{ end }}
+
+ {{ partial "components/componenttoc.html" . }}
\ No newline at end of file
diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html
index c64a87827..79be56261 100644
--- a/daprdocs/layouts/shortcodes/dapr-latest-version.html
+++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html
@@ -1 +1 @@
-{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.0{{ else if .Get "cli" }}1.14.0{{ else }}1.14.0{{ end -}}
+{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.4{{ else if .Get "cli" }}1.14.1{{ else }}1.14.1{{ end -}}
diff --git a/daprdocs/static/images/actors-quickstart/actors-quickstart.png b/daprdocs/static/images/actors-quickstart/actors-quickstart.png
index 1ed195714..3769e1171 100644
Binary files a/daprdocs/static/images/actors-quickstart/actors-quickstart.png and b/daprdocs/static/images/actors-quickstart/actors-quickstart.png differ
diff --git a/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png b/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png
index c10bbd38e..afc3e21cc 100644
Binary files a/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png and b/daprdocs/static/images/bindings-quickstart/bindings-quickstart.png differ
diff --git a/daprdocs/static/images/building_blocks.png b/daprdocs/static/images/building_blocks.png
index ec8f7bbff..6e5c51b69 100644
Binary files a/daprdocs/static/images/building_blocks.png and b/daprdocs/static/images/building_blocks.png differ
diff --git a/daprdocs/static/images/buildingblocks-overview.png b/daprdocs/static/images/buildingblocks-overview.png
index 9b05f41be..9570df612 100644
Binary files a/daprdocs/static/images/buildingblocks-overview.png and b/daprdocs/static/images/buildingblocks-overview.png differ
diff --git a/daprdocs/static/images/concepts-components.png b/daprdocs/static/images/concepts-components.png
index fd80064a6..c22c50f23 100644
Binary files a/daprdocs/static/images/concepts-components.png and b/daprdocs/static/images/concepts-components.png differ
diff --git a/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png b/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png
index 29dc6f44c..94310e1fe 100644
Binary files a/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png and b/daprdocs/static/images/configuration-quickstart/configuration-quickstart-flow.png differ
diff --git a/daprdocs/static/images/crypto-quickstart.png b/daprdocs/static/images/crypto-quickstart.png
index e6f7fe70f..0d315a7d2 100644
Binary files a/daprdocs/static/images/crypto-quickstart.png and b/daprdocs/static/images/crypto-quickstart.png differ
diff --git a/daprdocs/static/images/observability-sidecar.png b/daprdocs/static/images/observability-sidecar.png
index 3df2734b1..585aae22b 100644
Binary files a/daprdocs/static/images/observability-sidecar.png and b/daprdocs/static/images/observability-sidecar.png differ
diff --git a/daprdocs/static/images/observability-tracing.png b/daprdocs/static/images/observability-tracing.png
index b195845b3..1497dfe71 100644
Binary files a/daprdocs/static/images/observability-tracing.png and b/daprdocs/static/images/observability-tracing.png differ
diff --git a/daprdocs/static/images/overview-kubernetes.png b/daprdocs/static/images/overview-kubernetes.png
index ba307aa70..f2aedc745 100644
Binary files a/daprdocs/static/images/overview-kubernetes.png and b/daprdocs/static/images/overview-kubernetes.png differ
diff --git a/daprdocs/static/images/overview-sidecar-apis.png b/daprdocs/static/images/overview-sidecar-apis.png
index 417d381b2..2dc2f6830 100644
Binary files a/daprdocs/static/images/overview-sidecar-apis.png and b/daprdocs/static/images/overview-sidecar-apis.png differ
diff --git a/daprdocs/static/images/overview-sidecar-model.png b/daprdocs/static/images/overview-sidecar-model.png
index 40d7425d9..46185c356 100644
Binary files a/daprdocs/static/images/overview-sidecar-model.png and b/daprdocs/static/images/overview-sidecar-model.png differ
diff --git a/daprdocs/static/images/overview-standalone.png b/daprdocs/static/images/overview-standalone.png
index 179ee7460..e3070c6a7 100644
Binary files a/daprdocs/static/images/overview-standalone.png and b/daprdocs/static/images/overview-standalone.png differ
diff --git a/daprdocs/static/images/overview-vms-hosting.png b/daprdocs/static/images/overview-vms-hosting.png
index 05a6a0fee..cc05fe9e8 100644
Binary files a/daprdocs/static/images/overview-vms-hosting.png and b/daprdocs/static/images/overview-vms-hosting.png differ
diff --git a/daprdocs/static/images/overview.png b/daprdocs/static/images/overview.png
index 58124f0a1..f62c82085 100644
Binary files a/daprdocs/static/images/overview.png and b/daprdocs/static/images/overview.png differ
diff --git a/daprdocs/static/images/prometheus-service-discovery.png b/daprdocs/static/images/prometheus-service-discovery.png
new file mode 100644
index 000000000..34acfcadb
Binary files /dev/null and b/daprdocs/static/images/prometheus-service-discovery.png differ
diff --git a/daprdocs/static/images/prometheus-web-ui.png b/daprdocs/static/images/prometheus-web-ui.png
new file mode 100644
index 000000000..f6b82e903
Binary files /dev/null and b/daprdocs/static/images/prometheus-web-ui.png differ
diff --git a/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png b/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png
index 21256c817..dca9a907a 100644
Binary files a/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png and b/daprdocs/static/images/pubsub-quickstart/pubsub-diagram.png differ
diff --git a/daprdocs/static/images/scheduler/scheduler-architecture.png b/daprdocs/static/images/scheduler/scheduler-architecture.png
index 5cf309bf4..1b87d1ffd 100644
Binary files a/daprdocs/static/images/scheduler/scheduler-architecture.png and b/daprdocs/static/images/scheduler/scheduler-architecture.png differ
diff --git a/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png b/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png
index 643af8ea2..f24c09b17 100644
Binary files a/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png and b/daprdocs/static/images/secretsmanagement-quickstart/secrets-mgmt-quickstart.png differ
diff --git a/daprdocs/static/images/security-dapr-API-scoping.png b/daprdocs/static/images/security-dapr-API-scoping.png
index 03528e38e..d74364cf1 100644
Binary files a/daprdocs/static/images/security-dapr-API-scoping.png and b/daprdocs/static/images/security-dapr-API-scoping.png differ
diff --git a/daprdocs/static/images/security-end-to-end-communication.png b/daprdocs/static/images/security-end-to-end-communication.png
index 6012f22c2..41e8f6469 100644
Binary files a/daprdocs/static/images/security-end-to-end-communication.png and b/daprdocs/static/images/security-end-to-end-communication.png differ
diff --git a/daprdocs/static/images/security-mTLS-dapr-system-services.png b/daprdocs/static/images/security-mTLS-dapr-system-services.png
index ae898d8e9..3762fff6f 100644
Binary files a/daprdocs/static/images/security-mTLS-dapr-system-services.png and b/daprdocs/static/images/security-mTLS-dapr-system-services.png differ
diff --git a/daprdocs/static/images/security-mTLS-sentry-kubernetes.png b/daprdocs/static/images/security-mTLS-sentry-kubernetes.png
index a7437c1d3..1a3163e9a 100644
Binary files a/daprdocs/static/images/security-mTLS-sentry-kubernetes.png and b/daprdocs/static/images/security-mTLS-sentry-kubernetes.png differ
diff --git a/daprdocs/static/images/security-mTLS-sentry-selfhosted.png b/daprdocs/static/images/security-mTLS-sentry-selfhosted.png
index 366ce86b9..55bf0c315 100644
Binary files a/daprdocs/static/images/security-mTLS-sentry-selfhosted.png and b/daprdocs/static/images/security-mTLS-sentry-selfhosted.png differ
diff --git a/daprdocs/static/images/security-overview-capabilities-example.png b/daprdocs/static/images/security-overview-capabilities-example.png
index 0dbde1fc3..386a1590b 100644
Binary files a/daprdocs/static/images/security-overview-capabilities-example.png and b/daprdocs/static/images/security-overview-capabilities-example.png differ
diff --git a/daprdocs/static/images/service-invocation-overview.png b/daprdocs/static/images/service-invocation-overview.png
index c5b2fe554..eadef1e61 100644
Binary files a/daprdocs/static/images/service-invocation-overview.png and b/daprdocs/static/images/service-invocation-overview.png differ
diff --git a/daprdocs/static/images/state-management-quickstart.png b/daprdocs/static/images/state-management-quickstart.png
index e6606ae97..8c07c8a52 100644
Binary files a/daprdocs/static/images/state-management-quickstart.png and b/daprdocs/static/images/state-management-quickstart.png differ
diff --git a/daprdocs/static/images/workflow-quickstart-overview.png b/daprdocs/static/images/workflow-quickstart-overview.png
index d616f2106..7a8ea3e22 100644
Binary files a/daprdocs/static/images/workflow-quickstart-overview.png and b/daprdocs/static/images/workflow-quickstart-overview.png differ
diff --git a/daprdocs/static/presentations/Dapr-Diagrams-template.pptx.zip b/daprdocs/static/presentations/Dapr-Diagrams-template.pptx.zip
new file mode 100644
index 000000000..3a871010f
Binary files /dev/null and b/daprdocs/static/presentations/Dapr-Diagrams-template.pptx.zip differ
diff --git a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip b/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip
deleted file mode 100644
index 206d01600..000000000
Binary files a/daprdocs/static/presentations/Dapr-Diagrams.pptx.zip and /dev/null differ
diff --git a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip
index 1ccec7c23..985bf939f 100644
Binary files a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip and b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip differ
diff --git a/sdkdocs/dotnet b/sdkdocs/dotnet
index b8e276728..01b483347 160000
--- a/sdkdocs/dotnet
+++ b/sdkdocs/dotnet
@@ -1 +1 @@
-Subproject commit b8e276728935c66b0a335b5aa2ca4102c560dd3d
+Subproject commit 01b4833474f869865cba916196376fb49a97911c
diff --git a/sdkdocs/go b/sdkdocs/go
index 7c03c7ce5..2ab3420ad 160000
--- a/sdkdocs/go
+++ b/sdkdocs/go
@@ -1 +1 @@
-Subproject commit 7c03c7ce58d100a559ac1881bc0c80d6dedc5ab9
+Subproject commit 2ab3420adc75049bfcf27cb2eeebdc08f2156474
diff --git a/sdkdocs/java b/sdkdocs/java
index a98327e7d..380cda68f 160000
--- a/sdkdocs/java
+++ b/sdkdocs/java
@@ -1 +1 @@
-Subproject commit a98327e7d9a81611b0d7e91e59ea23ad48271948
+Subproject commit 380cda68f82456ecc52cd876e9567a7aaaf4e05f
diff --git a/sdkdocs/js b/sdkdocs/js
index 7350742b6..9adc54ded 160000
--- a/sdkdocs/js
+++ b/sdkdocs/js
@@ -1 +1 @@
-Subproject commit 7350742b6869cc166633d1f4d17d76fbdbb12921
+Subproject commit 9adc54dedd87846d513943a5ed9ebe0c1627a192
diff --git a/sdkdocs/python b/sdkdocs/python
index 64a4f2f66..6e90e84b1 160000
--- a/sdkdocs/python
+++ b/sdkdocs/python
@@ -1 +1 @@
-Subproject commit 64a4f2f6658e9023e8ea080eefdb019645cae802
+Subproject commit 6e90e84b166ac7ea603b78894e9e1b92dc456014
diff --git a/sdkdocs/rust b/sdkdocs/rust
index 4abf5aa65..4e2d31603 160000
--- a/sdkdocs/rust
+++ b/sdkdocs/rust
@@ -1 +1 @@
-Subproject commit 4abf5aa6504f7c0b0018d20f8dc038a486a67e3a
+Subproject commit 4e2d3160324f9c5968415acf206c039837df9a63