mirror of https://github.com/dapr/docs.git
Merge branch 'v1.16' into fraser/add-reo-dev
This commit is contained in:
commit df78936b26
@@ -39,7 +39,7 @@ jobs:
      - name: Build Hugo Website
        run: |
          git config --global --add safe.directory /github/workspace
-          hugo
+          hugo --minify
      - name: Deploy Website
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v1
@@ -111,7 +111,7 @@ Dapr apps can subscribe to raw messages from pub/sub topics, even if they weren

### Programmatically subscribe to raw events

-When subscribing programmatically, add the additional metadata entry for `rawPayload` to allow the subscriber to receive a message that is not wrapped by a CloudEvent. For .NET, this metadata entry is called `isRawPayload`.
+When subscribing programmatically, add the additional metadata entry for `rawPayload` to allow the subscriber to receive a message that is not wrapped by a CloudEvent. For .NET, this metadata entry is called `rawPayload`.

When using raw payloads the message is always base64 encoded with content type `application/octet-stream`.
@@ -137,7 +137,7 @@ app.MapGet("/dapr/subscribe", () =>
        route = "/messages",
        metadata = new Dictionary<string, string>
        {
-            { "isRawPayload", "true" },
+            { "rawPayload", "true" },
            { "content-type", "application/json" }
        }
    }
@@ -138,35 +138,9 @@ On Windows, the environment variable needs to be set before starting the `dapr`

### Authenticate to AWS if using AWS SSO based profiles

-If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), some AWS SDKs (including the Go SDK) don't yet support this natively. There are several utilities you can use to "bridge the gap" between AWS SSO-based credentials and "legacy" credentials, such as:
-- [AwsHelper](https://pypi.org/project/awshelper/)
-- [aws-sso-util](https://github.com/benkehoe/aws-sso-util)
+If you authenticate to AWS using [AWS SSO](https://aws.amazon.com/single-sign-on/), the AWS SDK for Go (both v1 and v2) provides native support for AWS SSO credential providers. This means you can use AWS SSO profiles directly without additional utilities.

-{{< tabpane text=true >}}
-<!-- linux -->
-{{% tab "Linux/MacOS" %}}
-
-If using AwsHelper, start Dapr like this:
-
-```bash
-AWS_PROFILE=myprofile awshelper dapr run...
-```
-
-or
-
-```bash
-AWS_PROFILE=myprofile awshelper daprd...
-```
-{{% /tab %}}
-
-<!-- windows -->
-{{% tab "Windows" %}}
-
-On Windows, the environment variable needs to be set before starting the `awshelper` command; doing it inline (like in Linux/MacOS) is not supported.
-
-{{% /tab %}}
-
-{{< /tabpane >}}
+For more information about AWS SSO support in the AWS SDK for Go, see the [AWS blog post](https://aws.amazon.com/blogs/developer/aws-sso-support-in-the-aws-sdk-for-go/).

## Next steps
@@ -111,6 +111,138 @@ If you decide to generate trace headers yourself, there are three ways this can

Read [the trace context overview]({{% ref w3c-tracing-overview %}}) for more background and examples on W3C trace context and headers.

### Baggage Support

Dapr supports two distinct mechanisms for propagating W3C Baggage alongside trace context:

1. **Context Baggage (OpenTelemetry)**
   - Follows OpenTelemetry conventions with decoded values
   - Used when working with OpenTelemetry context propagation
   - Values are stored and transmitted in their original, unencoded form
   - Recommended for OpenTelemetry integrations and when working with application context

2. **Header/Metadata Baggage**
   - You must URL encode special characters (for example, `%20` for spaces, `%2F` for slashes) when setting header/metadata baggage
   - Values remain percent-encoded in transport, as required by the W3C Baggage spec
   - Values stay encoded when inspecting raw headers/metadata
   - Only OpenTelemetry APIs decode the values
   - Example: Use `serverNode=DF%2028` (not `serverNode=DF 28`) when setting header baggage

For security purposes, context baggage and header baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties.
#### Using Baggage with Dapr

You can propagate baggage using either mechanism, depending on your use case.

1. **In your application code**: Set the baggage in the context before making a Dapr API call
2. **When calling Dapr**: Pass the context to any Dapr API call
3. **Inside Dapr**: The Dapr runtime automatically picks up the baggage
4. **Propagation**: Dapr automatically propagates the baggage to downstream services, maintaining the appropriate encoding for each mechanism

Here are examples of both mechanisms:

**1. Using Context Baggage (OpenTelemetry)**

When using the OpenTelemetry SDK:

{{< tabpane text=true >}}

{{% tab header="Go" %}}

```go
import (
	"context"

	otelbaggage "go.opentelemetry.io/otel/baggage"
)

// Set baggage in context (values remain unencoded)
bag, err := otelbaggage.Parse("userId=cassie,serverNode=DF%2028")
...
ctx := otelbaggage.ContextWithBaggage(context.Background(), bag)

// Pass this context to any Dapr API call
client.InvokeMethodWithContent(ctx, "serviceB", ...)
```

**2. Using Header/Metadata Baggage**

When using gRPC metadata:

```go
import "google.golang.org/grpc/metadata"

// Set URL-encoded baggage in context
ctx = metadata.AppendToOutgoingContext(ctx,
	"baggage", "userId=cassie,serverNode=DF%2028",
)

// Pass this context to any Dapr API call
client.InvokeMethodWithContent(ctx, "serviceB", ...)
```

**3. Receiving Baggage in Target Service**

In your target service, you can access the propagated baggage:

```go
// Using OpenTelemetry (values are automatically decoded)
import "go.opentelemetry.io/otel/baggage"

bag := baggage.FromContext(ctx)
userID := bag.Member("userId").Value() // "cassie"
```

```go
// Using raw gRPC metadata (values remain percent-encoded)
import "google.golang.org/grpc/metadata"

md, _ := metadata.FromIncomingContext(ctx)
if values := md.Get("baggage"); len(values) > 0 {
	// values[0] contains the percent-encoded string you set: "userId=cassie,serverNode=DF%2028"
	// Remember: you must URL encode special characters when setting baggage

	// To decode the values, use OpenTelemetry APIs:
	bag, err := baggage.Parse(values[0])
	...
	userID := bag.Member("userId").Value() // "cassie"
}
```

*HTTP example (URL-encoded):*

```bash
curl -X POST http://localhost:3500/v1.0/invoke/serviceB/method/hello \
  -H "Content-Type: application/json" \
  -H "baggage: userId=cassie,serverNode=DF%2028" \
  -d '{"message": "Hello service B"}'
```

*gRPC example (URL-encoded):*

```go
ctx = metadata.AppendToOutgoingContext(ctx,
	"baggage", "userId=cassie,serverNode=DF%2028",
)
```

{{% /tab %}}

{{< /tabpane >}}

#### Common Use Cases

Baggage is useful for:
- Propagating user IDs or correlation IDs across services
- Passing tenant or environment information
- Maintaining consistent context across service boundaries
- Debugging and troubleshooting distributed transactions

#### Best Practices

1. **Choose the Right Mechanism**
   - Use Context Baggage when working with OpenTelemetry
   - Use Header Baggage when working directly with HTTP/gRPC

2. **Security Considerations**
   - Be mindful that baggage is propagated across service boundaries
   - Don't include sensitive information in baggage
   - Remember that context and header baggage remain separate

## Related Links

- [Observability concepts]({{% ref observability-concept.md %}})
@@ -48,7 +48,7 @@ When a request arrives without a trace ID, Dapr creates a new one. Otherwise, it

These are the specific trace context headers that are generated and propagated by Dapr for HTTP and gRPC.

{{< tabpane text=true >}}
<!-- HTTP -->

{{% tab "HTTP" %}}

Copy these headers when propagating a trace context header from an HTTP response to an HTTP request:
@@ -73,14 +73,67 @@ tracestate: congo=t61rcWkgMzE

[Learn more about the tracestate fields details](https://www.w3.org/TR/trace-context/#tracestate-header).

**Baggage Support**

Dapr supports [W3C Baggage](https://www.w3.org/TR/baggage/) for propagating key-value pairs alongside trace context through two distinct mechanisms:

1. **Context Baggage (OpenTelemetry)**
   - Follows OpenTelemetry conventions with decoded values
   - Used when propagating baggage through application context
   - Values are stored in their original, unencoded form
   - Example of how it would be printed with OpenTelemetry APIs:
     ```
     baggage: userId=cassie,serverNode=DF 28,isVIP=true
     ```

2. **HTTP Header Baggage**
   - You must URL encode special characters (for example, `%20` for spaces, `%2F` for slashes) when setting header baggage
   - Values remain percent-encoded in HTTP headers, as required by the W3C Baggage spec
   - Values stay encoded when inspecting raw headers in Dapr
   - Only OpenTelemetry APIs like `otelbaggage.Parse()` decode the values
   - Example (note the URL-encoded space `%20`):
     ```bash
     curl -X POST http://localhost:3500/v1.0/invoke/serviceB/method/hello \
       -H "Content-Type: application/json" \
       -H "baggage: userId=cassie,serverNode=DF%2028,isVIP=true" \
       -d '{"message": "Hello service B"}'
     ```

For security purposes, context baggage and header baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties in each domain.

Multiple baggage headers are supported and are combined according to the W3C specification. Dapr automatically propagates baggage across service calls while maintaining the appropriate encoding for each domain.

{{% /tab %}}

<!-- gRPC -->
{{% tab "gRPC" %}}

In gRPC API calls, trace context is passed through the `grpc-trace-bin` header.

**Baggage Support**

Dapr supports [W3C Baggage](https://www.w3.org/TR/baggage/) for propagating key-value pairs alongside trace context through two distinct mechanisms:

1. **Context Baggage (OpenTelemetry)**
   - Follows OpenTelemetry conventions with decoded values
   - Used when propagating baggage through gRPC context
   - Values are stored in their original, unencoded form
   - Example of how it would be printed with OpenTelemetry APIs:
     ```
     baggage: userId=cassie,serverNode=DF 28,isVIP=true
     ```

2. **gRPC Metadata Baggage**
   - You must URL encode special characters (for example, `%20` for spaces, `%2F` for slashes) when setting metadata baggage
   - Values remain percent-encoded in gRPC metadata
   - Example (note the URL-encoded space `%20`):
     ```
     baggage: userId=cassie,serverNode=DF%2028,isVIP=true
     ```

For security purposes, context baggage and metadata baggage are strictly separated and never merged between domains. This ensures that baggage values maintain their intended format and security properties in each domain.

Multiple baggage metadata entries are supported and are combined according to the W3C specification. Dapr automatically propagates baggage across service calls while maintaining the appropriate encoding for each domain.

{{% /tab %}}

{{< /tabpane >}}
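The W3C combination rule for repeated baggage headers is a simple comma join of their list members, which can be illustrated with a stdlib-only sketch (`combineBaggage` is a hypothetical helper, not a Dapr API):

```go
package main

import (
	"fmt"
	"strings"
)

// combineBaggage merges multiple baggage header values into one
// comma-separated list-member string, as the W3C Baggage spec defines
// for repeated headers.
func combineBaggage(values []string) string {
	return strings.Join(values, ",")
}

func main() {
	fmt.Println(combineBaggage([]string{"userId=cassie", "serverNode=DF%2028"}))
	// userId=cassie,serverNode=DF%2028
}
```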
@@ -25,6 +25,10 @@ spec:
    value: 'https://api.openai.com/v1'
  - name: cacheTTL
    value: 10m
  # - name: apiType # Optional
  #   value: 'azure'
  # - name: apiVersion # Optional
  #   value: '2025-01-01-preview'
```

{{% alert title="Warning" color="warning" %}}
@@ -37,9 +41,12 @@ The above example uses secrets as plain strings. It is recommended to use a secr
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for OpenAI. | `mykey` |
| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` |
-| `endpoint` | N | Custom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. | `https://api.openai.com/v1` |
+| `endpoint` | N | Custom API endpoint URL for OpenAI API-compatible services. If not specified, the default OpenAI API endpoint is used. Required when `apiType` is set to `azure`. | `https://api.openai.com/v1`, `https://example.openai.azure.com/` |
+| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |
+| `apiType` | N | Specifies the API provider type. Required when using a provider that does not follow the default OpenAI API endpoint conventions. | `azure` |
+| `apiVersion` | N | The API version to use. Required when `apiType` is set to `azure`. | `2025-04-01-preview` |

## Related links

- [Conversation API overview]({{% ref conversation-overview.md %}})
+- [Azure OpenAI in Azure AI Foundry Models API lifecycle](https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle)
@@ -75,33 +75,33 @@ The above example uses secrets as plain strings. It is recommended to use a secr

## Spec metadata fields

-| Field | Required | Details | Example |
-|--------------------|:--------:|---------|---------|
-| redisHost | Y | Connection-string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379`
-| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"`
-| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"`
-| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{% ref "#setup-redis" %}}) | `"true"`, `"false"` |
-| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"`
-| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10`
-| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000`
-| failover | N | Property to enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"`
-| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `"mymaster"`
-| redeliverInterval | N | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`
-| processingTimeout | N | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"`
-| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"`
-| redisDB | N | Database selected after connecting to redis. If `"redisType"` is `"cluster"` this option is ignored. Defaults to `"0"`. | `"0"`
-| redisMaxRetries | N | Alias for `maxRetries`. If both values are set `maxRetries` is ignored. | `"5"`
-| redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"`
-| redisMaxRetryInterval | N | Alias for `maxRetryBackoff`. If both values are set `maxRetryBackoff` is ignored. | `"5s"`
-| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"`
-| readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"`
-| writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Defaults is readTimeout. | `"3s"`
-| poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"`
-| poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"`
-| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"`
-| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"`
-| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"`
-| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"`
+| Field | Required | Details | Example |
+|-----------------------|:--------:|---------|---------|
+| redisHost | Y | Connection string for the redis host | `localhost:6379`, `redis-master.default.svc.cluster.local:6379` |
+| redisPassword | N | Password for Redis host. No Default. Can be `secretKeyRef` to use a secret reference | `""`, `"KeFg23!"` |
+| redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"` |
+| useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{% ref "#setup-redis" %}}) | `"true"`, `"false"` |
+| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` |
+| maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10` |
+| maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000` |
+| failover | N | Enable failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"` |
+| sentinelMasterName | N | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/) | `"mymaster"` |
+| redeliverInterval | N | The interval between checking for pending messages for redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"` |
+| processingTimeout | N | The amount of time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"` |
+| redisType | N | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"` |
+| redisDB | N | Database selected after connecting to redis. If `"redisType"` is `"cluster"` this option is ignored. Defaults to `"0"`. | `"0"` |
+| redisMaxRetries | N | Alias for `maxRetries`. If both values are set `maxRetries` is ignored. | `"5"` |
+| redisMinRetryInterval | N | Minimum backoff for redis commands between each retry. Default is `"8ms"`; `"-1"` disables backoff. | `"8ms"` |
+| redisMaxRetryInterval | N | Alias for `maxRetryBackoff`. If both values are set `maxRetryBackoff` is ignored. | `"5s"` |
+| dialTimeout | N | Dial timeout for establishing new connections. Defaults to `"5s"`. | `"5s"` |
+| readTimeout | N | Timeout for socket reads. If reached, redis commands will fail with a timeout instead of blocking. Defaults to `"3s"`, `"-1"` for no timeout. | `"3s"` |
+| writeTimeout | N | Timeout for socket writes. If reached, redis commands will fail with a timeout instead of blocking. Default is readTimeout. | `"3s"` |
+| poolSize | N | Maximum number of socket connections. Default is 10 connections per every CPU as reported by runtime.NumCPU. | `"20"` |
+| poolTimeout | N | Amount of time client waits for a connection if all connections are busy before returning an error. Default is readTimeout + 1 second. | `"5s"` |
+| maxConnAge | N | Connection age at which the client retires (closes) the connection. Default is to not close aged connections. | `"30m"` |
+| minIdleConns | N | Minimum number of idle connections to keep open in order to avoid the performance degradation associated with creating new connections. Defaults to `"0"`. | `"2"` |
+| idleCheckFrequency | N | Frequency of idle checks made by idle connections reaper. Default is `"1m"`. `"-1"` disables idle connections reaper. | `"-1"` |
+| idleTimeout | N | Amount of time after which the client closes idle connections. Should be less than server's timeout. Default is `"5m"`. `"-1"` disables idle timeout check. | `"10m"` |

## Setup Redis
@@ -39,7 +39,7 @@ spec:
  metadata:
  - name: url
    value: "file://router.wasm"
-  - guestConfig
+  - name: guestConfig
    value: '{"environment":"production"}'
```
@@ -95,6 +95,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec
| subscribeMode | N | Subscription mode indicates the cursor persistence, durable subscription retains messages and persists the current position. Default: `"durable"` | `"durable"`, `"non_durable"` |
| partitionKey | N | Sets the key of the message for routing policy. Default: `""` | |
| maxConcurrentHandlers | N | Defines the maximum number of concurrent message handlers. Default: `100` | `10` |
+| replicateSubscriptionState | N | Enable replication of subscription state across geo-replicated Pulsar clusters. Default: `"false"` | `"true"`, `"false"` |
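`maxConcurrentHandlers` caps how many message handlers run at once. The component's internals aren't shown here, but the generic bounded-concurrency pattern the setting implies can be sketched with a buffered channel used as a semaphore:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	const maxConcurrentHandlers = 3 // the cap the setting configures
	sem := make(chan struct{}, maxConcurrentHandlers)

	var inFlight atomic.Int64
	var mu sync.Mutex
	ok := true

	var wg sync.WaitGroup
	for i := 0; i < 20; i++ { // 20 simulated incoming messages
		wg.Add(1)
		go func() {
			defer wg.Done()
			sem <- struct{}{}        // acquire a handler slot
			defer func() { <-sem }() // release it when done
			if n := inFlight.Add(1); n > maxConcurrentHandlers {
				mu.Lock()
				ok = false
				mu.Unlock()
			}
			inFlight.Add(-1)
		}()
	}
	wg.Wait()
	fmt.Println(ok) // true: concurrency never exceeded the cap
}
```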

### Authenticate using Token
@@ -66,6 +66,8 @@ spec:
    value: {podName}
  - name: heartBeat
    value: 10s
+  - name: publishMessagePropertiesToMetadata
+    value: "true"
```

{{% alert title="Warning" color="warning" %}}
@@ -102,7 +104,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| clientKey | Required for using TLS | TLS client key in PEM format. Must be used with `clientCert`. Can be `secretKeyRef` to use a secret reference. | `"-----BEGIN RSA PRIVATE KEY-----\n<base64-encoded PKCS8>\n-----END RSA PRIVATE KEY-----"`
| clientName | N | This RabbitMQ [client-provided connection name](https://www.rabbitmq.com/connections.html#client-provided-names) is a custom identifier. If set, the identifier is mentioned in RabbitMQ server log entries and management UI. Can be set to {uuid}, {podName}, or {appID}, which is replaced by the Dapr runtime with the real value. | `"app1"`, `{uuid}`, `{podName}`, `{appID}`
| heartBeat | N | Defines the heartbeat interval with the server, detecting the aliveness of the peer TCP connection with the RabbitMQ server. Defaults to `10s`. | `"10s"`
+| publishMessagePropertiesToMetadata | N | Whether to publish AMQP message properties (headers, message ID, and so on) to the metadata. | `"true"`, `"false"`

## Communication using TLS
@@ -475,6 +477,11 @@ spec:
    singleActiveConsumer: "true"
```

## Publishing message properties to metadata

To enable [message properties](https://www.rabbitmq.com/docs/publishers#message-properties) being published in the metadata, set the `publishMessagePropertiesToMetadata` field to `"true"` in the component spec.
This includes properties such as message ID, timestamp, and headers in the metadata of the published message.

## Related links

- [Basic schema for a Dapr component]({{% ref component-schema %}})
@@ -1,3 +1,25 @@
{{ with .Site.Params.search.algolia }}
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@docsearch/css@3" />

<script
  async
  src="https://widget.kapa.ai/kapa-widget.bundle.js"
  data-website-id="8e5bac24-7723-4b77-9b1b-99d6e058a864"
  data-project-name="Dapr Docs AI"
  data-modal-title="Dapr Docs AI assistant"
  data-project-color="#ff4e00"
  data-project-logo="https://cdn.prod.website-files.com/66965adecd57031ed9ad181e/689f979646c1922bbc244a8b_dapr-ai-icon-transparent.png"
  data-modal-open-by-default="false"
  data-modal-disclaimer="Answers are based on the Dapr docs, relevant websites, and GitHub repositories. Always double check the results and please provide feedback with a 👍 or 👎 so we continue to improve this service."
  data-modal-example-questions="How do I get started with Dapr?, How do I use Kafka with Dapr Pub/Sub?, How do I run Dapr in production?, How do I build agentic AI with Dapr?"
  data-modal-example-questions-title="Try asking:"
  data-modal-ask-ai-input-placeholder="Ask me anything Dapr-related..."
  data-answer-cta-button-enabled="true"
  data-answer-cta-button-link="https://bit.ly/dapr-discord"
  data-answer-cta-button-text="Join us on Discord"
  data-modal-header-bg-color="#0D2192"
  data-modal-title-color="#FFFFFF"
  data-modal-image-height="32"
  data-modal-image-width="32"
></script>
{{ end }}
@@ -1 +1 @@
-Subproject commit d15a0234049c6ae0fd9d4e23af1fc0650c0c8a8a
+Subproject commit 241a646a2037d4e91d3192dcbaf1f128b15de185

@@ -1 +1 @@
-Subproject commit f4ba09fae50c634cad11050ff05485cb9ef65bf7
+Subproject commit 6dd434913b6fb41f6ede006c64c01a35a02c458f

@@ -1 +1 @@
-Subproject commit f73d57eb27028bf41e9ab1fde3b50586ab5de919
+Subproject commit 3bb91e505e34ef3b7bc3325be853de8d0491c431

@@ -1 +1 @@
-Subproject commit 26b3527e688751a2ffef812bec95790535218506
+Subproject commit 26e8be8931aed2404e0e382b6c61264d1b64f0de

@@ -1 +1 @@
-Subproject commit b2f2988f397a34159e67fb30c6a0e2f414a60350
+Subproject commit 5882d52961cee7cb50d07c9a47c902f317dff396

@@ -1 +1 @@
-Subproject commit 407447816c2107860af98897802c4491306e95d0
+Subproject commit 4475ed57cfdcb912a828f43cace8c0bea3eb99e1