Merge branch 'v1.15' into cb
|
@ -13,7 +13,7 @@ jobs:
|
|||
validate:
|
||||
runs-on: ubuntu-latest
|
||||
env:
|
||||
PYTHON_VER: 3.7
|
||||
PYTHON_VER: 3.12
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- name: Check Microsoft URLs do not pin localized versions
|
||||
|
@ -27,7 +27,7 @@ jobs:
|
|||
exit 1
|
||||
fi
|
||||
- name: Set up Python ${{ env.PYTHON_VER }}
|
||||
uses: actions/setup-python@v2
|
||||
uses: actions/setup-python@v5
|
||||
with:
|
||||
python-version: ${{ env.PYTHON_VER }}
|
||||
- name: Install dependencies
|
||||
|
|
|
@ -31,3 +31,4 @@ Dapr provides the following building blocks:
|
|||
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and provide consistency guarantees.
|
||||
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | `/v1.0-alpha1/crypto` | The Cryptography API enables you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your application.
|
||||
| [**Jobs**]({{< ref "jobs-overview.md" >}}) | `/v1.0-alpha1/jobs` | The Jobs API enables you to schedule and orchestrate jobs. Example scenarios include: <ul><li>Schedule batch processing jobs to run every business day</li><li>Schedule various maintenance scripts to perform clean-ups</li><li>Schedule ETL jobs to run at specific times (hourly, daily) to fetch new data, process it, and update the data warehouse with the latest information.</li></ul>
|
||||
| [**Conversation**]({{< ref "conversation-overview.md" >}}) | `/v1.0-alpha1/conversation` | The Conversation API enables you to supply prompts to converse with different large language models (LLMs) and includes features such as prompt caching and personally identifiable information (PII) obfuscation.
|
|
@ -122,11 +122,18 @@ Lock components are used as a distributed lock to provide mutually exclusive acc
|
|||
|
||||
### Cryptography
|
||||
|
||||
[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform crypographic operations, including encrypting and decrypting messages, without exposing keys to your application.
|
||||
[Cryptography]({{< ref cryptography-overview.md >}}) components are used to perform cryptographic operations, including encrypting and decrypting messages, without exposing keys to your application.
|
||||
|
||||
- [List of supported cryptography components]({{< ref supported-cryptography >}})
|
||||
- [Cryptography implementations](https://github.com/dapr/components-contrib/tree/master/crypto)
|
||||
|
||||
### Conversation
|
||||
|
||||
Dapr provides developers a way to abstract interactions with large language models (LLMs) with built-in security and reliability features. Use [conversation]({{< ref conversation-overview.md >}}) components to send prompts to different LLMs, along with the conversation context.
|
||||
|
||||
- [List of supported conversation components]({{< ref supported-conversation >}})
|
||||
- [Conversation implementations](https://github.com/dapr/components-contrib/tree/main/conversation)
|
||||
|
||||
### Middleware
|
||||
|
||||
Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into the HTTP request processing pipeline. Middleware can perform additional actions on an HTTP request (such as authentication, encryption, and message transformation) before the request is routed to the user code, or the response is returned to the client. The middleware components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block.
|
||||
|
@ -136,4 +143,4 @@ Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into
|
|||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Since pluggable components are not required to be written in Go, they follow a different implementation process than built-in Dapr components. For more information on developing built-in components, read [developing new components](https://github.com/dapr/components-contrib/blob/master/docs/developing-component.md).
|
||||
{{% /alert %}}
|
|
@ -13,7 +13,9 @@ The Placement service Docker container is started automatically as part of [`dap
|
|||
|
||||
## Kubernetes mode
|
||||
|
||||
The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
|
||||
The Placement service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Placement in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
|
||||
|
||||
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
|
||||
|
||||
## Placement tables
|
||||
|
||||
|
|
|
@ -11,13 +11,21 @@ The diagram below shows how the Scheduler service is used via the jobs API when
|
|||
|
||||
<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
|
||||
|
||||
## Actor reminders
|
||||
|
||||
Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
|
||||
|
||||
When you deploy Dapr v1.15, any _existing_ actor reminders are migrated from the Placement service to the Scheduler service as a one-time operation for each actor type. You can prevent this migration by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
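As a minimal sketch of opting out, you can disable the feature in the application configuration (the configuration name here is illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myactorapp-config
spec:
  features:
  - name: SchedulerReminders
    enabled: false
```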
|
||||
|
||||
## Self-hosted mode
|
||||
|
||||
The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
|
||||
|
||||
## Kubernetes mode
|
||||
|
||||
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
|
||||
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
|
||||
|
||||
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
@ -55,6 +55,7 @@ Each of these building block APIs is independent, meaning that you can use any n
|
|||
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | The distributed lock API enables your application to acquire a lock for any resource that gives it exclusive access until either the lock is released by the application, or a lease timeout occurs.
|
||||
| [**Cryptography**]({{< ref "cryptography-overview.md" >}}) | The cryptography API provides an abstraction layer on top of security infrastructure such as key vaults. It contains APIs that allow you to perform cryptographic operations, such as encrypting and decrypting messages, without exposing keys to your applications.
|
||||
| [**Jobs**]({{< ref "jobs-overview.md" >}}) | The jobs API enables you to schedule jobs at specific times or intervals.
|
||||
| [**Conversation**]({{< ref "conversation-overview.md" >}}) | The conversation API enables you to abstract the complexities of interacting with large language models (LLMs) and includes features such as prompt caching and personally identifiable information (PII) obfuscation. Using [conversation components]({{< ref supported-conversation >}}), you can supply prompts to converse with different LLMs.
|
||||
|
||||
### Cross-cutting APIs
|
||||
|
||||
|
|
|
@ -107,6 +107,10 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
|
|||
|
||||
## Actor reminders
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers, but unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.
|
||||
|
||||
You can create a persistent reminder for an actor by making an HTTP/gRPC request to Dapr as shown below, or via the Dapr SDK.
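For example, registering a reminder over HTTP has roughly this shape (the actor type, actor ID, reminder name, and timing values here are all illustrative):

```
POST http://localhost:3500/v1.0/actors/MyActorType/actor-1/reminders/checkRebate
Content-Type: application/json

{
  "dueTime": "0h0m3s0ms",
  "period": "R10/PT10S"
}
```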
|
||||
|
@ -148,7 +152,9 @@ If an invocation of the method fails, the timer is not removed. Timers are only
|
|||
|
||||
## Reminder data serialization format
|
||||
|
||||
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for reminders data which, depending on throughput and size of the payload, can result in significant performance improvements, giving developers a higher throughput and lower latency. Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
|
||||
Actor reminder data is serialized to JSON by default. Dapr v1.13 onwards supports a protobuf serialization format for internal reminders data for workflow via both the Placement and Scheduler services. Depending on throughput and size of the payload, this can result in significant performance improvements, giving developers a higher throughput and lower latency.
|
||||
|
||||
Another benefit is storing smaller data in the actor underlying database, which can result in cost optimizations when using some cloud databases. A restriction with using protobuf serialization is that the reminder data can no longer be queried.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Protobuf serialization will become the default format in Dapr 1.14.
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Conversation"
|
||||
linkTitle: "Conversation"
|
||||
weight: 130
|
||||
description: "Utilize prompts with Large Language Models (LLMs)"
|
||||
---
|
|
@ -0,0 +1,43 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Conversation overview"
|
||||
linkTitle: "Overview"
|
||||
weight: 1000
|
||||
description: "Overview of the conversation API building block"
|
||||
---
|
||||
|
||||
{{% alert title="Alpha" color="primary" %}}
|
||||
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
|
||||
Using the Dapr conversation API, you can reduce the complexity of interacting with Large Language Models (LLMs) and enable critical performance and security functionality with features like prompt caching and personally identifiable information (PII) data obfuscation.
|
||||
|
||||
## Features
|
||||
|
||||
### Prompt caching
|
||||
|
||||
To significantly reduce latency and cost, frequent prompts are stored in a cache to be reused, instead of reprocessing the information for every new request. Prompt caching optimizes performance by storing and reusing prompts that are often repeated across multiple API calls.
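Conceptually, prompt caching behaves like memoization keyed by the prompt. The following sketch is a conceptual illustration only (not Dapr's actual implementation); the class and function names are invented for this example:

```python
# Conceptual model of prompt caching: responses for repeated prompts are
# served from a cache instead of invoking the LLM again.
class CachingConversationClient:
    def __init__(self, llm_call):
        self._llm_call = llm_call  # function that actually queries the LLM
        self._cache = {}           # prompt -> cached response
        self.llm_invocations = 0   # counts how often the LLM is really called

    def converse(self, prompt):
        if prompt not in self._cache:
            self.llm_invocations += 1
            self._cache[prompt] = self._llm_call(prompt)
        return self._cache[prompt]

# Usage with a stub LLM:
client = CachingConversationClient(lambda p: f"echo: {p}")
first = client.converse("hello world")
second = client.converse("hello world")  # served from the cache
```

Identical prompts hit the LLM only once, which is where the latency and cost savings come from.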
|
||||
|
||||
### Personally identifiable information (PII) obfuscation
|
||||
|
||||
The PII obfuscation feature identifies and removes any PII from a conversation response. This feature protects your privacy by eliminating sensitive details like names, addresses, phone numbers, or other details that could be used to identify an individual.
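As a conceptual sketch (not Dapr's implementation), obfuscation amounts to detecting sensitive patterns and replacing them with placeholders; real scrubbers cover many more categories than the two shown here:

```python
import re

# Illustrative-only PII scrubbing: replace email addresses and
# US-style phone numbers with placeholders.
PII_PATTERNS = {
    "<EMAIL>": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "<PHONE>": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub_pii(text):
    # Apply each pattern in turn, substituting its placeholder.
    for placeholder, pattern in PII_PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

scrubbed = scrub_pii("Contact Jane at jane@example.com or 555-123-4567.")
```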
|
||||
|
||||
## Try out conversation
|
||||
|
||||
### Quickstarts and tutorials
|
||||
|
||||
Want to put the Dapr conversation API to the test? Walk through the following quickstart and tutorials to see it in action:
|
||||
|
||||
| Quickstart/tutorial | Description |
|
||||
| ------------------- | ----------- |
|
||||
| [Conversation quickstart](todo) | . |
|
||||
|
||||
### Start using the conversation API directly in your app
|
||||
|
||||
Want to skip the quickstarts? Not a problem. You can try out the conversation building block directly in your application. After [Dapr is installed]({{< ref "getting-started/_index.md" >}}), you can begin using the conversation API starting with [the how-to guide]({{< ref howto-conversation-layer.md >}}).
|
||||
|
||||
## Next steps
|
||||
|
||||
- [How-To: Converse with an LLM using the conversation API]({{< ref howto-conversation-layer.md >}})
|
||||
- [Conversation API components]({{< ref supported-conversation >}})
|
|
@ -0,0 +1,137 @@
|
|||
---
|
||||
type: docs
|
||||
title: "How-To: Converse with an LLM using the conversation API"
|
||||
linkTitle: "How-To: Converse"
|
||||
weight: 2000
|
||||
description: "Learn how to abstract the complexities of interacting with large language models"
|
||||
---
|
||||
|
||||
{{% alert title="Alpha" color="primary" %}}
|
||||
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Let's get started using the [conversation API]({{< ref conversation-overview.md >}}). In this guide, you'll learn how to:
|
||||
|
||||
- Set up one of the available Dapr components (echo) that work with the conversation API.
|
||||
- Add the conversation client to your application.
|
||||
|
||||
## Set up the conversation component
|
||||
|
||||
Create a new configuration file called `conversation.yaml` and save it to a components or config sub-folder in your application directory.
|
||||
|
||||
Select your [preferred conversation component spec]({{< ref supported-conversation >}}) for your `conversation.yaml` file.
|
||||
|
||||
For this scenario, we use a simple echo component.
|
||||
|
||||
```yml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: echo
|
||||
spec:
|
||||
type: conversation.echo
|
||||
version: v1
|
||||
```
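With the component in place, you can start your application with the Dapr CLI pointing at that folder (the app ID, folder name, and run command here are illustrative):

```sh
dapr run --app-id conversation-app --resources-path ./components -- go run main.go
```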
|
||||
|
||||
## Connect the conversation client
|
||||
|
||||
|
||||
{{< tabs ".NET" "Go" "Rust" >}}
|
||||
|
||||
|
||||
<!-- .NET -->
|
||||
{{% codetab %}}
|
||||
|
||||
```csharp
|
||||
todo
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Go -->
|
||||
{{% codetab %}}
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"log"
|
||||
dapr "github.com/dapr/go-sdk/client"
|
||||
)
|
||||
|
||||
func main() {
|
||||
client, err := dapr.NewClient()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
input := dapr.ConversationInput{
|
||||
Message: "hello world",
|
||||
// Role: nil, // Optional
|
||||
// ScrubPII: nil, // Optional
|
||||
}
|
||||
|
||||
fmt.Printf("conversation input: %s\n", input.Message)
|
||||
|
||||
var conversationComponent = "echo"
|
||||
|
||||
request := dapr.NewConversationRequest(conversationComponent, []dapr.ConversationInput{input})
|
||||
|
||||
resp, err := client.ConverseAlpha1(context.Background(), request)
|
||||
if err != nil {
|
||||
log.Fatalf("err: %v", err)
|
||||
}
|
||||
|
||||
fmt.Printf("conversation output: %s\n", resp.Outputs[0].Result)
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
<!-- Rust -->
|
||||
{{% codetab %}}
|
||||
|
||||
```rust
|
||||
use dapr::client::{ConversationInputBuilder, ConversationRequestBuilder};
|
||||
use std::thread;
|
||||
use std::time::Duration;
|
||||
|
||||
type DaprClient = dapr::Client<dapr::client::TonicClient>;
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() -> Result<(), Box<dyn std::error::Error>> {
|
||||
// Sleep to allow for the server to become available
|
||||
thread::sleep(Duration::from_secs(5));
|
||||
|
||||
// Set the Dapr address
|
||||
let address = "https://127.0.0.1".to_string();
|
||||
|
||||
let mut client = DaprClient::connect(address).await?;
|
||||
|
||||
let input = ConversationInputBuilder::new("hello world").build();
|
||||
|
||||
let conversation_component = "echo";
|
||||
|
||||
let request =
|
||||
ConversationRequestBuilder::new(conversation_component, vec![input.clone()]).build();
|
||||
|
||||
println!("conversation input: {:?}", input.message);
|
||||
|
||||
let response = client.converse_alpha1(request).await?;
|
||||
|
||||
println!("conversation output: {:?}", response.outputs[0].result);
|
||||
Ok(())
|
||||
}
|
||||
```
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Next steps
|
||||
|
||||
- [Conversation API reference guide]({{< ref conversation_api.md >}})
|
||||
- [Available conversation components]({{< ref supported-conversation >}})
|
|
@ -59,10 +59,6 @@ The jobs API provides several features to make it easy for you to schedule jobs.
|
|||
|
||||
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by a single Scheduler service instance.
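As a hedged sketch (the job name, schedule, and payload are all illustrative), scheduling a job through the alpha `/v1.0-alpha1/jobs` endpoint looks like:

```
POST http://localhost:3500/v1.0-alpha1/jobs/prod-db-backup
Content-Type: application/json

{
  "schedule": "@every 24h",
  "data": {
    "task": "db-backup"
  }
}
```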
|
||||
|
||||
### Actor reminders
|
||||
|
||||
Actors have reminders, but the Placement service implementation presents some scalability limitations. You can make reminders more scalable by using [`SchedulerReminders`]({{< ref support-preview-features.md >}}), which is set in the configuration for your actor application.
|
||||
|
||||
## Try out the jobs API
|
||||
|
||||
You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-and-handle-triggered-jobs.md >}}).
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 5000
|
|||
description: "Learn how to develop and author workflows"
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
This article provides a high-level overview of how to author workflows that are executed by the Dapr Workflow engine.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 6000
|
|||
description: Manage and run workflows
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Now that you've [authored the workflow and its activities in your application]({{< ref howto-author-workflow.md >}}), you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}).
|
||||
|
||||
{{< tabs Python JavaScript ".NET" Java Go HTTP >}}
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 4000
|
|||
description: "The Dapr Workflow engine architecture"
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. This article describes:
|
||||
|
||||
- The architecture of the Dapr Workflow engine
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 2000
|
|||
description: "Learn more about the Dapr Workflow features and concepts"
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Now that you've learned about the [workflow building block]({{< ref workflow-overview.md >}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 1000
|
|||
description: "Overview of Dapr Workflow"
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations]({{< ref "#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way. Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices. Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings.
|
||||
|
||||
The durable, resilient Dapr Workflow capability:
|
||||
|
@ -94,7 +90,7 @@ Want to put workflows to the test? Walk through the following quickstart and tut
|
|||
| Quickstart/tutorial | Description |
|
||||
| ------------------- | ----------- |
|
||||
| [Workflow quickstart]({{< ref workflow-quickstart.md >}}) | Run a workflow application with four workflow activities to see Dapr Workflow in action |
|
||||
| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `DaprClient` package. |
|
||||
| [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `dapr-ext-workflow` package. |
|
||||
| [Workflow JavaScript SDK example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | Learn how to create a Dapr Workflow and invoke it using the JavaScript SDK. |
|
||||
| [Workflow .NET SDK example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | Learn how to create a Dapr Workflow and invoke it using ASP.NET Core web APIs. |
|
||||
| [Workflow Java SDK example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | Learn how to create a Dapr Workflow and invoke it using the Java `io.dapr.workflows` package. |
|
||||
|
@ -107,24 +103,6 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi
|
|||
## Limitations
|
||||
|
||||
- **State stores:** Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. For example, Azure Cosmos DB limits a single transactional operation to 100 items, so no more than 100 states can be stored in a single request.
|
||||
- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, it is recommended to use a maximum of two instances of Dapr per workflow application. This limitation is resolved in Dapr 1.14.x when enabling the scheduler service.
|
||||
|
||||
To enable the scheduler service to work for Dapr Workflows, make sure you're using Dapr 1.14.x or later and assign the following configuration to your app:
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: schedulerconfig
|
||||
spec:
|
||||
tracing:
|
||||
samplingRate: "1"
|
||||
features:
|
||||
- name: SchedulerReminders
|
||||
enabled: true
|
||||
```
|
||||
|
||||
See more info about [enabling preview features]({{<ref preview-features>}}).
|
||||
|
||||
## Watch the demo
|
||||
|
||||
|
|
|
@ -8,24 +8,70 @@ aliases:
|
|||
- /developing-applications/integrations/authenticating/authenticating-aws/
|
||||
---
|
||||
|
||||
All Dapr components using various AWS services (DynamoDB, SQS, S3, etc) use a standardized set of attributes for configuration via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
|
||||
Dapr components leveraging AWS services (for example, DynamoDB, SQS, S3) utilize standardized configuration attributes via the AWS SDK. [Learn more about how the AWS SDK handles credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials).
|
||||
|
||||
Since you can configure the AWS SDK using the default provider chain, all of the following attributes are optional. Test the component configuration and inspect the log output from the Dapr runtime to ensure that components initialize correctly.
|
||||
You can configure authentication using the AWS SDK’s default provider chain or one of the predefined AWS authentication profiles outlined below. Verify your component configuration by testing and inspecting Dapr runtime logs to confirm proper initialization.
|
||||
|
||||
| Attribute | Description |
|
||||
| --------- | ----------- |
|
||||
| `region` | Which AWS region to connect to. In some situations (when running Dapr in self-hosted mode, for example), this flag can be provided by the environment variable `AWS_REGION`. Since Dapr sidecar injection doesn't allow configuring environment variables on the Dapr sidecar, it is recommended to always set the `region` attribute in the component spec. |
|
||||
| `endpoint` | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). |
|
||||
| `accessKey` | AWS Access key id. |
|
||||
| `secretKey` | AWS Secret access key. Use together with `accessKey` to explicitly specify credentials. |
|
||||
| `sessionToken` | AWS Session token. Used together with `accessKey` and `secretKey`. When using a regular IAM user's access key and secret, a session token is normally not required. |
|
||||
### Terminology
|
||||
- **ARN (Amazon Resource Name):** A unique identifier used to specify AWS resources. Format: `arn:partition:service:region:account-id:resource`. Example: `arn:aws:iam::123456789012:role/example-role`.
|
||||
- **IAM (Identity and Access Management):** AWS's service for managing access to AWS resources securely.
|
||||
|
||||
### Authentication Profiles
|
||||
|
||||
#### Access Key ID and Secret Access Key
|
||||
Use static Access Key and Secret Key credentials, either through component metadata fields or via [default AWS configuration](https://docs.aws.amazon.com/sdkref/latest/guide/creds-config-files.html).
|
||||
|
||||
{{% alert title="Important" color="warning" %}}
|
||||
You **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using:
|
||||
- When running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes)
|
||||
- If using a node/pod that has already been attached to an IAM policy defining access to AWS resources
|
||||
Prefer loading credentials via the default AWS configuration in scenarios such as:
|
||||
- Running the Dapr sidecar (`daprd`) with your application on EKS (AWS Kubernetes).
|
||||
- Using nodes or pods attached to IAM policies that define AWS resource access.
|
||||
{{% /alert %}}
|
||||
|
||||
| Attribute | Required | Description | Example |
|
||||
| --------- | ----------- | ----------- | ----------- |
|
||||
| `region` | Y | AWS region to connect to. | "us-east-1" |
|
||||
| `accessKey` | N | AWS Access key id. Will be required in Dapr v1.17. | "AKIAIOSFODNN7EXAMPLE" |
|
||||
| `secretKey` | N | AWS Secret access key, used alongside `accessKey`. Will be required in Dapr v1.17. | "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" |
|
||||
| `sessionToken` | N | AWS Session token, used with `accessKey` and `secretKey`. Often unnecessary for IAM user keys. | |
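As a hedged sketch of this profile (the component name is illustrative; the credential values are the AWS documentation's example keys and must be replaced with your own, ideally via a secret store rather than plain text):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mydynamodb
spec:
  type: state.aws.dynamodb
  version: v1
  metadata:
  - name: region
    value: "us-east-1"
  - name: accessKey
    value: "AKIAIOSFODNN7EXAMPLE"
  - name: secretKey
    value: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```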
|
||||
|
||||
#### Assume IAM Role
|
||||
This profile allows Dapr to assume a specific IAM Role. Typically used when the Dapr sidecar runs on EKS or nodes/pods linked to IAM policies. Currently supported by Kafka and PostgreSQL components.
|
||||
|
||||
| Attribute | Required | Description | Example |
|
||||
| --------- | ----------- | ----------- | ----------- |
|
||||
| `region` | Y | AWS region to connect to. | "us-east-1" |
|
||||
| `assumeRoleArn` | N | ARN of the IAM role with AWS resource access. Will be required in Dapr v1.17. | "arn:aws:iam::123456789:role/mskRole" |
|
||||
| `sessionName` | N | Session name for role assumption. Default is `"DaprDefaultSession"`. | "MyAppSession" |
|
||||
|
||||
#### Credentials from Environment Variables
|
||||
Authenticate using [environment variables](https://docs.aws.amazon.com/sdkref/latest/guide/environment-variables.html). This is especially useful for Dapr in self-hosted mode where sidecar injectors don’t configure environment variables.
|
||||
|
||||
There are no metadata fields required for this authentication profile.
|
||||
|
||||
#### IAM Roles Anywhere
|
||||
[IAM Roles Anywhere](https://aws.amazon.com/iam/roles-anywhere/) extends IAM role-based authentication to external workloads. It eliminates the need for long-term credentials by using cryptographically signed certificates, anchored in a trust relationship using Dapr PKI. Dapr SPIFFE identity X.509 certificates are used to authenticate to AWS services, and Dapr handles credential rotation at half the session lifespan.
|
||||
|
||||
To configure this authentication profile:
|
||||
1. Create a Trust Anchor in the trusting AWS account using the Dapr certificate bundle as an `External certificate bundle`.
|
||||
2. Create an IAM role with the resource permissions policy necessary, as well as a trust entity for the Roles Anywhere AWS service. Here, you specify SPIFFE identities allowed.
|
||||
3. Create an IAM Profile under the Roles Anywhere service, linking the IAM Role.
|
||||
|
||||
| Attribute | Required | Description | Example |
|
||||
| --------- | ----------- | ----------- | ----------- |
|
||||
| `trustAnchorArn` | Y | ARN of the Trust Anchor in the AWS account granting trust to the Dapr Certificate Authority. | arn:aws:rolesanywhere:us-west-1:012345678910:trust-anchor/01234568-0123-0123-0123-012345678901 |
|
||||
| `trustProfileArn` | Y | ARN of the AWS IAM Profile in the trusting AWS account. | arn:aws:rolesanywhere:us-west-1:012345678910:profile/01234568-0123-0123-0123-012345678901 |
|
||||
| `assumeRoleArn` | Y | ARN of the AWS IAM role to assume in the trusting AWS account. | arn:aws:iam::012345678910:role/exampleIAMRoleName |
|
||||
|
||||
### Additional Fields
|
||||
|
||||
Some AWS components include additional optional fields:
|
||||
|
||||
| Attribute | Required | Description | Example |
|
||||
| --------- | ----------- | ----------- | ----------- |
|
||||
| `endpoint` | N | The endpoint is normally handled internally by the AWS SDK. However, in some situations it might make sense to set it locally - for example if developing against [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). | |
|
||||
|
||||
Furthermore, non-native AWS components such as Kafka and PostgreSQL that support AWS authentication profiles have metadata fields to trigger the AWS authentication logic. Be sure to check specific component documentation.
|
||||
|
||||
## Alternatives to explicitly specifying credentials in component manifest files
|
||||
|
||||
In production scenarios, it is recommended to use a solution such as:
|
||||
|
|
|
@ -6,10 +6,6 @@ weight: 73
|
|||
description: Get started with the Dapr Workflow building block
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Redis is currently used as the state store component for Workflows in the Quickstarts. However, Redis does not support transaction rollbacks and should not be used in production as an actor state store.
|
||||
{{% /alert %}}
|
||||
|
|
|
@ -8,14 +8,14 @@ description: "Learn how to control how many requests and events can invoke your
|
|||
|
||||
Typically, in distributed computing, you may only want to allow for a given number of requests to execute concurrently. Using Dapr's `app-max-concurrency`, you can control how many requests and events can invoke your application simultaneously.
|
||||
|
||||
Default `app-max-concurreny` is set to `-1`, meaning no concurrency.
|
||||
Default `app-max-concurrency` is set to `-1`, meaning no concurrency limit is enforced.
|
||||
|
||||
## Different approaches
|
||||
|
||||
While this guide focuses on `app-max-concurrency`, you can also limit request rate per second using the **`middleware.http.ratelimit`** middleware. However, it's important to understand the difference between the two approaches:
|
||||
|
||||
- `middleware.http.ratelimit`: Time bound and limits the number of requests per second
|
||||
- `app-max-concurrency`: Specifies the number of concurrent requests (and events) at any point of time.
|
||||
- `app-max-concurrency`: Specifies the max number of concurrent requests (and events) at any point of time.
|
||||
|
||||
See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}) for more information about that approach.
|
||||
|
||||
|
@ -46,7 +46,7 @@ To set concurrency limits with the Dapr CLI for running on your local dev machin
|
|||
dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py
|
||||
```
|
||||
|
||||
The above example effectively turns your app into a single concurrent service.
|
||||
The above example effectively turns your app into a sequential processing service.
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
|
|
|
@ -231,6 +231,19 @@ You can install Dapr on Kubernetes using a Helm v3 chart.
|
|||
--wait
|
||||
```
|
||||
|
||||
To install with select services in **high availability** mode, scaled independently of the global HA setting:
|
||||
|
||||
```bash
|
||||
helm upgrade --install dapr dapr/dapr \
|
||||
--version={{% dapr-latest-version short="true" %}} \
|
||||
--namespace dapr-system \
|
||||
--create-namespace \
|
||||
--set global.ha.enabled=false \
|
||||
--set dapr_scheduler.ha=true \
|
||||
--set dapr_placement.ha=true \
|
||||
--wait
|
||||
```
|
||||
|
||||
See [Guidelines for production ready deployments on Kubernetes]({{< ref kubernetes-production.md >}}) for more information on installing and upgrading Dapr using Helm.
|
||||
|
||||
### (optional) Install the Dapr dashboard as part of the control plane
|
||||
|
|
|
@ -172,8 +172,8 @@ helm upgrade --install dapr dapr/dapr \
|
|||
|
||||
## Ephemeral Storage
|
||||
|
||||
Scheduler can be optionally made to use Ephemeral storage, which is in-memory storage which is **not** resilient to restarts, i.e. all Job data will be lost after a Scheduler restart.
|
||||
This is useful for deployments where storage is not available or required, or for testing purposes.
|
||||
When running in non-HA mode, the Scheduler can optionally be made to use ephemeral storage, which is in-memory storage that is **not** resilient to restarts. For example, all job data is lost after a Scheduler restart.
|
||||
This is useful in non-production deployments or for testing where storage is not available or required.
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated without the persistent volume.
|
||||
|
|
|
@ -95,6 +95,25 @@ For a new Dapr deployment, HA mode can be set with both:
|
|||
|
||||
For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}).
|
||||
|
||||
### Individual service HA Helm configuration
|
||||
|
||||
You can configure HA mode via Helm across all services by setting the `global.ha.enabled` flag to `true`. By default, `--set global.ha.enabled=true` is fully respected and cannot be overridden, making it impossible to simultaneously have either the placement or scheduler service as a single instance.
|
||||
|
||||
> **Note:** HA for scheduler and placement services is not the default setting.
|
||||
|
||||
To scale scheduler and placement to three instances independently of the `global.ha.enabled` flag, set `global.ha.enabled` to `false` and `dapr_scheduler.ha` and `dapr_placement.ha` to `true`. For example:
|
||||
|
||||
```bash
|
||||
helm upgrade --install dapr dapr/dapr \
|
||||
--version={{% dapr-latest-version short="true" %}} \
|
||||
--namespace dapr-system \
|
||||
--create-namespace \
|
||||
--set global.ha.enabled=false \
|
||||
--set dapr_scheduler.ha=true \
|
||||
--set dapr_placement.ha=true \
|
||||
--wait
|
||||
```
|
||||
|
||||
## Setting cluster critical priority class name for control plane services
|
||||
|
||||
In some scenarios, nodes may have memory and/or cpu pressure and the Dapr control plane pods might get selected
|
||||
|
|
|
@ -70,6 +70,38 @@ spec:
|
|||
enabled: false
|
||||
```
|
||||
|
||||
## Configuring metrics for error codes
|
||||
|
||||
You can enable additional metrics for [Dapr API error codes](https://docs.dapr.io/reference/api/error_codes/) by setting `spec.metrics.recordErrorCodes` to `true`. Dapr APIs which communicate back to their caller may return standardized error codes. As described in the [Dapr development docs](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md), a new metric called `error_code_total` is recorded, which allows monitoring of error codes triggered by application, code, and category. See [the `errorcodes` package](https://github.com/dapr/dapr/blob/master/pkg/messages/errorcodes/errorcodes.go) for specific codes and categories.
|
||||
|
||||
Example configuration:
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Configuration
|
||||
metadata:
|
||||
name: tracing
|
||||
namespace: default
|
||||
spec:
|
||||
metrics:
|
||||
enabled: true
|
||||
recordErrorCodes: true
|
||||
```
|
||||
|
||||
Example metric:
|
||||
```json
|
||||
{
|
||||
"app_id": "publisher-app",
|
||||
"category": "state",
|
||||
"dapr_io_enabled": "true",
|
||||
"error_code": "ERR_STATE_STORE_NOT_CONFIGURED",
|
||||
"instance": "10.244.1.64:9090",
|
||||
"job": "kubernetes-service-endpoints",
|
||||
"namespace": "my-app",
|
||||
"node": "my-node",
|
||||
"service": "publisher-app-dapr"
|
||||
}
|
||||
```
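As a hedged example, assuming Prometheus scrapes these metrics, error counts can be aggregated per code and application with a query like the following. Note that, depending on your scrape configuration, the exported metric name may carry a `dapr_` prefix:

```
sum by (error_code, app_id) (rate(error_code_total[5m]))
```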
|
||||
|
||||
## Optimizing HTTP metrics reporting with path matching
|
||||
|
||||
When invoking Dapr using HTTP, metrics are created for each requested method by default. This can result in a high number of metrics, known as high cardinality, which can impact memory usage and CPU.
|
||||
|
|
|
@ -96,7 +96,7 @@ spec:
|
|||
policy: constant
|
||||
duration: 5s
|
||||
maxRetries: 3
|
||||
matches:
|
||||
matching:
|
||||
httpStatusCodes: "429,500-599" # retry the HTTP status codes in this range. All others are not retried.
|
||||
gRPCStatusCodes: "1-4,8-11,13,14" # retry gRPC status codes in these ranges and separate single codes.
|
||||
```
|
||||
|
|
|
@ -15,13 +15,13 @@ description: "List of current alpha and beta APIs"
|
|||
| Bulk Publish | [Bulk publish proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L59) | `v1.0-alpha1/publish/bulk` | The bulk publish API allows you to publish multiple messages to a topic in a single request. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
|
||||
| Bulk Subscribe | [Bulk subscribe proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/appcallback.proto#L57) | N/A | The bulk subscribe application callback receives multiple messages from a topic in a single call. | [Bulk Publish and Subscribe API]({{< ref "pubsub-bulk.md" >}}) | v1.10 |
|
||||
| Cryptography | [Crypto proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L118) | `v1.0-alpha1/crypto` | The cryptography API enables you to perform **high level** cryptography operations for encrypting and decrypting messages. | [Cryptography API]({{< ref "cryptography-overview.md" >}}) | v1.11 |
|
||||
| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L198-L204) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
|
||||
| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L212-219) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{< ref "jobs-overview.md" >}}) | v1.14 |
|
||||
| Conversation | [Conversation proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L221-222) | `v1.0-alpha1/conversation` | Converse between different large language models using the conversation API. | [Conversation API]({{< ref "conversation-overview.md" >}}) | v1.15 |
|
||||
|
||||
|
||||
## Beta APIs
|
||||
|
||||
| Building block/API | gRPC | HTTP | Description | Documentation | Version introduced |
|
||||
| ------------------ | ---- | ---- | ----------- | ------------- | ------------------ |
|
||||
| Workflow | [Workflow proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L151) | `/v1.0-beta1/workflow` | The workflow API enables you to define long running, persistent processes or data flows. | [Workflow API]({{< ref "workflow-overview.md" >}}) | v1.10 |
|
||||
No current beta APIs.
|
||||
|
||||
## Related links
|
||||
|
||||
|
|
|
@ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a
|
|||
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
|
||||
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |
|
||||
| **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 |
|
||||
| **Scheduler Actor Reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, Scheduler actor reminders (actor reminders stored in the Scheduler control plane service as opposed to the Placement control plane service actor reminder system) are enabled through a preview feature and needs a feature flag. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 |
|
||||
| **Scheduler Actor Reminders** | Scheduler actor reminders are actor reminders stored in the Scheduler control plane service, as opposed to the Placement control plane service actor reminder system. The `SchedulerReminders` preview feature defaults to `true`, but you can disable Scheduler actor reminders by setting it to `false`. | `SchedulerReminders`| [Scheduler actor reminders]({{< ref "scheduler.md#actor-reminders" >}}) | v1.14 |
|
|
@ -0,0 +1,74 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Conversation API reference"
|
||||
linkTitle: "Conversation API"
|
||||
description: "Detailed documentation on the conversation API"
|
||||
weight: 1400
|
||||
---
|
||||
|
||||
{{% alert title="Alpha" color="primary" %}}
|
||||
The conversation API is currently in [alpha]({{< ref "certification-lifecycle.md#certification-levels" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Dapr provides an API to interact with Large Language Models (LLMs) and enables critical performance and security functionality with features like prompt caching and PII data obfuscation.
|
||||
|
||||
## Converse
|
||||
|
||||
This endpoint lets you converse with LLMs.
|
||||
|
||||
```
|
||||
POST /v1.0-alpha1/conversation/<llm-name>/converse
|
||||
```
|
||||
|
||||
### URL parameters
|
||||
|
||||
| Parameter | Description |
|
||||
| --------- | ----------- |
|
||||
| `llm-name` | The name of the LLM component. [See a list of all available conversation components.]({{< ref supported-conversation >}})
|
||||
|
||||
### Request body
|
||||
|
||||
| Field | Description |
|
||||
| --------- | ----------- |
|
||||
| `conversationContext` | The ID of an existing chat to continue, so that context is preserved across requests. |
|
||||
| `inputs` | Inputs for the conversation, supplied as an array of one or more strings. |
|
||||
| `parameters` | Additional parameters to pass to the LLM. |
|
||||
|
||||
|
||||
### Request content
|
||||
|
||||
```json
REQUEST = {
  "inputs": ["what is Dapr", "Why use Dapr"],
  "parameters": {}
}
```
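For illustration, a request can be sent with curl. The component name `<llm-name>` and the port `3500` are placeholders for your own setup:

```bash
curl -X POST http://localhost:3500/v1.0-alpha1/conversation/<llm-name>/converse \
  -H "Content-Type: application/json" \
  -d '{"inputs": ["what is Dapr", "Why use Dapr"], "parameters": {}}'
```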
|
||||
|
||||
### HTTP response codes
|
||||
|
||||
Code | Description
|
||||
---- | -----------
|
||||
`202` | Accepted
|
||||
`400` | Request was malformed
|
||||
`500` | Request formatted correctly, error in Dapr code or underlying component
|
||||
|
||||
### Response content
|
||||
|
||||
```json
RESPONSE = {
  "outputs": [
    {
      "result": "Dapr is a distributed application runtime ...",
      "parameters": {}
    },
    {
      "result": "Dapr can help developers ...",
      "parameters": {}
    }
  ]
}
```
|
||||
|
||||
## Next steps
|
||||
|
||||
[Conversation API overview]({{< ref conversation-overview.md >}})
|
|
@ -20,7 +20,7 @@ This endpoint lets you encrypt a value provided as a byte array using a specifie
|
|||
### HTTP Request
|
||||
|
||||
```
|
||||
PUT http://localhost:<daprPort>/v1.0/crypto/<crypto-store-name>/encrypt
|
||||
PUT http://localhost:<daprPort>/v1.0-alpha1/crypto/<crypto-store-name>/encrypt
|
||||
```
|
||||
|
||||
#### URL Parameters
|
||||
|
@ -59,7 +59,7 @@ returns an array of bytes with the encrypted payload.
|
|||
|
||||
### Examples
|
||||
```shell
|
||||
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/encrypt \
|
||||
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/encrypt \
|
||||
-X PUT \
|
||||
-H "dapr-key-name: myCryptoKey" \
|
||||
-H "dapr-key-wrap-algorithm: aes-gcm" \
|
||||
|
@ -81,7 +81,7 @@ This endpoint lets you decrypt a value provided as a byte array using a specifie
|
|||
#### HTTP Request
|
||||
|
||||
```
|
||||
PUT curl http://localhost:3500/v1.0/crypto/<crypto-store-name>/decrypt
|
||||
PUT http://localhost:3500/v1.0-alpha1/crypto/<crypto-store-name>/decrypt
|
||||
```
|
||||
|
||||
#### URL Parameters
|
||||
|
@ -116,7 +116,7 @@ returns an array of bytes representing the decrypted payload.
|
|||
|
||||
### Examples
|
||||
```bash
|
||||
curl http://localhost:3500/v1.0/crypto/myAzureKeyVault/decrypt \
|
||||
curl http://localhost:3500/v1.0-alpha1/crypto/myAzureKeyVault/decrypt \
|
||||
-X PUT \
|
||||
-H "dapr-key-name: myCryptoKey" \
|
||||
-H "Content-Type: application/octet-stream" \
|
||||
|
|
|
@ -7,6 +7,7 @@ weight: 1400
|
|||
---
|
||||
|
||||
For HTTP calls made to the Dapr runtime, when an error is encountered, an error JSON is returned in the HTTP response body. The JSON contains an error code and a descriptive error message, e.g.
|
||||
|
||||
```
|
||||
{
|
||||
"errorCode": "ERR_STATE_GET",
|
||||
|
@ -14,36 +15,142 @@ For http calls made to Dapr runtime, when an error is encountered, an error json
|
|||
}
|
||||
```
|
||||
|
||||
Following table lists the error codes returned by Dapr runtime:
|
||||
The following tables list the error codes returned by Dapr runtime:
|
||||
|
||||
| Error Code | Description |
|
||||
|-----------------------------------|-------------|
|
||||
| ERR_ACTOR_INSTANCE_MISSING | Error getting an actor instance. This means that actor is now hosted in some other service replica.
|
||||
| ERR_ACTOR_RUNTIME_NOT_FOUND | Error getting the actor instance.
|
||||
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor.
|
||||
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor.
|
||||
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor.
|
||||
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor.
|
||||
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor.
|
||||
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor.
|
||||
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor.
|
||||
| ERR_ACTOR_STATE_GET | Error getting the state for an actor.
|
||||
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally.
|
||||
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime.
|
||||
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message.
|
||||
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls.
|
||||
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope.
|
||||
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found.
|
||||
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured.
|
||||
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support.
|
||||
| ERR_STATE_GET | Error getting a state for state store.
|
||||
| ERR_STATE_DELETE | Error deleting a state from state store.
|
||||
| ERR_STATE_SAVE | Error saving a state in state store.
|
||||
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding.
|
||||
| ERR_MALFORMED_REQUEST | Error with a malformed request.
|
||||
| ERR_DIRECT_INVOKE | Error in direct invocation.
|
||||
| ERR_DESERIALIZE_HTTP_BODY | Error deserializing an HTTP request body.
|
||||
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured.
|
||||
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found.
|
||||
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready.
|
||||
| ERR_METADATA_GET | Error parsing the Metadata information.
|
||||
### Actors API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------- | ------------------------------------------ |
|
||||
| ERR_ACTOR_INSTANCE_MISSING | Error when an actor instance is missing. |
|
||||
| ERR_ACTOR_RUNTIME_NOT_FOUND | Error actor runtime not found. |
|
||||
| ERR_ACTOR_REMINDER_CREATE | Error creating a reminder for an actor. |
|
||||
| ERR_ACTOR_REMINDER_DELETE | Error deleting a reminder for an actor. |
|
||||
| ERR_ACTOR_TIMER_CREATE | Error creating a timer for an actor. |
|
||||
| ERR_ACTOR_TIMER_DELETE | Error deleting a timer for an actor. |
|
||||
| ERR_ACTOR_REMINDER_GET | Error getting a reminder for an actor. |
|
||||
| ERR_ACTOR_INVOKE_METHOD | Error invoking a method on an actor. |
|
||||
| ERR_ACTOR_STATE_DELETE | Error deleting the state for an actor. |
|
||||
| ERR_ACTOR_STATE_GET | Error getting the state for an actor. |
|
||||
| ERR_ACTOR_STATE_TRANSACTION_SAVE | Error storing actor state transactionally. |
|
||||
| ERR_ACTOR_REMINDER_NON_HOSTED | Error setting a reminder for an actor type that is not hosted. |
|
||||
|
||||
### Workflows API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------- | ----------------------------------------------------------- |
|
||||
| ERR_GET_WORKFLOW | Error getting workflow. |
|
||||
| ERR_START_WORKFLOW | Error starting the workflow. |
|
||||
| ERR_PAUSE_WORKFLOW | Error pausing the workflow. |
|
||||
| ERR_RESUME_WORKFLOW | Error resuming the workflow. |
|
||||
| ERR_TERMINATE_WORKFLOW | Error terminating the workflow. |
|
||||
| ERR_PURGE_WORKFLOW | Error purging workflow. |
|
||||
| ERR_RAISE_EVENT_WORKFLOW | Error raising an event within the workflow. |
|
||||
| ERR_WORKFLOW_COMPONENT_MISSING | Error when a workflow component is missing a configuration. |
|
||||
| ERR_WORKFLOW_COMPONENT_NOT_FOUND | Error when a workflow component is not found. |
|
||||
| ERR_WORKFLOW_EVENT_NAME_MISSING | Error when the event name for a workflow is missing. |
|
||||
| ERR_WORKFLOW_NAME_MISSING | Error when the workflow name is missing. |
|
||||
| ERR_INSTANCE_ID_INVALID | Error invalid workflow instance ID provided. |
|
||||
| ERR_INSTANCE_ID_NOT_FOUND | Error workflow instance ID not found. |
|
||||
| ERR_INSTANCE_ID_PROVIDED_MISSING | Error workflow instance ID was provided but missing. |
|
||||
| ERR_INSTANCE_ID_TOO_LONG | Error workflow instance ID exceeds allowable length. |
|
||||
|
||||
### State Management API
|
||||
|
||||
| Error Code | Description |
|
||||
| ------------------------------------- | ------------------------------------------------------------------------- |
|
||||
| ERR_STATE_STORE_NOT_FOUND | Error referencing a state store not found. |
|
||||
| ERR_STATE_STORES_NOT_CONFIGURED | Error no state stores configured. |
|
||||
| ERR_NOT_SUPPORTED_STATE_OPERATION | Error transaction requested on a state store with no transaction support. |
|
||||
| ERR_STATE_GET | Error getting a state for state store. |
|
||||
| ERR_STATE_DELETE | Error deleting a state from state store. |
|
||||
| ERR_STATE_SAVE | Error saving a state in state store. |
|
||||
| ERR_STATE_TRANSACTION | Error encountered during state transaction. |
|
||||
| ERR_STATE_BULK_GET | Error performing bulk retrieval of state entries. |
|
||||
| ERR_STATE_QUERY | Error querying the state store. |
|
||||
| ERR_STATE_STORE_NOT_CONFIGURED | Error state store is not configured. |
|
||||
| ERR_STATE_STORE_NOT_SUPPORTED | Error state store is not supported. |
|
||||
| ERR_STATE_STORE_TOO_MANY_TRANSACTIONS | Error exceeded maximum allowable transactions. |
|
||||
|
||||
### Configuration API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------------- | -------------------------------------------- |
|
||||
| ERR_CONFIGURATION_GET | Error retrieving configuration. |
|
||||
| ERR_CONFIGURATION_STORE_NOT_CONFIGURED | Error configuration store is not configured. |
|
||||
| ERR_CONFIGURATION_STORE_NOT_FOUND | Error configuration store not found. |
|
||||
| ERR_CONFIGURATION_SUBSCRIBE | Error subscribing to a configuration. |
|
||||
| ERR_CONFIGURATION_UNSUBSCRIBE | Error unsubscribing from a configuration. |
|
||||
|
||||
### Crypto API
|
||||
|
||||
| Error Code | Description |
|
||||
| ----------------------------------- | ------------------------------------------ |
|
||||
| ERR_CRYPTO | General crypto building block error. |
|
||||
| ERR_CRYPTO_KEY | Error related to a crypto key. |
|
||||
| ERR_CRYPTO_PROVIDER_NOT_FOUND | Error specified crypto provider not found. |
|
||||
| ERR_CRYPTO_PROVIDERS_NOT_CONFIGURED | Error no crypto providers configured. |
|
||||
|
||||
### Secrets API
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------------- | ---------------------------------------------------- |
|
||||
| ERR_SECRET_STORES_NOT_CONFIGURED | Error that no secret store is configured. |
|
||||
| ERR_SECRET_STORE_NOT_FOUND | Error that specified secret store is not found. |
|
||||
| ERR_SECRET_GET | Error retrieving the specified secret. |
|
||||
| ERR_PERMISSION_DENIED | Error access denied due to insufficient permissions. |
|
||||
|
||||
### Pub/Sub API
|
||||
|
||||
| Error Code | Description |
|
||||
| --------------------------- | -------------------------------------------------------- |
|
||||
| ERR_PUBSUB_NOT_FOUND | Error referencing the Pub/Sub component in Dapr runtime. |
|
||||
| ERR_PUBSUB_PUBLISH_MESSAGE | Error publishing a message. |
|
||||
| ERR_PUBSUB_FORBIDDEN | Error message forbidden by access controls. |
|
||||
| ERR_PUBSUB_CLOUD_EVENTS_SER | Error serializing Pub/Sub event envelope. |
|
||||
| ERR_PUBSUB_EMPTY | Error Pub/Sub name is empty. |
|
||||
| ERR_PUBSUB_NOT_CONFIGURED | Error Pub/Sub component is not configured. |
|
||||
| ERR_PUBSUB_REQUEST_METADATA | Error with metadata in Pub/Sub request. |
|
||||
| ERR_PUBSUB_EVENTS_SER | Error serializing Pub/Sub events. |
|
||||
| ERR_PUBLISH_OUTBOX | Error publishing message to the outbox. |
|
||||
| ERR_TOPIC_NAME_EMPTY | Error topic name for Pub/Sub message is empty. |
|
||||
|
||||
### Conversation API
|
||||
|
||||
| Error Code | Description |
|
||||
| ------------------------------- | ----------------------------------------------- |
|
||||
| ERR_INVOKE_OUTPUT_BINDING | Error invoking an output binding. |
|
||||
| ERR_DIRECT_INVOKE | Error in direct invocation. |
|
||||
| ERR_CONVERSATION_INVALID_PARMS | Error invalid parameters for conversation. |
|
||||
| ERR_CONVERSATION_INVOKE | Error invoking the conversation. |
|
||||
| ERR_CONVERSATION_MISSING_INPUTS | Error missing required inputs for conversation. |
|
||||
| ERR_CONVERSATION_NOT_FOUND | Error conversation not found. |
|
||||
|
||||
### Distributed Lock API
|
||||
|
||||
| Error Code | Description |
|
||||
| ----------------------------- | ----------------------------------- |
|
||||
| ERR_TRY_LOCK | Error attempting to acquire a lock. |
|
||||
| ERR_UNLOCK | Error attempting to release a lock. |
|
||||
| ERR_LOCK_STORE_NOT_CONFIGURED | Error lock store is not configured. |
|
||||
| ERR_LOCK_STORE_NOT_FOUND | Error lock store not found. |
|
||||
|
||||
### Healthz
|
||||
|
||||
| Error Code | Description |
|
||||
| ----------------------------- | --------------------------------------------------------------- |
|
||||
| ERR_HEALTH_NOT_READY | Error that Dapr is not ready. |
|
||||
| ERR_HEALTH_APPID_NOT_MATCH | Error the app-id does not match expected value in health check. |
|
||||
| ERR_OUTBOUND_HEALTH_NOT_READY | Error outbound connection health is not ready. |
|
||||
|
||||
### Common
|
||||
|
||||
| Error Code | Description |
|
||||
| -------------------------- | ------------------------------------------------ |
|
||||
| ERR_API_UNIMPLEMENTED | Error API is not implemented. |
|
||||
| ERR_APP_CHANNEL_NIL | Error application channel is nil. |
|
||||
| ERR_BAD_REQUEST | Error client request is badly formed or invalid. |
|
||||
| ERR_BODY_READ | Error reading body. |
|
||||
| ERR_INTERNAL | Internal server error encountered. |
|
||||
| ERR_MALFORMED_REQUEST | Error with a malformed request. |
|
||||
| ERR_MALFORMED_REQUEST_DATA | Error request data is malformed. |
|
||||
| ERR_MALFORMED_RESPONSE | Error response data is malformed. |
|
||||
|
|
|
@ -6,10 +6,6 @@ description: "Detailed documentation on the workflow API"
|
|||
weight: 300
|
||||
---
|
||||
|
||||
{{% alert title="Note" color="primary" %}}
|
||||
Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "workflow-overview.md#limitations" >}}).
|
||||
{{% /alert %}}
|
||||
|
||||
Dapr provides users with the ability to interact with workflows and comes with a built-in `dapr` component.
|
||||
|
||||
## Start workflow request
|
||||
|
|
|
@ -64,10 +64,10 @@ The AWS authentication token will be dynamically rotated before it's expiration
|
|||
|--------|:--------:|---------|---------|
|
||||
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
|
||||
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by dbname with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
|
||||
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
|
||||
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
|
||||
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
|
||||
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
|
||||
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
|
||||
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
|
||||
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
|
||||
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
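For reference, a minimal sketch of these fields in a component manifest follows. The PostgreSQL state store type, component name, and host are placeholder assumptions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pgstate
spec:
  type: state.postgresql
  version: v1
  metadata:
  - name: useAWSIAM
    value: "true"
  - name: connectionString
    value: "host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"
  - name: awsRegion
    value: "us-east-1"
```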
|
||||
|
||||
### Other metadata options
|
||||
|
||||
|
|
|
@ -90,10 +90,10 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |

### Other metadata options
@ -0,0 +1,12 @@
---
type: docs
title: "Conversation component specs"
linkTitle: "Conversation"
weight: 9000
description: The supported conversation components that interface with Dapr
no_list: true
---

{{< partial "components/description.html" >}}

{{< partial "components/conversation.html" >}}
@ -0,0 +1,42 @@
---
type: docs
title: "Anthropic"
linkTitle: "Anthropic"
description: Detailed information on the Anthropic conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: anthropic
spec:
  type: conversation.anthropic
  metadata:
  - name: key
    value: "mykey"
  - name: model
    value: claude-3-5-sonnet-20240620
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for Anthropic. | `"mykey"` |
| `model` | N | The Anthropic LLM to use. Defaults to `claude-3-5-sonnet-20240620`. | `claude-3-5-sonnet-20240620` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@ -0,0 +1,42 @@
---
type: docs
title: "AWS Bedrock"
linkTitle: "AWS Bedrock"
description: Detailed information on the AWS Bedrock conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: awsbedrock
spec:
  type: conversation.aws.bedrock
  metadata:
  - name: endpoint
    value: "http://localhost:4566"
  - name: model
    value: amazon.titan-text-express-v1
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `endpoint` | N | AWS endpoint for the component to use, for connecting to emulators. Not recommended for production AWS use. | `http://localhost:4566` |
| `model` | N | The LLM to use. Defaults to Bedrock's default provider model from Amazon. | `amazon.titan-text-express-v1` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@ -0,0 +1,42 @@
---
type: docs
title: "Huggingface"
linkTitle: "Huggingface"
description: Detailed information on the Huggingface conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: huggingface
spec:
  type: conversation.huggingface
  metadata:
  - name: key
    value: mykey
  - name: model
    value: meta-llama/Meta-Llama-3-8B
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for Huggingface. | `mykey` |
| `model` | N | The Huggingface LLM to use. Defaults to `meta-llama/Meta-Llama-3-8B`. | `meta-llama/Meta-Llama-3-8B` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@ -0,0 +1,42 @@
---
type: docs
title: "Mistral"
linkTitle: "Mistral"
description: Detailed information on the Mistral conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mistral
spec:
  type: conversation.mistral
  metadata:
  - name: key
    value: mykey
  - name: model
    value: open-mistral-7b
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for Mistral. | `mykey` |
| `model` | N | The Mistral LLM to use. Defaults to `open-mistral-7b`. | `open-mistral-7b` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@ -0,0 +1,42 @@
---
type: docs
title: "OpenAI"
linkTitle: "OpenAI"
description: Detailed information on the OpenAI conversation component
---

## Component format

A Dapr `conversation.yaml` component file has the following structure:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: openai
spec:
  type: conversation.openai
  metadata:
  - name: key
    value: mykey
  - name: model
    value: gpt-4-turbo
  - name: cacheTTL
    value: 10m
```

{{% alert title="Warning" color="warning" %}}
The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets, as described [here]({{< ref component-secrets.md >}}).
{{% /alert %}}

## Spec metadata fields

| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
| `key` | Y | API key for OpenAI. | `mykey` |
| `model` | N | The OpenAI LLM to use. Defaults to `gpt-4-turbo`. | `gpt-4-turbo` |
| `cacheTTL` | N | A time-to-live value for a prompt cache to expire. Uses Golang duration format. | `10m` |

## Related links

- [Conversation API overview]({{< ref conversation-overview.md >}})
@ -104,12 +104,12 @@ spec:
| oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` |
| oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` |
| oidcExtensions | N | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` |
| awsRegion | N | The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
| awsAccessKey | N | AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"`
| awsRegion | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS region where the Kafka cluster is deployed to. Required when `authType` is set to `awsiam` | `us-west-1` |
| awsAccessKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account. | `"accessKey"`
| awsSecretKey | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key. | `"secretKey"`
| awsSessionToken | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"`
| awsIamRoleArn | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'assumeRoleArn' instead. IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"`
| awsStsSessionName | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionName' instead. Represents the session name for assuming a role. | `"DaprDefaultSession"`
| schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` |
| schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` |
| schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` |
@ -332,7 +332,7 @@ spec:
Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses the AWS SDK to generate auth tokens to authenticate.
{{% alert title="Note" color="primary" %}}
The only required metadata field is `awsRegion`. If no `awsAccessKey` and `awsSecretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
The only required metadata field is `region`. If no `accessKey` and `secretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster.
{{% /alert %}}

```yaml
@ -352,18 +352,18 @@ spec:
    value: "my-dapr-app-id"
  - name: authType # Required.
    value: "awsiam"
  - name: awsRegion # Required.
  - name: region # Required.
    value: "us-west-1"
  - name: awsAccessKey # Optional.
  - name: accessKey # Optional.
    value: <AWS_ACCESS_KEY>
  - name: awsSecretKey # Optional.
  - name: secretKey # Optional.
    value: <AWS_SECRET_KEY>
  - name: awsSessionToken # Optional.
  - name: sessionToken # Optional.
    value: <AWS_SESSION_KEY>
  - name: awsIamRoleArn # Optional.
  - name: assumeRoleArn # Optional.
    value: "arn:aws:iam::123456789:role/mskRole"
  - name: awsStsSessionName # Optional.
    value: "MSKSASLDefaultSession"
  - name: sessionName # Optional.
    value: "DaprDefaultSession"
```

### Communication using TLS
@ -540,6 +540,8 @@ app.include_router(router)
```

{{% /codetab %}}
{{< /tabs >}}

## Receiving message headers with special characters

The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors.
@ -68,7 +68,8 @@ spec:
  #   value: 5
  # - name: concurrencyMode # Optional
  #   value: "single"
  # - name: concurrencyLimit # Optional
  #   value: "0"
```
@ -98,6 +99,7 @@ The above example uses secrets as plain strings. It is recommended to use [a sec
| disableDeleteOnRetryLimit | N | When set to true, after retrying and failing `messageRetryLimit` times to process a message, reset the message visibility timeout so that other consumers can try processing it, instead of deleting the message from SQS (the default behavior). Default: `"false"` | `"true"`, `"false"`
| assetsManagementTimeoutSeconds | N | Amount of time in seconds for an AWS asset management operation before it times out and is cancelled. Asset management operations are any operations performed on STS, SNS, and SQS, except the message publish and consume operations that implement the default Dapr component retry behavior. The value can be set to any non-negative float/integer. Default: `5` | `0.5`, `10`
| concurrencyMode | N | When messages are received in bulk from SQS, call the subscriber sequentially ("single" message at a time), or concurrently (in "parallel"). Default: `"parallel"` | `"single"`, `"parallel"`
| concurrencyLimit | N | Defines the maximum number of concurrent workers handling messages. This value is ignored when `concurrencyMode` is set to `"single"`. To avoid limiting the number of concurrent workers, set this to `0`. Default: `0` | `100`

### Additional info
@ -38,6 +38,8 @@ spec:
    value: "true"
  - name: disableBatching
    value: "false"
  - name: receiverQueueSize
    value: "1000"
  - name: <topic-name>.jsonschema # sets a json schema validation for the configured topic
    value: |
      {
@ -78,6 +80,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec
| namespace | N | The administrative unit of the topic, which acts as a grouping mechanism for related topics. Default: `"default"` | `"default"`
| persistent | N | Pulsar supports two kinds of topics: [persistent](https://pulsar.apache.org/docs/en/concepts-architecture-overview#persistent-storage) and [non-persistent](https://pulsar.apache.org/docs/en/concepts-messaging/#non-persistent-topics). With persistent topics, all messages are durably persisted on disks (if the broker is not standalone, messages are durably persisted on multiple disks), whereas data for non-persistent topics is not persisted to storage disks.
| disableBatching | N | Disable batching. When batching is enabled, the default batch delay is 10 ms and the default batch size is 1000 messages. Setting `disableBatching: true` makes the producer send messages individually. Default: `"false"` | `"true"`, `"false"`|
| receiverQueueSize | N | Sets the size of the consumer receiver queue. Controls how many messages can be accumulated by the consumer before they are explicitly read by Dapr. Default: `"1000"` | `"1000"` |
| batchingMaxPublishDelay | N | Sets the time period within which sent messages are batched, if batching is enabled. If set to a non-zero value, messages are queued until this time interval elapses, or until batchingMaxMessages (see below) or batchingMaxSize (see below) is reached. There are two valid formats: a fraction with a unit suffix, or a plain number that is interpreted as milliseconds. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Default: `"10ms"` | `"10ms"`, `"10"`|
| batchingMaxMessages | N | Sets the maximum number of messages permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached, batchingMaxSize (see below) is reached, or the batch interval has elapsed. Default: `"1000"` | `"1000"`|
| batchingMaxSize | N | Sets the maximum number of bytes permitted in a batch. If set to a value greater than 1, messages are queued until this threshold is reached, batchingMaxMessages (see above) is reached, or the batch interval has elapsed. Default: `"128KB"` | `"131072"`|
@ -94,9 +94,9 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsRegion` | N | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |

### Other metadata options
@ -94,10 +94,10 @@ The AWS authentication token will be dynamically rotated before it's expiration
|--------|:--------:|---------|---------|
| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` |
| `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain an already existing user, which corresponds to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. This connection string should not contain any password. Note that the database name field is denoted by `dbname` with AWS. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`|
| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |
| `awsRegion` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'region' instead. The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` |
| `awsAccessKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'accessKey' instead. AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` |
| `awsSecretKey` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'secretKey' instead. The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` |
| `awsSessionToken` | N | This maintains backwards compatibility with existing fields. It will be deprecated as of Dapr 1.17. Use 'sessionToken' instead. AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` |

### Other metadata options
@ -32,6 +32,9 @@ spec:
      duration: <REPLACE-WITH-VALUE>
      maxInterval: <REPLACE-WITH-VALUE>
      maxRetries: <REPLACE-WITH-VALUE>
      matching:
        httpStatusCodes: <REPLACE-WITH-VALUE>
        gRPCStatusCodes: <REPLACE-WITH-VALUE>
  circuitBreakers:
    circuitBreakerName: # Replace with any unique name
      maxRequests: <REPLACE-WITH-VALUE>
@ -0,0 +1,5 @@
- component: AWS Bedrock
  link: aws-bedrock
  state: Alpha
  version: v1
  since: "1.15"
@ -0,0 +1,20 @@
- component: Anthropic
  link: anthropic
  state: Alpha
  version: v1
  since: "1.15"
- component: Huggingface
  link: hugging-face
  state: Alpha
  version: v1
  since: "1.15"
- component: Mistral
  link: mistral
  state: Alpha
  version: v1
  since: "1.15"
- component: OpenAI
  link: openai
  state: Alpha
  version: v1
  since: "1.15"
@ -0,0 +1,28 @@
{{- $groups := dict
  "Generic" $.Site.Data.components.conversation.generic
  "Amazon Web Services (AWS)" $.Site.Data.components.conversation.aws
}}

{{ range $group, $components := $groups }}
<h3>{{ $group }}</h3>
<table width="100%">
  <tr>
    <th>Component</th>
    <th>Status</th>
    <th>Component version</th>
    <th>Since runtime version</th>
  </tr>
  {{ range sort $components "component" }}
  <tr>
    <td><a href="/reference/components-reference/supported-conversation/{{ .link }}/">{{ .component }}</a>
    </td>
    <td>{{ .state }}</td>
    <td>{{ .version }}</td>
    <td>{{ .since }}</td>
  </tr>
  {{ end }}
</table>
{{ end }}

{{ partial "components/componenttoc.html" . }}
Before Width: | Height: | Size: 107 KiB After Width: | Height: | Size: 133 KiB |
Before Width: | Height: | Size: 43 KiB After Width: | Height: | Size: 52 KiB |
Before Width: | Height: | Size: 160 KiB After Width: | Height: | Size: 203 KiB |
Before Width: | Height: | Size: 163 KiB After Width: | Height: | Size: 205 KiB |
Before Width: | Height: | Size: 37 KiB After Width: | Height: | Size: 145 KiB |