Merge branch 'v1.16' into add-clickhouse-state-store-docs-v1.16

This commit is contained in:
Mehmet TOSUN 2025-09-11 09:07:32 +03:00 committed by GitHub
commit daf8373924
4 changed files with 179 additions and 38 deletions


@ -14,7 +14,12 @@ Dapr's conversation API reduces the complexity of securely and reliably interact
<img src="/images/conversation-overview.png" width=800 alt="Diagram showing the flow of a user's app communicating with Dapr's LLM components.">
In addition to enabling critical performance and security functionality (like [prompt caching]({{% ref "#prompt-caching" %}}) and [PII scrubbing]({{% ref "#personally-identifiable-information-pii-obfuscation" %}})), the conversation API also provides:
- **Tool calling capabilities** that allow LLMs to interact with external functions and APIs, enabling more sophisticated AI applications
- **OpenAI-compatible interface** for seamless integration with existing AI workflows and tools
You can also pair the conversation API with Dapr functionalities, like:
- Resiliency circuit breakers and retries to circumvent limit and token errors, or
- Middleware to authenticate requests coming to and from the LLM
@ -45,6 +50,17 @@ The PII scrubber obfuscates the following user information:
- SHA-256 hex
- MD5 hex
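As a conceptual illustration, the two hash formats above can be matched with simple regular expressions. This Python sketch is illustrative only and is not Dapr's scrubber implementation, which covers more categories than shown here:

```python
import re

# Hypothetical patterns for two of the formats the scrubber handles.
PATTERNS = {
    "SHA256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "MD5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def scrub(text: str) -> str:
    """Replace recognized hex digests with a <CATEGORY> placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```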
### Tool calling support
The conversation API supports advanced tool calling capabilities that allow LLMs to interact with external functions and APIs. This enables you to build sophisticated AI applications that can:
- Execute custom functions based on user requests
- Integrate with external services and databases
- Provide dynamic, context-aware responses
- Create multi-step workflows and automation
Tool calling follows [OpenAI's function calling format](https://platform.openai.com/docs/guides/function-calling), making it easy to integrate with existing AI development workflows and tools.
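The round trip this enables can be sketched in Python. Everything below is illustrative: `fake_converse` stands in for a real POST to the converse endpoint, and `echo_time` is a hypothetical local tool:

```python
import json

def echo_time(zone: str) -> str:
    # Hypothetical local function exposed to the model as a tool.
    return f"12:00 in {zone}"

TOOLS = {"echo_time": echo_time}

def fake_converse(messages):
    """Stand-in for POST /v1.0-alpha2/conversation/<llm-name>/converse.
    First turn: the model asks for a tool; second turn: it answers."""
    if any("of_tool" in m for m in messages):
        return {"content": "It is 12:00 in UTC.", "tool_calls": []}
    return {"content": None, "tool_calls": [{
        "id": "call_1",
        "function": {"name": "echo_time", "arguments": "{\"zone\": \"UTC\"}"},
    }]}

def chat(user_text: str) -> str:
    messages = [{"of_user": {"content": [{"text": user_text}]}}]
    while True:
        reply = fake_converse(messages)
        if not reply["tool_calls"]:
            return reply["content"]
        for call in reply["tool_calls"]:
            fn = call["function"]
            # Tool arguments arrive as a JSON-encoded string.
            result = TOOLS[fn["name"]](**json.loads(fn["arguments"]))
            messages.append({"of_tool": {
                "tool_id": call["id"], "name": fn["name"],
                "content": [{"text": result}],
            }})
```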
## Demo
Watch the demo presented during [Diagrid's Dapr v1.15 celebration](https://www.diagrid.io/videos/dapr-1-15-deep-dive) to see how the conversation API works using the .NET SDK.


@ -6,7 +6,7 @@ weight: 7000
description: "Executing workflows across multiple applications"
---
It is often the case that a single workflow spans multiple applications, microservices, or programming languages.
This is where an activity or a child workflow will be executed on a different application than the one hosting the parent workflow.
Some scenarios where this is useful include:
@ -16,15 +16,15 @@ Some scenarios where this is useful include:
- Different parts of the workflow need to be executed in different trust zones or networks.
- Different parts of the workflow need to be executed in different geographic regions due to data residency requirements.
- An involved business process spans multiple teams or departments, each owning their own application.
- Implementation of a workflow spans different programming languages based on team expertise or existing codebases.
- Different team boundaries or microservice ownership.
## Multi-application workflows
Like all building blocks in Dapr, workflow execution routing is based on the [App ID of the hosting Dapr application]({{% ref "security-concept.md#application-identity" %}}).
By default, the full workflow execution is hosted on the app ID that started the workflow. This workflow can be executed across any replicas of that app ID, not just the single replica which scheduled the workflow.
It is possible to execute activities or child workflows on different app IDs by specifying the target app ID parameter, inside the workflow execution code.
Upon execution, the target app ID will execute the activity or child workflow, and return the result to the parent workflow of the originating app ID.
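Conceptually, this routing works like a registry keyed by app ID. The following plain-Python sketch (not the Dapr SDK; all names are hypothetical) illustrates an activity call that carries an optional target app ID and errors when the target doesn't define the activity:

```python
# Conceptual illustration of app-ID-based routing; this is plain Python,
# not the Dapr workflow SDK.
ACTIVITY_REGISTRY = {
    "order-app": {"charge": lambda amount: f"charged {amount} by order-app"},
    "billing-app": {"charge": lambda amount: f"charged {amount} by billing-app"},
}

def call_activity(name, *, input, host_app_id, target_app_id=None):
    """Run `name` on the target app ID if given, else on the hosting app."""
    app_id = target_app_id or host_app_id
    try:
        activity = ACTIVITY_REGISTRY[app_id][name]
    except KeyError:
        # Mirrors the documented behavior: an unknown app ID or a missing
        # activity on the target application returns an error.
        raise LookupError(f"{name!r} not available on app ID {app_id!r}")
    return activity(input)
```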
@ -50,7 +50,7 @@ When calling multi-application activities or child workflows:
- If the target application exists but doesn't contain the specified activity or workflow, the call will return an error.
- Standard workflow retry policies apply to multi-application calls.
It is paramount that there is coordination between the teams owning the different app IDs to ensure that the activities and child workflows are defined and available when needed.
## Multi-application activity example


@ -17,8 +17,7 @@ description: "List of current alpha and beta APIs"
| Cryptography | [Crypto proto](https://github.com/dapr/dapr/blob/5aba3c9aa4ea9b3f388df125f9c66495b43c5c9e/dapr/proto/runtime/v1/dapr.proto#L118) | `v1.0-alpha1/crypto` | The cryptography API enables you to perform **high level** cryptography operations for encrypting and decrypting messages. | [Cryptography API]({{% ref "cryptography-overview.md" %}}) | v1.11 |
| Jobs | [Jobs proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L212-219) | `v1.0-alpha1/jobs` | The jobs API enables you to schedule and orchestrate jobs. | [Jobs API]({{% ref "jobs-overview.md" %}}) | v1.14 |
| Streaming Subscription | [Streaming Subscription proto](https://github.com/dapr/dapr/blob/310c83140b2f0c3cb7d2bef19624df88af3e8e0a/dapr/proto/runtime/v1/dapr.proto#L454) | N/A | Subscription is defined in the application code. Streaming subscriptions are dynamic, meaning they allow for adding or removing subscriptions at runtime. | [Streaming Subscription API]({{% ref "subscription-methods/#streaming-subscriptions" %}}) | v1.14 |
| Conversation | [Conversation proto](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto#L226) | `v1.0-alpha2/conversation` | Converse between different large language models using the conversation API. | [Conversation API]({{% ref "conversation-overview.md" %}}) | v1.15 |
## Beta APIs


@ -10,14 +10,16 @@ weight: 1400
The conversation API is currently in [alpha]({{% ref "certification-lifecycle.md#certification-levels" %}}).
{{% /alert %}}
Dapr provides an API to interact with Large Language Models (LLMs) and enables critical performance and security functionality with features like prompt caching, PII data obfuscation, and tool calling capabilities.
Tool calling follows OpenAI's function calling format, making it easy to integrate with existing AI development workflows and tools.
## Converse
This endpoint lets you converse with LLMs using the Alpha2 version of the API, which provides enhanced tool calling support and alignment with OpenAI's interface.
```
POST http://localhost:<daprPort>/v1.0-alpha2/conversation/<llm-name>/converse
```
### URL parameters
@ -30,34 +32,117 @@ POST http://localhost:<daprPort>/v1.0-alpha1/conversation/<llm-name>/converse
| Field | Description |
| --------- | ----------- |
| `name` | The name of the conversation component. Required |
| `contextId` | The ID of an existing chat (like in ChatGPT). Optional |
| `inputs` | Inputs for the conversation. Multiple inputs at one time are supported. Required |
| `cacheTTL` | A time-to-live value for a prompt cache to expire. Uses Golang duration format. Optional |
| `parameters` | Parameters for all custom fields. Optional |
| `metadata` | [Metadata](#metadata) passed to conversation components. Optional |
| `scrubPii` | A boolean value to enable obfuscation of sensitive information returned from the LLM. Optional |
| `temperature` | A float value to control the temperature of the model. Used to optimize for consistency (0) or creativity (1). Optional |
| `tools` | Registers the tools available for the LLM to call during the conversation. Optional |
| `toolChoice` | Controls which (if any) tool is called by the model. Values: `auto`, `required`, or specific tool name. Defaults to `auto` if tools are present. Optional |
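Assembling the fields above into a request body might look like the following Python sketch (the helper name and defaults are illustrative, not part of the API):

```python
def build_converse_request(component, user_text, *, tools=None,
                           tool_choice=None, temperature=None,
                           scrub_pii=False):
    """Assemble an Alpha2 converse request body from the documented fields."""
    body = {
        "name": component,
        "inputs": [{
            "messages": [{"of_user": {"content": [{"text": user_text}]}}],
            "scrub_pii": scrub_pii,
        }],
        "parameters": {},
        "metadata": {},
    }
    if tools:
        body["tools"] = tools
        # tool_choice defaults to "auto" when tools are present.
        body["tool_choice"] = tool_choice or "auto"
    if temperature is not None:
        body["temperature"] = temperature
    return body
```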
#### Input body
| Field | Description |
| --------- | ----------- |
| `messages` | Array of conversation messages. Required |
| `scrubPii` | A boolean value to enable obfuscation of sensitive information present in the content field. Optional |
#### Message types
The API supports different message types:
- **`ofDeveloper`**: Developer role messages with optional name and content
- **`ofSystem`**: System role messages with optional name and content
- **`ofUser`**: User role messages with optional name and content
- **`ofAssistant`**: Assistant role messages with optional name, content, and tool calls
- **`ofTool`**: Tool role messages with tool ID, name, and content
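These variants can be built with small helpers like the following Python sketch. The keys follow the snake_case JSON examples in this document; the helpers themselves, and the field names inside each variant, are illustrative:

```python
def user_message(text, name=None):
    """Build an of_user message with optional name."""
    msg = {"content": [{"text": text}]}
    if name:
        msg["name"] = name
    return {"of_user": msg}

def system_message(text, name=None):
    """Build an of_system message with optional name."""
    msg = {"content": [{"text": text}]}
    if name:
        msg["name"] = name
    return {"of_system": msg}

def assistant_message(text=None, tool_calls=None):
    """Build an of_assistant message with optional content and tool calls."""
    return {"of_assistant": {
        "content": [{"text": text}] if text else [],
        "tool_calls": tool_calls or [],
    }}
```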
#### Tool calling
Tools can be defined using the `tools` field with function definitions:
| Field | Description |
| --------- | ----------- |
| `function.name` | The name of the function to be called. Required |
| `function.description` | A description of what the function does. Optional |
| `function.parameters` | JSON Schema object describing the function parameters. Optional |
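A quick sanity check of a tool definition against this table might look like the following (an illustrative helper, not part of the API):

```python
def validate_tool(tool: dict) -> list:
    """Return a list of problems with a tool definition per the table above."""
    errors = []
    fn = tool.get("function", {})
    if not fn.get("name"):
        errors.append("function.name is required")
    params = fn.get("parameters")
    if params is not None and params.get("type") != "object":
        errors.append("function.parameters should be a JSON Schema object")
    return errors
```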
### Request content examples
#### Basic conversation
```json
{
  "name": "openai",
  "inputs": [{
    "messages": [{
      "of_user": {
        "content": [{
          "text": "What is Dapr?"
        }]
      }
    }]
  }],
  "parameters": {},
  "metadata": {}
}
```
#### Conversation with tool calling
```json
{
  "name": "openai",
  "inputs": [{
    "messages": [{
      "of_user": {
        "content": [{
          "text": "What is the weather like in San Francisco in celsius?"
        }]
      }
    }],
    "scrub_pii": false
  }],
  "parameters": {
    "max_tokens": {
      "@type": "type.googleapis.com/google.protobuf.Int64Value",
      "value": "100"
    },
    "model": {
      "@type": "type.googleapis.com/google.protobuf.StringValue",
      "value": "claude-3-5-sonnet-20240620"
    }
  },
  "metadata": {
    "api_key": "test-key",
    "version": "1.0"
  },
  "scrub_pii": false,
  "temperature": 0.7,
  "tools": [{
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "The temperature unit to use"
          }
        },
        "required": ["location"]
      }
    }
  }],
  "tool_choice": "auto"
}
```
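Either body is POSTed to the sidecar's converse endpoint. The following Python sketch builds the HTTP request with the standard library; the default port and the component name are assumptions:

```python
import json
import urllib.request

def converse_request(body: dict, dapr_port: int = 3500) -> urllib.request.Request:
    """Build (but do not send) the HTTP request for the Alpha2 endpoint."""
    url = (f"http://localhost:{dapr_port}"
           f"/v1.0-alpha2/conversation/{body['name']}/converse")
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Against a running sidecar, `urllib.request.urlopen(converse_request(body))` would send it.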
@ -71,21 +156,62 @@ Code | Description
### Response content
#### Basic conversation response
```json
{
  "outputs": [{
    "choices": [{
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Dapr is a distributed application runtime that makes it easy for developers to build resilient, stateless and stateful applications that run on the cloud and edge.",
        "tool_calls": []
      }
    }]
  }]
}
```
#### Tool calling response
```json
{
  "outputs": [{
    "choices": [{
      "finish_reason": "tool_calls",
      "index": 0,
      "message": {
        "content": null,
        "tool_calls": [{
          "id": "call_123",
          "function": {
            "name": "get_weather",
            "arguments": "{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}"
          }
        }]
      }
    }]
  }]
}
```
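When a `tool_calls` response arrives, the application decodes each call's JSON-encoded `arguments` and dispatches to its own function. An illustrative Python sketch follows; the local `get_weather` implementation is hypothetical:

```python
import json

def get_weather(location: str, unit: str = "celsius") -> str:
    # Hypothetical local implementation of the tool the model called.
    return f"18 degrees {unit} in {location}"

DISPATCH = {"get_weather": get_weather}

def handle_choice(choice: dict) -> str:
    """If the model stopped to call tools, run them; else return its text."""
    message = choice["message"]
    if choice["finish_reason"] != "tool_calls":
        return message["content"]
    results = []
    for call in message["tool_calls"]:
        fn = call["function"]
        args = json.loads(fn["arguments"])  # arguments is a JSON string
        results.append(DISPATCH[fn["name"]](**args))
    return "; ".join(results)
```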
### Tool choice options
`tool_choice` is an optional parameter that controls how the model may use the available tools:
- **`auto`**: The model can pick between generating a message or calling one or more tools (default when tools are present)
- **`required`**: Requires one or more functions to be called
- **`{tool_name}`**: Forces the model to call a specific tool by name
## Legacy Alpha1 API
The previous Alpha1 version of the API is still supported for backward compatibility but is deprecated. For new implementations, use the Alpha2 version described above.
```
POST http://localhost:<daprPort>/v1.0-alpha1/conversation/<llm-name>/converse
```
## Next steps
- [Conversation API overview]({{% ref "conversation-overview.md" %}})