Merge branch 'v1.15' into pulsar-subscribeInitialPosition

Cassie Coyle 2025-04-29 12:38:58 -05:00 committed by GitHub
commit a87b80fd29
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
25 changed files with 342 additions and 74 deletions

View File

@@ -18,4 +18,4 @@ jobs:
stale-pr-message: 'Stale PR, paging all reviewers'
stale-pr-label: 'stale'
exempt-pr-labels: 'question,"help wanted",do-not-merge,waiting-on-code-pr'
-days-before-stale: 5
+days-before-stale: 30

View File

@@ -195,12 +195,8 @@ func configHandler(w http.ResponseWriter, r *http.Request) {
{{< /tabs >}}

-## Next steps
-{{< button text="Enable actor reminder partitioning >>" page="howto-actors-partitioning.md" >}}

## Related links
- Refer to the [Dapr SDK documentation and examples]({{< ref "developing-applications/sdks/#sdk-languages" >}}).
- [Actors API reference]({{< ref actors_api.md >}})
- [Actors overview]({{< ref actors-overview.md >}})

View File

@@ -8,6 +8,13 @@ aliases:
- "/developing-applications/building-blocks/actors/actors-background"
---

+{{% alert title="Warning" color="warning" %}}
+This feature is only relevant when using state store actor reminders, which are no longer enabled by default.
+As of v1.15, Dapr uses the far more performant [Scheduler Actor Reminders]({{< ref "scheduler.md#actor-reminders" >}}) by default.
+This page is only relevant if you are using the legacy state store actor reminders, enabled by setting the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) to false.
+It is highly recommended you use the Scheduler Actor Reminders feature.
+{{% /alert %}}
+
[Actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) are persisted and continue to be triggered after sidecar restarts. Applications with multiple reminders registered can experience the following issues:

- Low throughput on reminders registration and de-registration
@@ -193,4 +200,4 @@ Watch [this video for a demo of actor reminder partitioning](https://youtu.be/Zw

## Related links
- [Actors API reference]({{< ref actors_api.md >}})
- [Actors overview]({{< ref actors-overview.md >}})
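To fall back to the legacy state store reminders described in the alert above, the `SchedulerReminders` feature flag is turned off in a Dapr Configuration resource. A minimal sketch, assuming the standard preview-feature configuration syntax (the `featureconfig` name is a placeholder):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: featureconfig  # placeholder name
spec:
  features:
    - name: SchedulerReminders
      enabled: false  # revert to the legacy state store actor reminders
```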

View File

@@ -113,7 +113,7 @@ Configure your application to receive incoming events. If you're using HTTP, you
- Listen on a `POST` endpoint with the name of the binding, as specified in `metadata.name` in the `binding.yaml` file.
- Verify your application allows Dapr to make an `OPTIONS` request for this endpoint.

-Below are code examples that leverage Dapr SDKs to demonstrate an output binding.
+Below are code examples that leverage Dapr SDKs to demonstrate an input binding.

{{< tabs ".NET" Java Python Go JavaScript>}}

View File

@@ -138,31 +138,29 @@ Manage your workflow within your code. In the `OrderProcessingWorkflow` example
```csharp
string orderId = "exampleOrderId";
-string workflowComponent = "dapr";
-string workflowName = "OrderProcessingWorkflow";
OrderPayload input = new OrderPayload("Paperclips", 99.95);
Dictionary<string, string> workflowOptions; // This is an optional parameter

-// Start the workflow. This returns back a "StartWorkflowResponse" which contains the instance ID for the particular workflow instance.
-StartWorkflowResponse startResponse = await daprClient.StartWorkflowAsync(orderId, workflowComponent, workflowName, input, workflowOptions);
+// Start the workflow using the orderId as our workflow ID. This returns a string containing the instance ID for the particular workflow instance, whether we provide it ourselves or not.
+await daprWorkflowClient.ScheduleNewWorkflowAsync(nameof(OrderProcessingWorkflow), orderId, input, workflowOptions);

// Get information on the workflow. This response contains information such as the status of the workflow, when it started, and more!
-GetWorkflowResponse getResponse = await daprClient.GetWorkflowAsync(orderId, workflowComponent, eventName);
+WorkflowState currentState = await daprWorkflowClient.GetWorkflowStateAsync(orderId, true);

// Terminate the workflow
-await daprClient.TerminateWorkflowAsync(orderId, workflowComponent);
+await daprWorkflowClient.TerminateWorkflowAsync(orderId);

-// Raise an event (an incoming purchase order) that your workflow will wait for. This returns the item waiting to be purchased.
-await daprClient.RaiseWorkflowEventAsync(orderId, workflowComponent, workflowName, input);
+// Raise an event (an incoming purchase order) that your workflow will wait for
+await daprWorkflowClient.RaiseEventAsync(orderId, "incoming-purchase-order", input);

// Pause
-await daprClient.PauseWorkflowAsync(orderId, workflowComponent);
+await daprWorkflowClient.SuspendWorkflowAsync(orderId);

// Resume
-await daprClient.ResumeWorkflowAsync(orderId, workflowComponent);
+await daprWorkflowClient.ResumeWorkflowAsync(orderId);

// Purge the workflow, removing all inbox and history information from associated instance
-await daprClient.PurgeWorkflowAsync(orderId, workflowComponent);
+await daprWorkflowClient.PurgeInstanceAsync(orderId);
```

{{% /codetab %}}

View File

@ -0,0 +1,17 @@
---
type: docs
title: "How to: Integrate with Argo CD"
linkTitle: "Argo CD"
weight: 9000
description: "Integrate Dapr into your GitOps pipeline"
---
[Argo CD](https://argo-cd.readthedocs.io/en/stable/) is a declarative, GitOps continuous delivery tool for Kubernetes. It enables you to manage your Kubernetes deployments by tracking the desired application state in Git repositories and automatically syncing it to your clusters.
## Integration with Dapr
You can use Argo CD to manage the deployment of Dapr control plane components and Dapr-enabled applications. By adopting a GitOps approach, you ensure that Dapr's configurations and applications are consistently deployed, versioned, and auditable across your environments. Argo CD can be easily configured to deploy Helm charts, manifests, and Dapr components stored in Git repositories.
## Sample code
A sample project demonstrating Dapr deployment with Argo CD is available at [https://github.com/dapr/samples/tree/master/dapr-argocd](https://github.com/dapr/samples/tree/master/dapr-argocd).
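As a rough sketch of what this looks like in practice, the following Argo CD `Application` tracks the public Dapr Helm chart and syncs it into a `dapr-system` namespace; the chart version and target cluster are illustrative assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dapr
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://dapr.github.io/helm-charts/
    chart: dapr
    targetRevision: 1.15.0  # pin the chart version you want Argo CD to track
  destination:
    server: https://kubernetes.default.svc  # in-cluster destination
    namespace: dapr-system
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band changes
    syncOptions:
      - CreateNamespace=true
```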

View File

@@ -8,16 +8,40 @@ aliases:
- '/developing-applications/sdks/serialization/'
---

-An SDK for Dapr should provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both these use cases, a default serialization is provided. In the Java SDK, it is the [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) class, providing JSON serialization.
+Dapr SDKs provide serialization for two use cases. First, for API objects sent through request and response payloads. Second, for objects to be persisted. For both of these cases, a default serialization method is provided in each language SDK.
+
+| Language SDK | Default Serializer |
+|--------------|--------------------|
+| [.NET]({{< ref dotnet >}}) | [DataContracts](https://learn.microsoft.com/dotnet/framework/wcf/feature-details/using-data-contracts) for remoted actors, [System.Text.Json](https://www.nuget.org/packages/System.Text.Json) otherwise. Read more about .NET serialization [here]({{< ref dotnet-actors-serialization >}}) |
+| [Java]({{< ref java >}}) | [DefaultObjectSerializer](https://dapr.github.io/java-sdk/io/dapr/serializer/DefaultObjectSerializer.html) for JSON serialization |
+| [JavaScript]({{< ref js >}}) | JSON |

## Service invocation

+{{< tabs ".NET" "Java" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+```csharp
+using var client = (new DaprClientBuilder()).Build();
+await client.InvokeMethodAsync("myappid", "saySomething", "My Message");
+```
+
+{{% /codetab %}}
+
+<!-- Java -->
+{{% codetab %}}
+
```java
DaprClient client = (new DaprClientBuilder()).build();
-client.invokeService("myappid", "saySomething", "My Message", HttpExtension.POST).block();
+client.invokeMethod("myappid", "saySomething", "My Message", HttpExtension.POST).block();
```

+{{% /codetab %}}
+
-In the example above, the app will receive a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON.
+In the example above, the app `myappid` receives a `POST` request for the `saySomething` method with the request payload as `"My Message"` - quoted since the serializer will serialize the input String to JSON.
```text
POST /saySomething HTTP/1.1
@@ -30,11 +54,35 @@ Content-Length: 12

## State management

+{{< tabs ".NET" "Java" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+```csharp
+using var client = (new DaprClientBuilder()).Build();
+var state = new Dictionary<string, string>
+{
+    { "key", "MyKey" },
+    { "value", "My Message" }
+};
+await client.SaveStateAsync("MyStateStore", "MyKey", state);
+```
+
+{{% /codetab %}}
+
+<!-- Java -->
+{{% codetab %}}
+
```java
DaprClient client = (new DaprClientBuilder()).build();
client.saveState("MyStateStore", "MyKey", "My Message").block();
```

+{{% /codetab %}}
+
-In this example, `My Message` will be saved. It is not quoted because Dapr's API will internally parse the JSON request object before saving it.
+In this example, `My Message` is saved. It is not quoted because Dapr's API internally parses the JSON request object before saving it.

```JSON
[
@@ -47,12 +95,45 @@ In this example, `My Message` will be saved. It is not quoted because Dapr's API

## PubSub

+{{< tabs ".NET" "Java" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+```csharp
+using var client = (new DaprClientBuilder()).Build();
+await client.PublishEventAsync("MyPubSubName", "TopicName", "My Message");
+```
+
+The event is published and the content is serialized to `byte[]` and sent to the Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. The Dapr SDK also provides a built-in deserializer for the `CloudEvent` object.
+
+```csharp
+public async Task<IActionResult> HandleMessage(string message)
+{
+    // ASP.NET Core automatically deserializes the UTF-8 encoded bytes to a string
+    return Ok();
+}
+```
+
+or
+
+```csharp
+app.MapPost("/TopicName", [Topic("MyPubSubName", "TopicName")] (string message) => {
+    return Results.Ok();
+});
+```
+
+{{% /codetab %}}
+
+<!-- Java -->
+{{% codetab %}}
+
```java
DaprClient client = (new DaprClientBuilder()).build();
client.publishEvent("TopicName", "My Message").block();
```

-The event is published and the content is serialized to `byte[]` and sent to Dapr sidecar. The subscriber will receive it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. Dapr SDK also provides a built-in deserializer for `CloudEvent` object.
+The event is published and the content is serialized to `byte[]` and sent to the Dapr sidecar. The subscriber receives it as a [CloudEvent](https://github.com/cloudevents/spec). Cloud event defines `data` as String. The Dapr SDK also provides a built-in deserializer for `CloudEvent` objects.

```java
@PostMapping(path = "/TopicName")
@@ -62,9 +143,50 @@ The event is published and the content is serialized to `byte[]` and sent to Dap
}
```
+{{% /codetab %}}
+
## Bindings

-In this case, the object is serialized to `byte[]` as well and the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type.
+For output bindings, the object is serialized to `byte[]`, whereas the input binding receives the raw `byte[]` as-is and deserializes it to the expected object type.

+{{< tabs ".NET" "Java" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+* Output binding:
+```csharp
+using var client = (new DaprClientBuilder()).Build();
+await client.InvokeBindingAsync("sample", "My Message");
+```
+
+* Input binding (controllers):
+```csharp
+[ApiController]
+public class SampleController : ControllerBase
+{
+    [HttpPost("propagate")]
+    public ActionResult<string> GetValue([FromBody] int itemId)
+    {
+        Console.WriteLine($"Received message: {itemId}");
+        return $"itemID:{itemId}";
+    }
+}
+```
+
+* Input binding (minimal API):
+```csharp
+app.MapPost("value", ([FromBody] int itemId) =>
+{
+    Console.WriteLine($"Received message: {itemId}");
+    return $"itemID:{itemId}";
+});
+```
+
+{{% /codetab %}}
+
+<!-- Java -->
+{{% codetab %}}
+
* Output binding:
```java
@@ -80,15 +202,49 @@ In this case, the object is serialized to `byte[]` as well and the input binding
    System.out.println(message);
}
```
+{{% /codetab %}}

It should print:
```
My Message
```
## Actor Method invocation

-Object serialization and deserialization for invocation of Actor's methods are same as for the service method invocation, the only difference is that the application does not need to deserialize the request or serialize the response since it is all done transparently by the SDK.
+Object serialization and deserialization for Actor method invocation are the same as for service method invocation; the only difference is that the application does not need to deserialize the request or serialize the response, since it is all done transparently by the SDK.

-For Actor's methods, the SDK only supports methods with zero or one parameter.
+For Actor methods, the SDK only supports methods with zero or one parameter.

+The .NET SDK supports two different serialization types based on whether you're using a strongly-typed (DataContracts) or weakly-typed (DataContracts or System.Text.JSON) actor client. [This document]({{< ref dotnet-actors-serialization >}}) can provide more information about the differences between each and additional considerations to keep in mind.
+
+{{< tabs ".NET" "Java" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+* Invoking an Actor's method using the weakly-typed client and System.Text.JSON:
+
+```csharp
+var proxy = this.ProxyFactory.Create(ActorId.CreateRandom(), "DemoActor");
+await proxy.SayAsync("My message");
+```
+
+* Implementing an Actor's method:
+
+```csharp
+public Task SayAsync(string message)
+{
+    Console.WriteLine(message);
+    return Task.CompletedTask;
+}
+```
+
+{{% /codetab %}}
+
+<!-- Java -->
+{{% codetab %}}
+
* Invoking an Actor's method:
```java
@@ -105,13 +261,37 @@ public String say(String something) {
    return "OK";
}
```
+{{% /codetab %}}

It should print:
```
My Message
```
## Actor's state management

Actors can also have state. In this case, the state manager will serialize and deserialize the objects using the state serializer and handle it transparently to the application.

+{{< tabs ".NET" "Java" >}}
+
+<!-- .NET -->
+{{% codetab %}}
+
+```csharp
+public async Task<string> SayAsync(string message)
+{
+    // Reads state from a key
+    var previousMessage = await this.StateManager.GetStateAsync<string>("lastmessage");
+
+    // Sets the new state for the key after serializing it
+    await this.StateManager.SetStateAsync("lastmessage", message);
+    return previousMessage;
+}
+```
+
+{{% /codetab %}}
+
+<!-- Java -->
+{{% codetab %}}
+
```java
public String actorMethod(String message) {
@@ -124,12 +304,17 @@ public String actorMethod(String message) {
}
```
+{{% /codetab %}}
## Default serializer

The default serializer for Dapr is a JSON serializer with the following expectations:

1. Use of basic [JSON data types](https://www.w3schools.com/js/js_json_datatypes.asp) for cross-language and cross-platform compatibility: string, number, array, boolean, null and another JSON object. Every complex property type in an application's serializable objects (DateTime, for example) should be represented as one of JSON's basic types.
2. Data persisted with the default serializer should be saved as JSON objects too, without extra quotes or encoding. The example below shows how a string and a JSON object would look in a Redis store.

```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message
"This is a message to be saved and retrieved."
@@ -140,7 +325,8 @@ redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928
```

3. Custom serializers must serialize object to `byte[]`.
4. Custom serializers must deserialize `byte[]` to object.
5. When the user provides a custom serializer, it should be transferred or persisted as `byte[]`. When persisting, also encode as Base64 string. This is done natively by most JSON libraries.

```bash
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||message
"VGhpcyBpcyBhIG1lc3NhZ2UgdG8gYmUgc2F2ZWQgYW5kIHJldHJpZXZlZC4="
@@ -149,5 +335,3 @@ redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928
redis-cli MGET "ActorStateIT_StatefulActorService||StatefulActorTest||1581130928192||mydata
"eyJ2YWx1ZSI6Ik15IGRhdGEgdmFsdWUuIn0="
```

-*As of now, the [Java SDK](https://github.com/dapr/java-sdk/) is the only Dapr SDK that implements this specification. In the near future, other SDKs will also implement the same.*

View File

@@ -128,6 +128,7 @@ See this list of values corresponding to the different Dapr APIs:
| [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)<br/>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br/>`unlock` (`v1alpha1`) |
| [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) |
| [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0`) | `workflows` (`v1`) |
+| [Conversation]({{< ref conversation_api.md >}}) | `conversation` (`v1.0-alpha1`) | `conversation` (`v1alpha1`) |
| [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a |
| Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) |
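To illustrate how these values are used, here is a sketch of a Dapr Configuration that allows only the Conversation API over HTTP, using the `name` and `version` values from the table above (the configuration name is a placeholder):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig  # placeholder name
spec:
  api:
    allowed:
      - name: conversation
        version: v1.0-alpha1
        protocol: http
```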

View File

@@ -39,7 +39,7 @@ This guide walks you through installing an Azure Kubernetes Service (AKS) cluste
1. Create an AKS cluster. To use a specific version of Kubernetes, use `--kubernetes-version` (1.13.x or newer version required).

```bash
-az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --node-count 2 --enable-addons http_application_routing --generate-ssh-keys
+az aks create --resource-group [your_resource_group] --name [your_aks_cluster_name] --location [region] --node-count 2 --enable-app-routing --generate-ssh-keys
```

1. Get the access credentials for the AKS cluster.

View File

@@ -12,7 +12,7 @@ This means that there is no additional parameter required to run the scheduler s
{{% alert title="Warning" color="warning" %}}
The default storage size for the Scheduler is `1Gi`, which is likely not sufficient for most production deployments.
-Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}) when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled, and the [Jobs API]({{< ref jobs_api.md >}}).
+Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) & [Workflows]({{< ref workflow-overview.md >}}), and the [Jobs API]({{< ref jobs_api.md >}}).
You may want to consider reinstalling Dapr with a larger Scheduler storage of at least `16Gi`.
For more information, see the [ETCD Storage Disk Size](#etcd-storage-disk-size) section below.
{{% /alert %}}
@@ -30,8 +30,8 @@ error running scheduler: etcdserver: mvcc: database space exceeded
```

Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and the data payload size of your application jobs.
-The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) (with the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature enabled) transparently maps one to one to the usage of your applications.
-Workflows (when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled) create a large number of jobs as Actor Reminders, however these jobs are short lived- matching the lifecycle of each workflow execution.
+The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) transparently map one-to-one to the usage of your applications.
+Workflows create a large number of jobs as Actor Reminders; however, these jobs are short-lived, matching the lifecycle of each workflow execution.
The data payload of jobs created by Workflows is typically empty or small.

The Scheduler uses Etcd as its storage backend database.
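If you reinstall with a larger Scheduler volume as recommended above, the storage size is exposed as a Helm chart value; a minimal sketch, assuming the `dapr_scheduler.cluster.storageSize` value and applied with `helm upgrade --install dapr dapr/dapr -f values.yaml`:

```yaml
# values.yaml: enlarge the Scheduler's etcd-backed storage (assumed chart value)
dapr_scheduler:
  cluster:
    storageSize: 16Gi
```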

View File

@@ -93,7 +93,7 @@ For a new Dapr deployment, HA mode can be set with both:
- The [Dapr CLI]({{< ref "kubernetes-deploy.md#install-in-highly-available-mode" >}}), and
- [Helm charts]({{< ref "kubernetes-deploy.md#add-and-install-dapr-helm-chart" >}})

-For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enabling-high-availability-in-an-existing-dapr-deployment" >}}).
+For an existing Dapr deployment, [you can enable HA mode in a few extra steps]({{< ref "#enable-high-availability-in-an-existing-dapr-deployment" >}}).

### Individual service HA Helm configuration
@@ -159,7 +159,7 @@ spec:

## Deploy Dapr with Helm

-[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).
+[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm" >}}).

### Parameters file
@@ -353,4 +353,4 @@ Watch this video for a deep dive into the best practices for running Dapr in pro

## Related links
- [Deploy Dapr on Kubernetes]({{< ref kubernetes-deploy.md >}})
- [Upgrade Dapr on Kubernetes]({{< ref kubernetes-upgrade.md >}})

View File

@@ -3,6 +3,6 @@ type: docs
title: "Logging"
linkTitle: "Logging"
weight: 400
-description: "How to setup loggings for Dapr sidecar, and your application"
+description: "How to set up logging for the Dapr sidecar and your application"
---

View File

@@ -127,7 +127,7 @@ If you are using the Azure Kubernetes Service, you can use [Azure Monitor for co
## References

-- [How-to: Set up Fleuntd, Elastic search, and Kibana]({{< ref fluentd.md >}})
+- [How-to: Set up Fluentd, Elastic search, and Kibana]({{< ref fluentd.md >}})
- [How-to: Set up Azure Monitor in Azure Kubernetes Service]({{< ref azure-monitor.md >}})
- [Configure and view Dapr Logs]({{< ref "logs-troubleshooting.md" >}})
- [Configure and view Dapr API Logs]({{< ref "api-logs-troubleshooting.md" >}})

View File

@@ -11,7 +11,7 @@ Dapr integrates with [OpenTelemetry (OTEL) Collector](https://github.com/open-te
## Prerequisites

- [Install Dapr on Kubernetes]({{< ref kubernetes >}})
-- [Set up an App Insights resource](https://docs.microsoft.com/azure/azure-monitor/app/create-new-resource) and make note of your App Insights instrumentation key.
+- [Set up an App Insights resource](https://docs.microsoft.com/azure/azure-monitor/app/create-new-resource) and make note of your App Insights connection string.

## Set up OTEL Collector to push to your App Insights instance
@@ -19,7 +19,7 @@ To push events to your App Insights instance, install the OTEL Collector to your
1. Check out the [`open-telemetry-collector-appinsights.yaml`](/docs/open-telemetry-collector/open-telemetry-collector-appinsights.yaml) file.
-1. Replace the `<INSTRUMENTATION-KEY>` placeholder with your App Insights instrumentation key.
+1. Replace the `<CONNECTION_STRING>` placeholder with your App Insights connection string.
1. Apply the configuration with:

View File

@@ -291,3 +291,21 @@ kubectl config get-users
```

You may learn more about webhooks [here](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).

## Ports not available during `dapr init`

You might encounter the following error on Windows after attempting to execute `dapr init`:

> PS C:\Users\You> dapr init
> Making the jump to hyperspace...
> Container images will be pulled from Docker Hub
> Installing runtime version 1.14.4
> Downloading binaries and setting up components...
> docker: Error response from daemon: Ports are not available: exposing port TCP 0.0.0.0:52379 -> 0.0.0.0:0: listen tcp4 0.0.0.0:52379: bind: An attempt was made to access a socket in a way forbidden by its access permissions.

To resolve this error, open an elevated command prompt and run:

```bash
net stop winnat
dapr init
net start winnat
```

View File

@@ -32,7 +32,7 @@ POST http://localhost:<daprPort>/v1.0-alpha1/conversation/<llm-name>/converse
| --------- | ----------- |
| `inputs` | Inputs for the conversation. Multiple inputs at one time are supported. Required |
| `cacheTTL` | A time-to-live value for a prompt cache to expire. Uses Golang duration format. Optional |
-| `scrubPII` | A boolean value to enable obfuscation of sensitive information returning from the LLM. Optional |
+| `scrubPII` | A boolean value to enable obfuscation of sensitive information returning from the LLM. Set this value if all PII (across contents) in the request needs to be scrubbed. Optional |
| `temperature` | A float value to control the temperature of the model. Used to optimize for consistency and creativity. Optional |
| `metadata` | [Metadata](#metadata) passed to conversation components. Optional |
@@ -42,7 +42,7 @@ POST http://localhost:<daprPort>/v1.0-alpha1/conversation/<llm-name>/converse
| --------- | ----------- |
| `content` | The message content to send to the LLM. Required |
| `role` | The role for the LLM to assume. Possible values: 'user', 'tool', 'assistant' |
-| `scrubPII` | A boolean value to enable obfuscation of sensitive information present in the content field. Optional |
+| `scrubPII` | A boolean value to enable obfuscation of sensitive information present in the content field. Set this value if PII for this specific content needs to be scrubbed exclusively. Optional |

### Request content example
@@ -89,4 +89,4 @@ RESPONSE = {

## Next steps
- [Conversation API overview]({{< ref conversation-overview.md >}})
- [Supported conversation components]({{< ref supported-conversation >}})

View File

@@ -50,7 +50,7 @@ The Dapr cron binding supports following formats:
For example:

* `30 * * * * *` - every 30 seconds
-* `0 15 * * * *` - every 15 minutes
+* `0 */15 * * * *` - every 15 minutes
* `0 30 3-6,20-23 * * *` - every hour on the half hour in the range 3-6am, 8-11pm
* `CRON_TZ=America/New_York 0 30 04 * * *` - every day at 4:30am New York time
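For context, a schedule such as the corrected `0 */15 * * * *` expression above is configured through the cron binding's `schedule` metadata field; a minimal component sketch with placeholder names:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: checkin  # placeholder; Dapr POSTs to the /checkin endpoint on each trigger
spec:
  type: bindings.cron
  version: v1
  metadata:
    - name: schedule
      value: "0 */15 * * * *"  # every 15 minutes
```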

View File

@@ -58,19 +58,24 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Binding support | Details | Example |
|--------------------|:--------:|------------|-----|---------|
| `bucket` | Y | Output | The bucket name | `"mybucket"` |
-| `type` | Y | Output | Tge GCP credentials type | `"service_account"` |
-| `project_id` | Y | Output | GCP project id | `projectId` |
-| `private_key_id` | Y | Output | GCP private key id | `"privateKeyId"` |
-| `private_key` | Y | Output | GCP credentials private key. Replace with x509 cert | `12345-12345` |
-| `client_email` | Y | Output | GCP client email | `"client@email.com"` |
-| `client_id` | Y | Output | GCP client id | `0123456789-0123456789` |
-| `auth_uri` | Y | Output | Google account OAuth endpoint | `https://accounts.google.com/o/oauth2/auth` |
-| `token_uri` | Y | Output | Google account token uri | `https://oauth2.googleapis.com/token` |
-| `auth_provider_x509_cert_url` | Y | Output | GCP credentials cert url | `https://www.googleapis.com/oauth2/v1/certs` |
-| `client_x509_cert_url` | Y | Output | GCP credentials project x509 cert url | `https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com` |
+| `project_id` | Y | Output | GCP project ID | `projectId` |
+| `type` | N | Output | The GCP credentials type | `"service_account"` |
+| `private_key_id` | N | Output | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"privateKeyId"` |
+| `private_key` | N | Output | If using explicit credentials, this field should contain the `private_key` field from the service account json. Replace with x509 cert | `12345-12345` |
+| `client_email` | N | Output | If using explicit credentials, this field should contain the `client_email` field from the service account json | `"client@email.com"` |
+| `client_id` | N | Output | If using explicit credentials, this field should contain the `client_id` field from the service account json | `0123456789-0123456789` |
+| `auth_uri` | N | Output | If using explicit credentials, this field should contain the `auth_uri` field from the service account json | `https://accounts.google.com/o/oauth2/auth` |
+| `token_uri` | N | Output | If using explicit credentials, this field should contain the `token_uri` field from the service account json | `https://oauth2.googleapis.com/token` |
+| `auth_provider_x509_cert_url` | N | Output | If using explicit credentials, this field should contain the `auth_provider_x509_cert_url` field from the service account json | `https://www.googleapis.com/oauth2/v1/certs` |
+| `client_x509_cert_url` | N | Output | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com` |
| `decodeBase64` | N | Output | Configuration to decode base64 file content before saving to bucket storage. (In case of saving a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |
| `encodeBase64` | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `true` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `false` | `true`, `false` |

+## GCP Credentials
+
+Since the GCP Storage Bucket component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
+Also, see how to [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc).
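Since every explicit credential field above is now optional, a component that relies purely on Application Default Credentials only needs the bucket and project; a minimal sketch with placeholder values:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mybucket  # placeholder component name
spec:
  type: bindings.gcp.bucket
  version: v1
  metadata:
    - name: bucket
      value: "mybucket"
    - name: project_id
      value: "myproject-123"
    # no service account fields: the GCP client libraries fall back to
    # Application Default Credentials from the environment
```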
## Binding support ## Binding support
This component supports **output binding** with the following operations: This component supports **output binding** with the following operations:

View File

@ -0,0 +1,28 @@
---
type: docs
title: "Local Testing"
linkTitle: "Echo"
description: Detailed information on the echo conversation component used for local testing
---
## Component format
A Dapr `conversation.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: echo
spec:
type: conversation.echo
version: v1
```
{{% alert title="Information" color="warning" %}}
This component is only meant for local validation and testing of a Conversation component implementation. It does not actually send the data to any LLM but rather echoes the input back directly.
{{% /alert %}}
## Related links
- [Conversation API overview]({{< ref conversation-overview.md >}})

View File

@@ -76,7 +76,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr
| Field | Required | Details | Example |
|--------------------|:--------:|---------|---------|
-| projectId | Y | GCP project id | `myproject-123` |
+| projectId | Y | GCP project ID | `myproject-123` |
| endpoint | N | GCP endpoint for the component to use. Only used for local development (for example) with [GCP Pub/Sub Emulator](https://cloud.google.com/pubsub/docs/emulator). The `endpoint` is unnecessary when running against the GCP production API. | `"http://localhost:8085"` |
| `consumerID` | N | The Consumer ID organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. The `consumerID`, along with the `topic` provided as part of the request, are used to build the Pub/Sub subscription ID | Can be set to string value (such as `"channel1"`) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}}) |
| identityProjectId | N | If the GCP pubsub project is different from the identity project, specify the identity project using this attribute | `"myproject-123"` |

View File

@@ -50,16 +50,22 @@ The above example uses secrets as plain strings. It is recommended to use a loca
| Field | Required | Details | Example |
|--------------------|:--------:|--------------------------------|---------------------|
-| type | Y | The type of the account. | `"service_account"` |
-| project_id | Y | The project ID associated with this component. | `"project_id"` |
-| private_key_id | N | The private key ID | `"privatekey"` |
-| client_email | Y | The client email address | `"client@example.com"` |
-| client_id | N | The ID of the client | `"11111111"` |
-| auth_uri | N | The authentication URI | `"https://accounts.google.com/o/oauth2/auth"` |
-| token_uri | N | The authentication token URI | `"https://oauth2.googleapis.com/token"` |
-| auth_provider_x509_cert_url | N | The certificate URL for the auth provider | `"https://www.googleapis.com/oauth2/v1/certs"` |
-| client_x509_cert_url | N | The certificate URL for the client | `"https://www.googleapis.com/robot/v1/metadata/x509/<project-name>.iam.gserviceaccount.com"` |
-| private_key | Y | The private key for authentication | `"privateKey"` |
+| `project_id` | Y | The project ID associated with this component. | `"project_id"` |
+| `type` | N | The type of the account. | `"service_account"` |
+| `private_key_id` | N | If using explicit credentials, this field should contain the `private_key_id` field from the service account json document | `"privateKeyId"` |
+| `private_key` | N | If using explicit credentials, this field should contain the `private_key` field from the service account json. Replace with x509 cert | `12345-12345` |
+| `client_email` | N | If using explicit credentials, this field should contain the `client_email` field from the service account json | `"client@email.com"` |
+| `client_id` | N | If using explicit credentials, this field should contain the `client_id` field from the service account json | `0123456789-0123456789` |
+| `auth_uri` | N | If using explicit credentials, this field should contain the `auth_uri` field from the service account json | `https://accounts.google.com/o/oauth2/auth` |
+| `token_uri` | N | If using explicit credentials, this field should contain the `token_uri` field from the service account json | `https://oauth2.googleapis.com/token` |
+| `auth_provider_x509_cert_url` | N | If using explicit credentials, this field should contain the `auth_provider_x509_cert_url` field from the service account json | `https://www.googleapis.com/oauth2/v1/certs` |
+| `client_x509_cert_url` | N | If using explicit credentials, this field should contain the `client_x509_cert_url` field from the service account json | `https://www.googleapis.com/robot/v1/metadata/x509/<PROJECT_NAME>.iam.gserviceaccount.com` |

+## GCP Credentials
+
+Since the GCP Secret Manager component uses the GCP Go Client Libraries, by default it authenticates using **Application Default Credentials**. This is explained further in the [Authenticate to GCP Cloud services using client libraries](https://cloud.google.com/docs/authentication/client-libraries) guide.
+Also, see how to [Set up Application Default Credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc).
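As with the storage bucket binding, a Secret Manager component can rely on Application Default Credentials and omit the service account fields entirely; a minimal sketch with a placeholder name:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: gcpsecretmanager  # placeholder component name
spec:
  type: secretstores.gcp.secretmanager
  version: v1
  metadata:
    - name: project_id
      value: "myproject-123"
    # omitting the credential fields makes the component authenticate with
    # Application Default Credentials, as described above
```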
## Optional per-request metadata properties ## Optional per-request metadata properties

View File

@@ -88,6 +88,10 @@ For example, if installing using the example above, the Cassandra DNS would be:
{{< /tabs >}}

+## Apache Ignite
+
+[Apache Ignite](https://ignite.apache.org/)'s integration with Cassandra as a caching layer is not supported by this component.

## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components

View File

@@ -8,7 +8,7 @@ description: "The basic spec for a Dapr component"
Dapr defines and registers components using a [resource specification](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes.

-Typically, components are restricted to a particular [namepsace]({{< ref isolation-concept.md >}}) and restricted access through scopes to any particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace through context with applying to Kubernetes.
+Typically, components are restricted to a particular [namespace]({{< ref isolation-concept.md >}}) and restricted through scopes to a particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace through context when applying to Kubernetes.

{{% alert title="Note" color="primary" %}}
The exception to this rule is in self-hosted mode, where daprd ingests component resources when the namespace field is omitted. However, the security profile is moot, as daprd has access to the manifest anyway, unlike in Kubernetes.
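As a sketch of the scoping described above, the following component is pinned to a `production` namespace and scoped to two hypothetical app IDs, so only those applications can load it:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
  namespace: production  # restricts the component to this namespace
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis:6379
scopes:  # further restricts access to these app IDs
  - app1
  - app2
```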

View File

@@ -23,3 +23,8 @@
  state: Alpha
  version: v1
  since: "1.15"
+- component: Local echo
+  link: local-echo
+  state: Stable
+  version: v1
+  since: "1.15"

View File

@@ -20,8 +20,7 @@ data:
debug:
  verbosity: basic
azuremonitor:
-  endpoint: "https://dc.services.visualstudio.com/v2/track"
-  instrumentation_key: "<INSTRUMENTATION-KEY>"
+  connection_string: "<CONNECTION_STRING>"
# maxbatchsize is the maximum number of items that can be
# queued before calling to the configured endpoint
maxbatchsize: 100