				|  | @ -1,3 +1,3 @@ | |||
| # Contributing to Dapr docs | ||||
| 
 | ||||
| Please see [this docs section](https://docs.dapr.io/contributing/) for general guidance on contributions to the Dapr project as well as specific guidelines on contributions to the docs repo. | ||||
| Please see [this docs section](https://docs.dapr.io/contributing/) for general guidance on contributions to the Dapr project as well as specific guidelines on contributions to the docs repo. Learn more about [Dapr bot commands and labels](https://docs.dapr.io/contributing/daprbot/) to improve your docs contributing experience. | ||||
|  | @ -212,18 +212,6 @@ url_latest_version = "https://docs.dapr.io" | |||
| [[params.versions]] | ||||
|   version = "v1.7" | ||||
|   url = "https://v1-7.docs.dapr.io" | ||||
| [[params.versions]] | ||||
|   version = "v1.6" | ||||
|   url = "https://v1-6.docs.dapr.io" | ||||
| [[params.versions]] | ||||
|   version = "v1.5" | ||||
|   url = "https://v1-5.docs.dapr.io" | ||||
| [[params.versions]] | ||||
|   version = "v1.4" | ||||
|   url = "https://v1-4.docs.dapr.io" | ||||
| [[params.versions]] | ||||
|   version = "v1.3" | ||||
|   url = "https://v1-3.docs.dapr.io" | ||||
| 
 | ||||
| # UI Customization | ||||
| [params.ui] | ||||
|  |  | |||
|  | @ -12,7 +12,7 @@ Dapr bot is triggered by a list of commands that helps with common tasks in the | |||
| 
 | ||||
| | Command          | Target                | Description                                                                                              | Who can use                                                                                     | Repository                             | | ||||
| | ---------------- | --------------------- | -------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | -------------------------------------- | | ||||
| | `/assign`        | Issue                 | Assigns an issue to a user or group of users                                                             | Anyone                                                                                          | `dapr`, `components-contrib`, `go-sdk` | | ||||
| | `/assign`        | Issue                 | Assigns an issue to a user or group of users                                                             | Anyone                                                                                          | `dapr`, `docs`, `quickstarts`, `cli`, `components-contrib`, `go-sdk`, `js-sdk`, `java-sdk`, `python-sdk`, `dotnet-sdk` | | ||||
| | `/ok-to-test`    | Pull request          | `dapr`: trigger end to end tests <br/> `components-contrib`: trigger conformance and certification tests | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib`           | | ||||
| | `/ok-to-perf`    | Pull request          | Trigger performance tests.                                                                               | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`                                 | | ||||
| | `/make-me-laugh` | Issue or pull request | Posts a random joke                                                                                      | Users listed in the [bot](https://github.com/dapr/dapr/blob/master/.github/scripts/dapr_bot.js) | `dapr`, `components-contrib`           | | ||||
|  |  | |||
|  | @ -14,7 +14,9 @@ Now that you've learned about the [actor building block]({{< ref "actors-overvie | |||
| 
 | ||||
| Dapr actors are virtual, meaning that their lifetime is not tied to their in-memory representation. As a result, they do not need to be explicitly created or destroyed. The Dapr actor runtime automatically activates an actor the first time it receives a request for that actor ID. If an actor is not used for a period of time, the Dapr actor runtime garbage-collects the in-memory object. It will also maintain knowledge of the actor's existence should it need to be reactivated later. | ||||
| 
 | ||||
| Invocation of actor methods and reminders reset the idle time, e.g. reminder firing will keep the actor active. Actor reminders fire whether an actor is active or inactive, if fired for inactive actor, it will activate the actor first. Actor timers do not reset the idle time, so timer firing will not keep the actor active. Timers only fire while the actor is active. | ||||
| Invocation of actor methods, timers, and reminders resets the actor's idle time. For example, a firing reminder keeps the actor active. | ||||
| - Actor reminders fire whether an actor is active or inactive. If a reminder fires for an inactive actor, the actor is activated first. | ||||
| - A firing timer resets the idle time; however, timers only fire while the actor is active. | ||||
| 
 | ||||
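| For illustration, here's a minimal Go SDK sketch of registering an actor reminder; the actor type, ID, and reminder settings are hypothetical placeholders: | ||||
| 
 | ||||
| ```go | ||||
| import ( | ||||
| 	"context" | ||||
| 	dapr "github.com/dapr/go-sdk/client" | ||||
| ) | ||||
| 
 | ||||
| // Registering a reminder (re)activates the actor, and each firing | ||||
| // resets the actor's idle time. | ||||
| func registerReminder(ctx context.Context, client dapr.Client) error { | ||||
| 	return client.RegisterActorReminder(ctx, &dapr.RegisterActorReminderRequest{ | ||||
| 		ActorType: "myActorType", // hypothetical actor type | ||||
| 		ActorID:   "myActorID",   // hypothetical actor ID | ||||
| 		Name:      "myReminder", | ||||
| 		DueTime:   "5s", | ||||
| 		Period:    "30s", | ||||
| 		Data:      []byte(`"reminder payload"`), | ||||
| 	}) | ||||
| } | ||||
| ``` | ||||
| 
 | ||||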
| The idle timeout and scan interval the Dapr runtime uses to determine whether an actor can be garbage-collected are configurable. This information can be passed when the Dapr runtime calls into the actor service to get the supported actor types. | ||||
| 
 | ||||
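| As a sketch of how an application surfaces these settings, the actor service can include them in the JSON returned from its `/dapr/config` endpoint; the values below are illustrative assumptions: | ||||
| 
 | ||||
| ```go | ||||
| package main | ||||
| 
 | ||||
| import ( | ||||
| 	"log" | ||||
| 	"net/http" | ||||
| ) | ||||
| 
 | ||||
| // configHandler answers the Dapr runtime's query for supported actor | ||||
| // types, including optional garbage-collection tuning (example values). | ||||
| func configHandler(w http.ResponseWriter, r *http.Request) { | ||||
| 	w.Header().Set("Content-Type", "application/json") | ||||
| 	w.Write([]byte(`{ | ||||
| 		"entities": ["myActorType"], | ||||
| 		"actorIdleTimeout": "1h", | ||||
| 		"actorScanInterval": "30s" | ||||
| 	}`)) | ||||
| } | ||||
| 
 | ||||
| func main() { | ||||
| 	http.HandleFunc("/dapr/config", configHandler) | ||||
| 	log.Fatal(http.ListenAndServe(":3000", nil)) | ||||
| } | ||||
| ``` | ||||
| 
 | ||||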
|  |  | |||
|  | @ -146,6 +146,36 @@ If an invocation of the method fails, the timer is not removed. Timers are only | |||
| - The executions run out | ||||
| - You delete it explicitly | ||||
| 
 | ||||
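| For example, deleting a timer explicitly with the Go SDK might look like the following sketch (the actor type, ID, and timer name are hypothetical): | ||||
| 
 | ||||
| ```go | ||||
| import ( | ||||
| 	"context" | ||||
| 	dapr "github.com/dapr/go-sdk/client" | ||||
| ) | ||||
| 
 | ||||
| // Explicitly deleting the timer stops any further invocations. | ||||
| func removeTimer(ctx context.Context, client dapr.Client) error { | ||||
| 	return client.UnregisterActorTimer(ctx, &dapr.UnregisterActorTimerRequest{ | ||||
| 		ActorType: "myActorType", | ||||
| 		ActorID:   "myActorID", | ||||
| 		Name:      "myTimer", | ||||
| 	}) | ||||
| } | ||||
| ``` | ||||
| 
 | ||||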
| ## Reminder data serialization format | ||||
| 
 | ||||
| Actor reminder data is serialized to JSON by default. From Dapr v1.13 onwards, a protobuf serialization format is supported for reminder data, which, depending on throughput and payload size, can significantly improve performance, giving you higher throughput and lower latency. Another benefit is that less data is stored in the actor's underlying database, which can reduce costs when using some cloud databases. A restriction of protobuf serialization is that the reminder data can no longer be queried. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Protobuf serialization will become the default format in Dapr 1.14. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| Reminder data saved in protobuf format cannot be read in Dapr 1.12.x and earlier versions. It's recommended to test this feature in Dapr v1.13 and verify that it works as expected with your database before taking it into production. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| If you use protobuf serialization in Dapr v1.13 and need to downgrade to an earlier Dapr version, the reminder data will be incompatible with versions 1.12.x and earlier. **Once you save your reminder data in protobuf format, you cannot move it back to JSON format**. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ### Enabling protobuf serialization on Kubernetes | ||||
| 
 | ||||
| To use protobuf serialization for actor reminders on Kubernetes, use the following Helm value: | ||||
| 
 | ||||
| ``` | ||||
| --set dapr_placement.maxActorApiLevel=20 | ||||
| ``` | ||||
| 
 | ||||
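| For example, as part of a Helm install or upgrade (the release name and namespace below are illustrative): | ||||
| 
 | ||||
| ``` | ||||
| helm upgrade dapr dapr/dapr \ | ||||
|   --namespace dapr-system \ | ||||
|   --set dapr_placement.maxActorApiLevel=20 | ||||
| ``` | ||||
| 
 | ||||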
| ### Enabling protobuf serialization on self-hosted | ||||
| 
 | ||||
| To use protobuf serialization for actor reminders in self-hosted mode, use the following `daprd` flag: | ||||
| 
 | ||||
| ``` | ||||
| --max-api-level=20 | ||||
| ``` | ||||
| 
 | ||||
| ## Next steps | ||||
| 
 | ||||
| {{< button text="Configure actor runtime behavior >>" page="actors-runtime-config.md" >}} | ||||
|  | @ -153,4 +183,4 @@ If an invocation of the method fails, the timer is not removed. Timers are only | |||
| ## Related links | ||||
| 
 | ||||
| - [Actors API reference]({{< ref actors_api.md >}}) | ||||
| - [Actors overview]({{< ref actors-overview.md >}}) | ||||
| - [Actors overview]({{< ref actors-overview.md >}}) | ||||
|  |  | |||
|  | @ -6,16 +6,16 @@ weight: 2000 | |||
| description: "Learn how to encrypt and decrypt files" | ||||
| --- | ||||
| 
 | ||||
| Now that you've read about [Cryptography as a Dapr building block]({{< ref cryptography-overview.md >}}), let's walk through using the cryptography APIs with the SDKs.   | ||||
| Now that you've read about [Cryptography as a Dapr building block]({{< ref cryptography-overview.md >}}), let's walk through using the cryptography APIs with the SDKs. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
|   Dapr cryptography is currently in alpha. | ||||
| Dapr cryptography is currently in alpha. | ||||
| 
 | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Encrypt | ||||
| 
 | ||||
| {{< tabs "JavaScript" "Go" >}} | ||||
| {{< tabs "JavaScript" "Go" ".NET" >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -136,12 +136,32 @@ if err != nil { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!-- .NET --> | ||||
| Using the Dapr SDK in your project with the gRPC APIs, you can encrypt data in a string or a byte array: | ||||
| 
 | ||||
| ```csharp | ||||
| using System.Text; | ||||
| using Dapr.Client; | ||||
| 
 | ||||
| using var client = new DaprClientBuilder().Build(); | ||||
| 
 | ||||
| const string componentName = "azurekeyvault"; //Change this to match your cryptography component | ||||
| const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store | ||||
| 
 | ||||
| const string plainText = "This is the value we're going to encrypt today"; | ||||
| 
 | ||||
| //Encode the string to a UTF-8 byte array and encrypt it | ||||
| var plainTextBytes = Encoding.UTF8.GetBytes(plainText); | ||||
| var encryptedBytesResult = await client.EncryptAsync(componentName, plainTextBytes, keyName, new EncryptionOptions(KeyWrapAlgorithm.Rsa)); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
| ## Decrypt | ||||
| 
 | ||||
| {{< tabs "JavaScript" "Go" >}} | ||||
| {{< tabs "JavaScript" "Go" ".NET" >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -186,6 +206,29 @@ out, err := sdkClient.Decrypt(context.Background(), rf, dapr.EncryptOptions{ | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!-- .NET --> | ||||
| To decrypt a string, use the `DecryptAsync` gRPC API in your project. | ||||
| 
 | ||||
| In the following example, we'll take a byte array (such as from the example above) and decrypt it to a UTF-8 encoded string. | ||||
| 
 | ||||
| ```csharp | ||||
| public async Task<string> DecryptBytesAsync(byte[] encryptedBytes) | ||||
| { | ||||
|   using var client = new DaprClientBuilder().Build(); | ||||
| 
 | ||||
|   const string componentName = "azurekeyvault"; //Change this to match your cryptography component | ||||
|   const string keyName = "myKey"; //Change this to match the name of the key in your cryptographic store | ||||
| 
 | ||||
|   var decryptedBytes = await client.DecryptAsync(componentName, encryptedBytes, keyName); | ||||
|   var decryptedString = Encoding.UTF8.GetString(decryptedBytes.ToArray()); | ||||
|   return decryptedString; | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| ## Next steps | ||||
|  |  | |||
|  | @ -90,9 +90,13 @@ The diagram below shows an example of how this works. If you have 1 instance of | |||
| 
 | ||||
| **Note**: App ID is unique per _application_, not application instance. Regardless of how many instances of that application exist (due to scaling), all of them will share the same app ID. | ||||
| 
 | ||||
| ### Pluggable service discovery | ||||
| ### Swappable service discovery | ||||
| 
 | ||||
| Dapr can run on a variety of [hosting platforms]({{< ref hosting >}}). To enable service discovery and service invocation, Dapr uses pluggable [name resolution components]({{< ref supported-name-resolution >}}). For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster. Self-hosted machines can use the mDNS name resolution component. The Consul name resolution component can be used in any hosting environment, including Kubernetes or self-hosted. | ||||
| Dapr can run on a variety of [hosting platforms]({{< ref hosting >}}). To enable swappable service discovery with service invocation, Dapr uses [name resolution components]({{< ref supported-name-resolution >}}). For example, the Kubernetes name resolution component uses the Kubernetes DNS service to resolve the location of other applications running in the cluster.   | ||||
| 
 | ||||
| Self-hosted machines can use the mDNS name resolution component. As an alternative, you can use the SQLite name resolution component to run Dapr on single-node environments and for local development scenarios. Dapr sidecars that are part of the cluster store their information in a SQLite database on the local machine. | ||||
| 
 | ||||
| The Consul name resolution component is particularly suited to multi-machine deployments and can be used in any hosting environment, including Kubernetes, multiple VMs, or self-hosted. | ||||
| 
 | ||||
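| As a sketch, the name resolution component is selected in the Dapr Configuration resource; the SQLite example below, including its connection string, is illustrative: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration | ||||
| metadata: | ||||
|   name: appconfig | ||||
| spec: | ||||
|   nameResolution: | ||||
|     component: "sqlite" | ||||
|     version: "v1" | ||||
|     configuration: | ||||
|       connectionString: "/home/user/.dapr/nr.db" | ||||
| ``` | ||||
| 
 | ||||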
| ### Streaming for HTTP service invocation | ||||
| 
 | ||||
|  |  | |||
|  | @ -566,6 +566,33 @@ To launch a Dapr sidecar for the above example application, run a command simila | |||
| dapr run --app-id orderprocessing --app-port 6001 --dapr-http-port 3601 --dapr-grpc-port 60001 dotnet run | ||||
| ``` | ||||
| 
 | ||||
| The above example returns a `BulkStateItem` with the serialized format of the value you saved to state. If you prefer that the value be deserialized by the SDK across each of your bulk response items, you can instead use the following: | ||||
| 
 | ||||
| ```csharp | ||||
| //dependencies | ||||
| using Dapr.Client; | ||||
| //code | ||||
| namespace EventService | ||||
| { | ||||
|     class Program | ||||
|     { | ||||
|         static async Task Main(string[] args) | ||||
|         { | ||||
|             string DAPR_STORE_NAME = "statestore"; | ||||
|             //Using Dapr SDK to retrieve multiple states | ||||
|             using var client = new DaprClientBuilder().Build(); | ||||
|             IReadOnlyList<BulkStateItem<Widget>> multipleStateResult = await client.GetBulkStateAsync<Widget>(DAPR_STORE_NAME, new List<string> { "widget_1", "widget_2" }, parallelism: 1); | ||||
|         } | ||||
|     } | ||||
| 
 | ||||
|     class Widget | ||||
|     { | ||||
|         public string Size { get; set; } | ||||
|         public string Color { get; set; } | ||||
|     } | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
|  |  | |||
|  | @ -34,7 +34,7 @@ The Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar si | |||
| 
 | ||||
| [Workflow activities]({{< ref "workflow-features-concepts.md#workflow-activites" >}}) are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process. | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -52,6 +52,37 @@ def hello_act(ctx: WorkflowActivityContext, input): | |||
| [See the `hello_act` workflow activity in context.](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py#LL40C1-L43C59) | ||||
| 
 | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!--javascript--> | ||||
| 
 | ||||
| Define the workflow activities you'd like your workflow to perform. Activities are passed a `WorkflowActivityContext`, shown below, which wraps the inner activity context and exposes the workflow instance and activity task identifiers. | ||||
| 
 | ||||
| ```javascript | ||||
| export default class WorkflowActivityContext { | ||||
|   private readonly _innerContext: ActivityContext; | ||||
|   constructor(innerContext: ActivityContext) { | ||||
|     if (!innerContext) { | ||||
|       throw new Error("ActivityContext cannot be undefined"); | ||||
|     } | ||||
|     this._innerContext = innerContext; | ||||
|   } | ||||
| 
 | ||||
|   public getWorkflowInstanceId(): string { | ||||
|     return this._innerContext.orchestrationId; | ||||
|   } | ||||
| 
 | ||||
|   public getWorkflowActivityId(): number { | ||||
|     return this._innerContext.taskId; | ||||
|   } | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| [See the workflow activity in context.](https://github.com/dapr/js-sdk/blob/main/src/workflow/runtime/WorkflowActivityContext.ts) | ||||
| 
 | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
|  | @ -165,6 +196,27 @@ public class DemoWorkflowActivity implements WorkflowActivity { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!--go--> | ||||
| 
 | ||||
| Define each workflow activity you'd like your workflow to perform. The activity input can be unmarshalled from the context with `ctx.GetInput`. Activities should be defined as functions that take a `ctx workflow.ActivityContext` parameter and return an `any` and an `error`. | ||||
|   | ||||
| ```go | ||||
| func TestActivity(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	 | ||||
| 	// Do something here | ||||
| 	return "result", nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| [See the Go SDK workflow activity example in context.](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
|  | @ -172,7 +224,7 @@ public class DemoWorkflowActivity implements WorkflowActivity { | |||
| 
 | ||||
| Next, register and call the activities in a workflow. | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -193,6 +245,51 @@ def hello_world_wf(ctx: DaprWorkflowContext, input): | |||
| [See the `hello_world_wf` workflow in context.](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py#LL32C1-L38C51) | ||||
| 
 | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!--javascript--> | ||||
| 
 | ||||
| Next, register the workflow with the `WorkflowRuntime` class and start the workflow runtime. | ||||
|   | ||||
| ```javascript | ||||
| export default class WorkflowRuntime { | ||||
| 
 | ||||
|   //.. | ||||
|   // Register workflow implementation for handling orchestrations | ||||
|   public registerWorkflow(workflow: TWorkflow): WorkflowRuntime { | ||||
|     const name = getFunctionName(workflow); | ||||
|     const workflowWrapper = (ctx: OrchestrationContext, input: any): any => { | ||||
|       const workflowContext = new WorkflowContext(ctx); | ||||
|       return workflow(workflowContext, input); | ||||
|     }; | ||||
|     this.worker.addNamedOrchestrator(name, workflowWrapper); | ||||
|     return this; | ||||
|   } | ||||
| 
 | ||||
|   // Register workflow activities | ||||
|   public registerActivity(fn: TWorkflowActivity<TInput, TOutput>): WorkflowRuntime { | ||||
|     const name = getFunctionName(fn); | ||||
|     const activityWrapper = (ctx: ActivityContext, input: TInput): TOutput => { | ||||
|       const wfActivityContext = new WorkflowActivityContext(ctx); | ||||
|       return fn(wfActivityContext, input); | ||||
|     }; | ||||
|     this.worker.addNamedActivity(name, activityWrapper); | ||||
|     return this; | ||||
|   } | ||||
| 
 | ||||
|   // Start the workflow runtime processing items and block. | ||||
|   public async start() { | ||||
|     await this.worker.start(); | ||||
|   } | ||||
| 
 | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| [See the `WorkflowRuntime` in context.](https://github.com/dapr/js-sdk/blob/main/src/workflow/runtime/WorkflowRuntime.ts) | ||||
| 
 | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
|  | @ -267,6 +364,37 @@ public class DemoWorkflowWorker { | |||
| [See the Java SDK workflow in context.](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/workflows/DemoWorkflowWorker.java) | ||||
| 
 | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!--go--> | ||||
| 
 | ||||
| Define your workflow function with the parameter `ctx *workflow.WorkflowContext`, returning an `any` and an `error`. Invoke your defined activities from within your workflow. | ||||
| 
 | ||||
| ```go | ||||
| func TestWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	var output string | ||||
| 	if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	if err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&output); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	 | ||||
| 	if err := ctx.CreateTimer(time.Second).Await(nil); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	return output, nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| [See the Go SDK workflow in context.](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
|  | @ -275,7 +403,7 @@ public class DemoWorkflowWorker { | |||
| 
 | ||||
| Finally, compose the application using the workflow. | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -364,6 +492,153 @@ if __name__ == '__main__': | |||
| ``` | ||||
| 
 | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!--javascript--> | ||||
| 
 | ||||
| [The following example](https://github.com/dapr/js-sdk/blob/main/src/workflow/client/DaprWorkflowClient.ts) is a basic JavaScript application using the JavaScript SDK. As in this example, your project code would include: | ||||
| 
 | ||||
| - The following classes: | ||||
|   - `WorkflowRuntime`: Allows you to register workflows and workflow activities | ||||
|   - `DaprWorkflowContext`: Allows you to [create workflows]({{< ref "#write-the-workflow" >}}) | ||||
|   - `WorkflowActivityContext`: Allows you to [create workflow activities]({{< ref "#write-the-workflow-activities" >}}) | ||||
| - API calls. In the example below, these calls start, terminate, get status, pause, resume, raise event, and purge the workflow. | ||||
|   | ||||
| ```javascript | ||||
| import { TaskHubGrpcClient } from "@microsoft/durabletask-js"; | ||||
| import { WorkflowState } from "./WorkflowState"; | ||||
| import { generateApiTokenClientInterceptors, generateEndpoint, getDaprApiToken } from "../internal/index"; | ||||
| import { TWorkflow } from "../../types/workflow/Workflow.type"; | ||||
| import { getFunctionName } from "../internal"; | ||||
| import { WorkflowClientOptions } from "../../types/workflow/WorkflowClientOption"; | ||||
| 
 | ||||
| /** DaprWorkflowClient class defines client operations for managing workflow instances. */ | ||||
| 
 | ||||
| export default class DaprWorkflowClient { | ||||
|   private readonly _innerClient: TaskHubGrpcClient; | ||||
| 
 | ||||
|   /** Initialize a new instance of the DaprWorkflowClient. | ||||
|    */ | ||||
|   constructor(options: Partial<WorkflowClientOptions> = {}) { | ||||
|     const grpcEndpoint = generateEndpoint(options); | ||||
|     options.daprApiToken = getDaprApiToken(options); | ||||
|     this._innerClient = this.buildInnerClient(grpcEndpoint.endpoint, options); | ||||
|   } | ||||
| 
 | ||||
|   private buildInnerClient(hostAddress: string, options: Partial<WorkflowClientOptions>): TaskHubGrpcClient { | ||||
|     let innerOptions = options?.grpcOptions; | ||||
|     if (options.daprApiToken !== undefined && options.daprApiToken !== "") { | ||||
|       innerOptions = { | ||||
|         ...innerOptions, | ||||
|         interceptors: [generateApiTokenClientInterceptors(options), ...(innerOptions?.interceptors ?? [])], | ||||
|       }; | ||||
|     } | ||||
|     return new TaskHubGrpcClient(hostAddress, innerOptions); | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Schedule a new workflow using the DurableTask client. | ||||
|    */ | ||||
|   public async scheduleNewWorkflow( | ||||
|     workflow: TWorkflow | string, | ||||
|     input?: any, | ||||
|     instanceId?: string, | ||||
|     startAt?: Date, | ||||
|   ): Promise<string> { | ||||
|     if (typeof workflow === "string") { | ||||
|       return await this._innerClient.scheduleNewOrchestration(workflow, input, instanceId, startAt); | ||||
|     } | ||||
|     return await this._innerClient.scheduleNewOrchestration(getFunctionName(workflow), input, instanceId, startAt); | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Terminate the workflow associated with the provided instance id. | ||||
|    * | ||||
|    * @param {string} workflowInstanceId - Workflow instance id to terminate. | ||||
|    * @param {any} output - The optional output to set for the terminated workflow instance. | ||||
|    */ | ||||
|   public async terminateWorkflow(workflowInstanceId: string, output: any) { | ||||
|     await this._innerClient.terminateOrchestration(workflowInstanceId, output); | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Fetch workflow instance metadata from the configured durable store. | ||||
|    */ | ||||
|   public async getWorkflowState( | ||||
|     workflowInstanceId: string, | ||||
|     getInputsAndOutputs: boolean, | ||||
|   ): Promise<WorkflowState | undefined> { | ||||
|     const state = await this._innerClient.getOrchestrationState(workflowInstanceId, getInputsAndOutputs); | ||||
|     if (state !== undefined) { | ||||
|       return new WorkflowState(state); | ||||
|     } | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Waits for a workflow to start running | ||||
|    */ | ||||
|   public async waitForWorkflowStart( | ||||
|     workflowInstanceId: string, | ||||
|     fetchPayloads = true, | ||||
|     timeoutInSeconds = 60, | ||||
|   ): Promise<WorkflowState | undefined> { | ||||
|     const state = await this._innerClient.waitForOrchestrationStart( | ||||
|       workflowInstanceId, | ||||
|       fetchPayloads, | ||||
|       timeoutInSeconds, | ||||
|     ); | ||||
|     if (state !== undefined) { | ||||
|       return new WorkflowState(state); | ||||
|     } | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Waits for a workflow to complete running | ||||
|    */ | ||||
|   public async waitForWorkflowCompletion( | ||||
|     workflowInstanceId: string, | ||||
|     fetchPayloads = true, | ||||
|     timeoutInSeconds = 60, | ||||
|   ): Promise<WorkflowState | undefined> { | ||||
|     const state = await this._innerClient.waitForOrchestrationCompletion( | ||||
|       workflowInstanceId, | ||||
|       fetchPayloads, | ||||
|       timeoutInSeconds, | ||||
|     ); | ||||
|     if (state != undefined) { | ||||
|       return new WorkflowState(state); | ||||
|     } | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Sends an event notification message to an awaiting workflow instance | ||||
|    */ | ||||
|   public async raiseEvent(workflowInstanceId: string, eventName: string, eventPayload?: any) { | ||||
|     this._innerClient.raiseOrchestrationEvent(workflowInstanceId, eventName, eventPayload); | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Purges the workflow instance state from the workflow state store. | ||||
|    */ | ||||
|   public async purgeWorkflow(workflowInstanceId: string): Promise<boolean> { | ||||
|     const purgeResult = await this._innerClient.purgeOrchestration(workflowInstanceId); | ||||
|     if (purgeResult !== undefined) { | ||||
|       return purgeResult.deletedInstanceCount > 0; | ||||
|     } | ||||
|     return false; | ||||
|   } | ||||
| 
 | ||||
|   /** | ||||
|    * Closes the inner DurableTask client and shutdown the GRPC channel. | ||||
|    */ | ||||
|   public async stop() { | ||||
|     await this._innerClient.stop(); | ||||
|   } | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
|  | @ -484,6 +759,336 @@ public class DemoWorkflow extends Workflow { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| <!--go--> | ||||
| 
 | ||||
| [As in the following example](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md), a hello-world application using the Go SDK and Dapr Workflow would include: | ||||
| 
 | ||||
| - Importing the Go SDK `client` package for the Dapr client capabilities. | ||||
| - The `TestWorkflow` function | ||||
| - Creating the workflow with input and output. | ||||
| - API calls. In the example below, these calls start, pause, resume, raise an event for, purge, and terminate the workflow. | ||||
|   | ||||
| ```go | ||||
| package main | ||||
| 
 | ||||
| import ( | ||||
| 	"context" | ||||
| 	"fmt" | ||||
| 	"log" | ||||
| 	"time" | ||||
| 
 | ||||
| 	"github.com/dapr/go-sdk/client" | ||||
| 	"github.com/dapr/go-sdk/workflow" | ||||
| ) | ||||
| 
 | ||||
| var stage = 0 | ||||
| 
 | ||||
| const ( | ||||
| 	workflowComponent = "dapr" | ||||
| ) | ||||
| 
 | ||||
| func main() { | ||||
| 	w, err := workflow.NewWorker() | ||||
| 	if err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("Worker initialized") | ||||
| 
 | ||||
| 	if err := w.RegisterWorkflow(TestWorkflow); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	fmt.Println("TestWorkflow registered") | ||||
| 
 | ||||
| 	if err := w.RegisterActivity(TestActivity); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	fmt.Println("TestActivity registered") | ||||
| 
 | ||||
| 	// Start workflow runner | ||||
| 	if err := w.Start(); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	fmt.Println("runner started") | ||||
| 
 | ||||
| 	daprClient, err := client.NewClient() | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to initialise client: %v", err) | ||||
| 	} | ||||
| 	defer daprClient.Close() | ||||
| 	ctx := context.Background() | ||||
| 
 | ||||
| 	// Start workflow test | ||||
| 	respStart, err := daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 		WorkflowName:      "TestWorkflow", | ||||
| 		Options:           nil, | ||||
| 		Input:             1, | ||||
| 		SendRawInput:      false, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to start workflow: %v", err) | ||||
| 	} | ||||
| 	fmt.Printf("workflow started with id: %v\n", respStart.InstanceID) | ||||
| 
 | ||||
| 	// Pause workflow test | ||||
| 	err = daprClient.PauseWorkflowBeta1(ctx, &client.PauseWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 
 | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to pause workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	respGet, err := daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to get workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	if respGet.RuntimeStatus != workflow.StatusSuspended.String() { | ||||
| 		log.Fatalf("workflow not paused: %v", respGet.RuntimeStatus) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("workflow paused\n") | ||||
| 
 | ||||
| 	// Resume workflow test | ||||
| 	err = daprClient.ResumeWorkflowBeta1(ctx, &client.ResumeWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 
 | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to resume workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to get workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	if respGet.RuntimeStatus != workflow.StatusRunning.String() { | ||||
| 		log.Fatalf("workflow not running") | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("workflow resumed") | ||||
| 
 | ||||
| 	fmt.Printf("stage: %d\n", stage) | ||||
| 
 | ||||
| 	// Raise Event Test | ||||
| 
 | ||||
| 	err = daprClient.RaiseEventWorkflowBeta1(ctx, &client.RaiseEventWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 		EventName:         "testEvent", | ||||
| 		EventData:         "testData", | ||||
| 		SendRawData:       false, | ||||
| 	}) | ||||
| 
 | ||||
| 	if err != nil { | ||||
| 		fmt.Printf("failed to raise event: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("workflow event raised") | ||||
| 
 | ||||
| 	time.Sleep(time.Second) // allow workflow to advance | ||||
| 
 | ||||
| 	fmt.Printf("stage: %d\n", stage) | ||||
| 
 | ||||
| 	respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to get workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("workflow status: %v\n", respGet.RuntimeStatus) | ||||
| 
 | ||||
| 	// Purge workflow test | ||||
| 	err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to purge workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil && respGet != nil { | ||||
| 		log.Fatal("failed to purge workflow") | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("workflow purged") | ||||
| 
 | ||||
| 	fmt.Printf("stage: %d\n", stage) | ||||
| 
 | ||||
| 	// Terminate workflow test | ||||
| 	respStart, err = daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 		WorkflowName:      "TestWorkflow", | ||||
| 		Options:           nil, | ||||
| 		Input:             1, | ||||
| 		SendRawInput:      false, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to start workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("workflow started with id: %s\n", respStart.InstanceID) | ||||
| 
 | ||||
| 	err = daprClient.TerminateWorkflowBeta1(ctx, &client.TerminateWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to terminate workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to get workflow: %v", err) | ||||
| 	} | ||||
| 	if respGet.RuntimeStatus != workflow.StatusTerminated.String() { | ||||
| 		log.Fatal("failed to terminate workflow") | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("workflow terminated") | ||||
| 
 | ||||
| 	err = daprClient.PurgeWorkflowBeta1(ctx, &client.PurgeWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 
 | ||||
| 	respGet, err = daprClient.GetWorkflowBeta1(ctx, &client.GetWorkflowRequest{ | ||||
| 		InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 		WorkflowComponent: workflowComponent, | ||||
| 	}) | ||||
| 	if err == nil || respGet != nil { | ||||
| 		log.Fatalf("failed to purge workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("workflow purged") | ||||
| 
 | ||||
| 	stage = 0 | ||||
| 	fmt.Println("workflow client test") | ||||
| 
 | ||||
| 	wfClient, err := workflow.NewClient() | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("[wfclient] failed to initialize: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	id, err := wfClient.ScheduleNewWorkflow(ctx, "TestWorkflow", workflow.WithInstanceID("a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9"), workflow.WithInput(1)) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("[wfclient] failed to start workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("[wfclient] started workflow with id: %s\n", id) | ||||
| 
 | ||||
| 	metadata, err := wfClient.FetchWorkflowMetadata(ctx, id) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("[wfclient] failed to get workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("[wfclient] workflow status: %v\n", metadata.RuntimeStatus.String()) | ||||
| 
 | ||||
| 	if stage != 1 { | ||||
| 		log.Fatalf("Workflow assertion failed while validating the wfclient. Stage 1 expected, current: %d", stage) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("[wfclient] stage: %d\n", stage) | ||||
| 
 | ||||
| 	// raise event | ||||
| 
 | ||||
| 	if err := wfClient.RaiseEvent(ctx, id, "testEvent", workflow.WithEventPayload("testData")); err != nil { | ||||
| 		log.Fatalf("[wfclient] failed to raise event: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("[wfclient] event raised") | ||||
| 
 | ||||
| 	// Sleep to allow the workflow to advance | ||||
| 	time.Sleep(time.Second) | ||||
| 
 | ||||
| 	if stage != 2 { | ||||
| 		log.Fatalf("Workflow assertion failed while validating the wfclient. Stage 2 expected, current: %d", stage) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Printf("[wfclient] stage: %d\n", stage) | ||||
| 
 | ||||
| 	// stop workflow | ||||
| 	if err := wfClient.TerminateWorkflow(ctx, id); err != nil { | ||||
| 		log.Fatalf("[wfclient] failed to terminate workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("[wfclient] workflow terminated") | ||||
| 
 | ||||
| 	if err := wfClient.PurgeWorkflow(ctx, id); err != nil { | ||||
| 		log.Fatalf("[wfclient] failed to purge workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("[wfclient] workflow purged") | ||||
| 
 | ||||
| 	// stop workflow runtime | ||||
| 	if err := w.Shutdown(); err != nil { | ||||
| 		log.Fatalf("failed to shutdown runtime: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("workflow worker successfully shutdown") | ||||
| } | ||||
| 
 | ||||
| func TestWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	var output string | ||||
| 	if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 
 | ||||
| 	err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&output) | ||||
| 	if err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 
 | ||||
| 	if err := ctx.CallActivity(TestActivity, workflow.ActivityInput(input)).Await(&output); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 
 | ||||
| 	return output, nil | ||||
| } | ||||
| 
 | ||||
| func TestActivity(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 
 | ||||
| 	stage += input | ||||
| 
 | ||||
| 	return fmt.Sprintf("Stage: %d", stage), nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| [See the full Go SDK workflow example in context.](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
|  | @ -504,5 +1109,7 @@ Now that you've authored a workflow, learn how to manage it. | |||
| - [Workflow API reference]({{< ref workflow_api.md >}}) | ||||
| - Try out the full SDK examples: | ||||
|   - [Python example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | ||||
|   - [JavaScript example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | ||||
|   - [.NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
|   - [Java example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|   - [Go example](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
|  |  | |||
|  | @ -12,7 +12,7 @@ Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-v | |||
| 
 | ||||
| Now that you've [authored the workflow and its activities in your application]({{< ref howto-author-workflow.md >}}), you can start, terminate, and get information about the workflow using HTTP API calls. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}). | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java HTTP >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go HTTP >}} | ||||
| 
 | ||||
| <!--Python--> | ||||
| {{% codetab %}} | ||||
|  | @ -63,6 +63,77 @@ d.terminate_workflow(instance_id=instanceId, workflow_component=workflowComponen | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| <!--JavaScript--> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Manage your workflow within your code. In the workflow example from the [Author a workflow]({{< ref "howto-author-workflow.md#write-the-application" >}}) guide, the workflow is managed in the code using the following APIs: | ||||
| - **client.workflow.start**: Start an instance of a workflow | ||||
| - **client.workflow.get**: Get information on the status of the workflow | ||||
| - **client.workflow.pause**: Pauses or suspends a workflow instance that can later be resumed | ||||
| - **client.workflow.resume**: Resumes a paused workflow instance | ||||
| - **client.workflow.purge**: Removes all metadata related to a specific workflow instance | ||||
| - **client.workflow.terminate**: Terminate or stop a particular instance of a workflow | ||||
| 
 | ||||
| ```javascript | ||||
| import { DaprClient } from "@dapr/dapr"; | ||||
| 
 | ||||
| async function printWorkflowStatus(client: DaprClient, instanceId: string) { | ||||
|   const workflow = await client.workflow.get(instanceId); | ||||
|   console.log( | ||||
|     `Workflow ${workflow.workflowName}, created at ${workflow.createdAt.toUTCString()}, has status ${ | ||||
|       workflow.runtimeStatus | ||||
|     }`, | ||||
|   ); | ||||
|   console.log(`Additional properties: ${JSON.stringify(workflow.properties)}`); | ||||
|   console.log("--------------------------------------------------\n\n"); | ||||
| } | ||||
| 
 | ||||
| async function start() { | ||||
|   const client = new DaprClient(); | ||||
| 
 | ||||
|   // Start a new workflow instance | ||||
|   const instanceId = await client.workflow.start("OrderProcessingWorkflow", { | ||||
|     Name: "Paperclips", | ||||
|     TotalCost: 99.95, | ||||
|     Quantity: 4, | ||||
|   }); | ||||
|   console.log(`Started workflow instance ${instanceId}`); | ||||
|   await printWorkflowStatus(client, instanceId); | ||||
| 
 | ||||
|   // Pause a workflow instance | ||||
|   await client.workflow.pause(instanceId); | ||||
|   console.log(`Paused workflow instance ${instanceId}`); | ||||
|   await printWorkflowStatus(client, instanceId); | ||||
| 
 | ||||
|   // Resume a workflow instance | ||||
|   await client.workflow.resume(instanceId); | ||||
|   console.log(`Resumed workflow instance ${instanceId}`); | ||||
|   await printWorkflowStatus(client, instanceId); | ||||
| 
 | ||||
|   // Terminate a workflow instance | ||||
|   await client.workflow.terminate(instanceId); | ||||
|   console.log(`Terminated workflow instance ${instanceId}`); | ||||
|   await printWorkflowStatus(client, instanceId); | ||||
| 
 | ||||
|   // Wait 30 seconds for the workflow to complete | ||||
|   await new Promise((resolve) => setTimeout(resolve, 30000)); | ||||
|   await printWorkflowStatus(client, instanceId); | ||||
| 
 | ||||
|   // Purge a workflow instance | ||||
|   await client.workflow.purge(instanceId); | ||||
|   console.log(`Purged workflow instance ${instanceId}`); | ||||
|   // This will throw an error because the workflow instance no longer exists. | ||||
|   await printWorkflowStatus(client, instanceId); | ||||
| } | ||||
| 
 | ||||
| start().catch((e) => { | ||||
|   console.error(e); | ||||
|   process.exit(1); | ||||
| }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| <!--NET--> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -99,10 +170,10 @@ await daprClient.PurgeWorkflowAsync(orderId, workflowComponent); | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| <!--Python--> | ||||
| <!--Java--> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Manage your workflow within your code. [In the workflow example from the Java SDK](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/workflows/DemoWorkflowClient.java), the workflow is registered in the code using the following APIs: | ||||
| Manage your workflow within your code. [In the workflow example from the Java SDK](https://github.com/dapr/java-sdk/blob/master/examples/src/main/java/io/dapr/examples/workflows/), the workflow is managed in the code using the following APIs: | ||||
| 
 | ||||
| - **scheduleNewWorkflow**: Starts a new workflow instance | ||||
| - **getInstanceState**: Get information on the status of the workflow | ||||
|  | @ -164,6 +235,84 @@ public class DemoWorkflowClient { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| <!--Go--> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Manage your workflow within your code. [In the workflow example from the Go SDK](https://github.com/dapr/go-sdk/tree/main/examples/workflow), the workflow is managed in the code using the following APIs: | ||||
| 
 | ||||
| - **StartWorkflow**: Starts a new workflow instance | ||||
| - **GetWorkflow**: Get information on the status of the workflow | ||||
| - **PauseWorkflow**: Pauses or suspends a workflow instance that can later be resumed | ||||
| - **RaiseEventWorkflow**: Raises events/tasks for the running workflow instance | ||||
| - **ResumeWorkflow**: Resumes a paused workflow instance | ||||
| - **PurgeWorkflow**: Removes all metadata related to a specific workflow instance | ||||
| - **TerminateWorkflow**: Terminates the workflow | ||||
| 
 | ||||
| ```go | ||||
| // Start workflow | ||||
| type StartWorkflowRequest struct { | ||||
| 	InstanceID        string // Optional instance identifier | ||||
| 	WorkflowComponent string | ||||
| 	WorkflowName      string | ||||
| 	Options           map[string]string // Optional metadata | ||||
| 	Input             any               // Optional input | ||||
| 	SendRawInput      bool              // Set to True in order to disable serialization on the input | ||||
| } | ||||
| 
 | ||||
| type StartWorkflowResponse struct { | ||||
| 	InstanceID string | ||||
| } | ||||
| 
 | ||||
| // Get the workflow status | ||||
| type GetWorkflowRequest struct { | ||||
| 	InstanceID        string | ||||
| 	WorkflowComponent string | ||||
| } | ||||
| 
 | ||||
| type GetWorkflowResponse struct { | ||||
| 	InstanceID    string | ||||
| 	WorkflowName  string | ||||
| 	CreatedAt     time.Time | ||||
| 	LastUpdatedAt time.Time | ||||
| 	RuntimeStatus string | ||||
| 	Properties    map[string]string | ||||
| } | ||||
| 
 | ||||
| // Purge workflow | ||||
| type PurgeWorkflowRequest struct { | ||||
| 	InstanceID        string | ||||
| 	WorkflowComponent string | ||||
| } | ||||
| 
 | ||||
| // Terminate workflow | ||||
| type TerminateWorkflowRequest struct { | ||||
| 	InstanceID        string | ||||
| 	WorkflowComponent string | ||||
| } | ||||
| 
 | ||||
| // Pause workflow | ||||
| type PauseWorkflowRequest struct { | ||||
| 	InstanceID        string | ||||
| 	WorkflowComponent string | ||||
| } | ||||
| 
 | ||||
| // Resume workflow | ||||
| type ResumeWorkflowRequest struct { | ||||
| 	InstanceID        string | ||||
| 	WorkflowComponent string | ||||
| } | ||||
| 
 | ||||
| // Raise an event for the running workflow | ||||
| type RaiseEventWorkflowRequest struct { | ||||
| 	InstanceID        string | ||||
| 	WorkflowComponent string | ||||
| 	EventName         string | ||||
| 	EventData         any | ||||
| 	SendRawData       bool // Set to True in order to disable serialization on the data | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
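| For illustration, starting a workflow with these request types might look like the following minimal sketch, assuming a `daprClient` and `ctx` created as in the authoring guide: | ||||
| 
 | ||||
| ```go | ||||
| respStart, err := daprClient.StartWorkflowBeta1(ctx, &client.StartWorkflowRequest{ | ||||
| 	InstanceID:        "a7a4168d-3a1c-41da-8a4f-e7f6d9c718d9", | ||||
| 	WorkflowComponent: "dapr", | ||||
| 	WorkflowName:      "TestWorkflow", | ||||
| 	Input:             1, | ||||
| }) | ||||
| if err != nil { | ||||
| 	log.Fatalf("failed to start workflow: %v", err) | ||||
| } | ||||
| fmt.Printf("workflow started with id: %s\n", respStart.InstanceID) | ||||
| ``` | ||||
| 
 | ||||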
| {{% /codetab %}} | ||||
| 
 | ||||
| <!--HTTP--> | ||||
| {{% codetab %}} | ||||
|  | @ -242,7 +391,9 @@ Learn more about these HTTP calls in the [workflow API reference guide]({{< ref | |||
| - [Try out the Workflow quickstart]({{< ref workflow-quickstart.md >}}) | ||||
| - Try out the full SDK examples: | ||||
|   - [Python example](https://github.com/dapr/python-sdk/blob/master/examples/demo_workflow/app.py) | ||||
|   - [JavaScript example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | ||||
|   - [.NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
|   - [Java example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|   - [Go example](https://github.com/dapr/go-sdk/tree/main/examples/workflow) | ||||
| 
 | ||||
| - [Workflow API reference]({{< ref workflow_api.md >}}) | ||||
|  |  | |||
|  | @ -15,6 +15,7 @@ Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-v | |||
| - The architecture of the Dapr Workflow engine | ||||
| - How the workflow engine interacts with application code | ||||
| - How the workflow engine fits into the overall Dapr architecture | ||||
| - How different workflow backends can work with the workflow engine | ||||
| 
 | ||||
| For more information on how to author Dapr Workflows in your application, see [How to: Author a workflow]({{< ref "workflow-overview.md" >}}). | ||||
| 
 | ||||
|  | @ -145,6 +146,8 @@ Different state store implementations may implicitly put restrictions on the typ | |||
| 
 | ||||
| Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow. | ||||
| 
 | ||||
| Workflow state can be purged from a state store, including all its history. Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances. | ||||
| 
 | ||||
| ## Workflow scalability | ||||
| 
 | ||||
| Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors. The placement service: | ||||
|  | @ -173,6 +176,24 @@ Also, the Dapr Workflow engine requires that all instances of each workflow app | |||
| 
 | ||||
| Workflows don't control the specifics of how load is distributed across the cluster. For example, if a workflow schedules 10 activity tasks to run in parallel, all 10 tasks may run on as many as 10 different compute nodes or as few as a single compute node. The actual scale behavior is determined by the actor placement service, which manages the distribution of the actors that represent each of the workflow's tasks. | ||||
| 
 | ||||
| ## Workflow backend | ||||
| 
 | ||||
| The workflow backend is responsible for orchestrating and preserving the state of workflows. At any given time, only one backend can be used. You can configure the workflow backend as a component, similar to any other component in Dapr. Configuration requires: | ||||
| 1. Specifying the type of workflow backend.  | ||||
| 1. Providing the configuration specific to that backend. | ||||
| 
 | ||||
| For instance, the following sample demonstrates how to define an actor backend component. Dapr Workflow currently supports only the actor backend, and you are not required to define an actor backend component to use it. | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: actorbackend | ||||
| spec: | ||||
|   type: workflowbackend.actor | ||||
|   version: v1 | ||||
| ``` | ||||
| 
 | ||||
| ## Workflow latency | ||||
| 
 | ||||
| In order to provide guarantees around durability and resiliency, Dapr Workflows frequently write to the state store and rely on reminders to drive execution. Dapr Workflows therefore may not be appropriate for latency-sensitive workloads. Expected sources of high latency include: | ||||
|  | @ -195,5 +216,7 @@ See the [Reminder usage and execution guarantees section]({{< ref "workflow-arch | |||
| - [Try out the Workflow quickstart]({{< ref workflow-quickstart.md >}}) | ||||
| - Try out the following examples:  | ||||
|    - [Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | ||||
|    - [JavaScript example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | ||||
|    - [.NET](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
|    - [Java](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|    - [Java](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|    - [Go example](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
|  |  | |||
|  | @ -63,6 +63,8 @@ You can use the following two techniques to write workflows that may need to sch | |||
| 
 | ||||
| 1. **Use the _continue-as-new_ API**:   | ||||
|     Each workflow SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows", like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small. | ||||
|     | ||||
|     > The _continue-as-new_ API truncates the existing history, replacing it with a new history. | ||||
| 
 | ||||
| 1. **Use child workflows**:   | ||||
|     Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it's scheduled by a parent workflow. Child workflows have: | ||||
|  | @ -97,9 +99,7 @@ Child workflows have many benefits: | |||
| 
 | ||||
| The return value of a child workflow is its output. If a child workflow fails with an exception, then that exception is surfaced to the parent workflow, just like it is when an activity task fails with an exception. Child workflows also support automatic retry policies. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Because child workflows are independent of their parents, terminating a parent workflow does not affect any child workflows. You must terminate each child workflow independently using its instance ID. | ||||
| {{% /alert %}} | ||||
| Terminating a parent workflow terminates all of the child workflows created by the workflow instance. See [the terminate workflow API]({{< ref "workflow_api.md#terminate-workflow-request" >}}) for more information. | ||||
| 
 | ||||
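| For illustration, a parent workflow in the Go SDK might schedule a child workflow as in the following sketch; the helper names (`CallChildWorkflow`, `ChildWorkflowInput`, `ChildWorkflowInstanceID`) are assumptions to verify against your SDK version: | ||||
| 
 | ||||
| ```go | ||||
| // Hedged sketch: the helper names below are assumptions, not confirmed API. | ||||
| func ParentWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	var output string | ||||
| 	// A failure in the child surfaces here, like a failed activity task. | ||||
| 	if err := ctx.CallChildWorkflow(ChildWorkflow, | ||||
| 		workflow.ChildWorkflowInput(input), | ||||
| 		workflow.ChildWorkflowInstanceID("child-1"), // hypothetical ID | ||||
| 	).Await(&output); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	return output, nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||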
| ## Durable timers | ||||
| 
 | ||||
|  | @ -149,6 +149,24 @@ Workflows can also wait for multiple external event signals of the same name, in | |||
| 
 | ||||
| Learn more about [external system interaction.]({{< ref "workflow-patterns.md#external-system-interaction" >}}) | ||||
| 
 | ||||
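| For example, in the Go SDK a workflow can block on a named event with a timeout, as in this minimal sketch (imports as in the earlier authoring examples): | ||||
| 
 | ||||
| ```go | ||||
| func EventWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var payload string | ||||
| 	// Block until "testEvent" is raised, or fail after 60 seconds. | ||||
| 	if err := ctx.WaitForExternalEvent("testEvent", time.Second*60).Await(&payload); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	return payload, nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||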
| ## Workflow backend | ||||
| 
 | ||||
| Dapr Workflow relies on the Durable Task Framework for Go (a.k.a. [durabletask-go](https://github.com/microsoft/durabletask-go)) as the core engine for executing workflows. This engine is designed to support multiple backend implementations. For example, the [durabletask-go](https://github.com/microsoft/durabletask-go) repo includes a SQLite implementation and the Dapr repo includes an Actors implementation.  | ||||
| 
 | ||||
| By default, Dapr Workflow supports the Actors backend, which is stable and scalable. However, you can choose a different backend supported in Dapr Workflow. For example, [SQLite](https://github.com/microsoft/durabletask-go/tree/main/backend/sqlite) (planned for a future release) could be an option as a backend for local development and testing. | ||||
| 
 | ||||
| The backend implementation is largely decoupled from the core workflow engine and from the programming model that you use. The backend primarily impacts: | ||||
| - How workflow state is stored  | ||||
| - How workflow execution is coordinated across replicas | ||||
| 
 | ||||
| In that sense, it's similar to Dapr's state store abstraction, except that it's designed specifically for workflows. All APIs and programming model features are the same, regardless of which backend is used. | ||||
| 
 | ||||
| ## Purging | ||||
| 
 | ||||
| Workflow state can be purged from a state store, removing all of its history and all metadata related to a specific workflow instance. The purge capability is used for workflows that have run to a `COMPLETED`, `FAILED`, or `TERMINATED` state. | ||||
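| 
 | ||||
| For example, here's a minimal Go sketch of purging a terminal-state instance from a client application. The `PurgeWorkflowBeta1` method and its request type are assumptions, named after the Beta1 workflow management APIs shown elsewhere in these docs: | ||||
| 
 | ||||
| ```go | ||||
| // Sketch: purge the state and history of an instance that has reached a | ||||
| // COMPLETED, FAILED, or TERMINATED state. | ||||
| // PurgeWorkflowBeta1 and PurgeWorkflowRequest are assumed here. | ||||
| daprClient, err := client.NewClient() | ||||
| if err != nil { | ||||
| 	log.Fatalf("failed to initialize the client") | ||||
| } | ||||
| err = daprClient.PurgeWorkflowBeta1(context.Background(), &client.PurgeWorkflowRequest{ | ||||
| 	InstanceID:        "instance_id", | ||||
| 	WorkflowComponent: "dapr", | ||||
| }) | ||||
| if err != nil { | ||||
| 	log.Fatalf("failed to purge the workflow instance") | ||||
| } | ||||
| ``` | ||||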
| 
 | ||||
| Learn more in [the workflow API reference guide]({{< ref workflow_api.md >}}). | ||||
| 
 | ||||
| ## Limitations | ||||
| 
 | ||||
| ### Workflow determinism and code constraints | ||||
|  | @ -162,7 +180,7 @@ APIs that generate random numbers, random UUIDs, or the current date are _non-de | |||
| 
 | ||||
| For example, instead of this: | ||||
| 
 | ||||
| {{< tabs ".NET" Java >}} | ||||
| {{< tabs ".NET" Java JavaScript Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -186,11 +204,31 @@ string randomString = GetRandomString(); | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```javascript | ||||
| // DON'T DO THIS! | ||||
| const currentTime = new Date(); | ||||
| const newIdentifier = uuidv4(); | ||||
| const randomString = getRandomString(); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```go | ||||
| // DON'T DO THIS! | ||||
| currentTime := time.Now() | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| Do this: | ||||
| 
 | ||||
| {{< tabs ".NET" Java >}} | ||||
| {{< tabs ".NET" Java JavaScript Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -214,6 +252,24 @@ String randomString = context.callActivity(GetRandomString.class.getName(), Stri | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```javascript | ||||
| // Do this!! | ||||
| const currentTime = context.getCurrentUtcDateTime(); | ||||
| const randomString = yield context.callActivity(getRandomString); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```go | ||||
| // Do this!! | ||||
| currentTime := ctx.CurrentUTCDateTime() | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
|  | @ -224,7 +280,7 @@ Instead, workflows should interact with external state _indirectly_ using workfl | |||
| 
 | ||||
| For example, instead of this: | ||||
| 
 | ||||
| {{< tabs ".NET" Java >}} | ||||
| {{< tabs ".NET" Java JavaScript Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -247,11 +303,40 @@ HttpResponse<String> response = HttpClient.newBuilder().build().send(request, Ht | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```javascript | ||||
| // DON'T DO THIS! | ||||
| // Accessing an Environment Variable (Node.js) | ||||
| const configuration = process.env.MY_CONFIGURATION; | ||||
| 
 | ||||
| fetch('https://postman-echo.com/get') | ||||
|   .then(response => response.text()) | ||||
|   .then(data => { | ||||
|     console.log(data); | ||||
|   }) | ||||
|   .catch(error => { | ||||
|     console.error('Error:', error); | ||||
|   }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```go | ||||
| // DON'T DO THIS! | ||||
| resp, err := http.Get("http://example.com/api/data") | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| Do this: | ||||
| 
 | ||||
| {{< tabs ".NET" Java >}} | ||||
| {{< tabs ".NET" Java JavaScript Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -273,6 +358,26 @@ String data = ctx.callActivity(MakeHttpCall.class, "https://example.com/api/data | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```javascript | ||||
| // Do this!! | ||||
| const configuration = workflowInput.getConfiguration(); // imaginary workflow input argument | ||||
| const data = yield ctx.callActivity(makeHttpCall, "https://example.com/api/data"); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```go | ||||
| // Do this!! | ||||
| err := ctx.CallActivity(MakeHttpCallActivity, workflow.ActivityInput("https://example.com/api/data")).Await(&output) | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
|  | @ -285,7 +390,7 @@ Failure to follow this rule could result in undefined behavior. Any background p | |||
| 
 | ||||
| For example, instead of this: | ||||
| 
 | ||||
| {{< tabs ".NET" Java >}} | ||||
| {{< tabs ".NET" Java JavaScript Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -308,11 +413,31 @@ ctx.createTimer(Duration.ofSeconds(5)).await(); | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Don't declare a JavaScript workflow as an `async` function. The Node.js runtime doesn't guarantee that asynchronous functions are deterministic. | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```go | ||||
| // DON'T DO THIS! | ||||
| go func() { | ||||
|   err := ctx.CallActivity(DoSomething).Await(nil) | ||||
| }() | ||||
| err := ctx.CreateTimer(time.Second).Await(nil) | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| 
 | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| Do this: | ||||
| 
 | ||||
| {{< tabs ".NET" Java >}} | ||||
| {{< tabs ".NET" Java JavaScript Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
|  | @ -334,6 +459,22 @@ ctx.createTimer(Duration.ofSeconds(5)).await(); | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Since the Node.js runtime doesn't guarantee that asynchronous functions are deterministic, always declare JavaScript workflows as synchronous generator functions. | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| ```go | ||||
| // Do this! | ||||
| task := ctx.CallActivity(DoSomething) | ||||
| if err := task.Await(nil); err != nil { | ||||
| 	return nil, err | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| 
 | ||||
|  | @ -363,4 +504,9 @@ To work around these constraints: | |||
| - [Try out Dapr Workflow using the quickstart]({{< ref workflow-quickstart.md >}}) | ||||
| - [Workflow overview]({{< ref workflow-overview.md >}}) | ||||
| - [Workflow API reference]({{< ref workflow_api.md >}}) | ||||
| - [Try out the .NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
| - Try out the following examples:  | ||||
|    - [Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | ||||
|    - [JavaScript](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | ||||
|    - [.NET](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
|    - [Java](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|    - [Go](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
|  |  | |||
|  | @ -7,7 +7,7 @@ description: "Overview of Dapr Workflow" | |||
| --- | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Dapr Workflow is currently in beta. [See known limitations for {{% dapr-latest-version cli="true" %}}]({{< ref "#limitations" >}}). | ||||
| Dapr Workflow is currently in beta. [See known limitations]({{< ref "#limitations" >}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way. Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices. Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings. | ||||
|  | @ -41,7 +41,7 @@ With Dapr Workflow, you can write activities and then orchestrate those activiti | |||
| 
 | ||||
| ### Child workflows | ||||
| 
 | ||||
| In addition to activities, you can write workflows to schedule other workflows as child workflows. A child workflow is independent of the parent workflow that started it and support automatic retry policies. | ||||
| In addition to activities, you can write workflows to schedule other workflows as child workflows. A child workflow has its own instance ID, history, and status that are independent of the parent workflow that started it, except that terminating the parent workflow terminates all of the child workflows it created. Child workflows also support automatic retry policies. | ||||
| 
 | ||||
| [Learn more about child workflows.]({{< ref "workflow-features-concepts.md#child-workflows" >}}) | ||||
| 
 | ||||
|  | @ -80,8 +80,10 @@ You can use the following SDKs to author a workflow. | |||
| | Language stack | Package | | ||||
| | - | - | | ||||
| | Python | [dapr-ext-workflow](https://github.com/dapr/python-sdk/tree/master/ext/dapr-ext-workflow) | | ||||
| | JavaScript | [DaprWorkflowClient](https://github.com/dapr/js-sdk/blob/main/src/workflow/client/DaprWorkflowClient.ts) | | ||||
| | .NET | [Dapr.Workflow](https://www.nuget.org/profiles/dapr.io) | | ||||
| | Java | [io.dapr.workflows](https://dapr.github.io/java-sdk/io/dapr/workflows/package-summary.html) | | ||||
| | Go | [workflow](https://github.com/dapr/go-sdk/tree/main/client/workflow.go) | | ||||
| 
 | ||||
| ## Try out workflows | ||||
| 
 | ||||
|  | @ -93,9 +95,10 @@ Want to put workflows to the test? Walk through the following quickstart and tut | |||
| | ------------------- | ----------- | | ||||
| | [Workflow quickstart]({{< ref workflow-quickstart.md >}}) | Run a workflow application with four workflow activities to see Dapr Workflow in action  | | ||||
| | [Workflow Python SDK example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | Learn how to create a Dapr Workflow and invoke it using the Python `DaprClient` package. | | ||||
| | [Workflow JavaScript SDK example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | Learn how to create a Dapr Workflow and invoke it using the JavaScript SDK. | | ||||
| | [Workflow .NET SDK example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | Learn how to create a Dapr Workflow and invoke it using ASP.NET Core web APIs. | | ||||
| | [Workflow Java SDK example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | Learn how to create a Dapr Workflow and invoke it using the Java `io.dapr.workflows` package. | | ||||
| | [Workflow Go SDK example](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | Learn how to create a Dapr Workflow and invoke it using the Go `workflow` package. | | ||||
| 
 | ||||
| ### Start using workflows directly in your app | ||||
| 
 | ||||
|  | @ -103,11 +106,8 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi | |||
| 
 | ||||
| ## Limitations | ||||
| 
 | ||||
| With Dapr Workflow in beta stage comes the following limitation(s): | ||||
| 
 | ||||
| - **State stores:** For the {{% dapr-latest-version cli="true" %}} beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. | ||||
| 
 | ||||
| - **Horizontal scaling:** For the {{% dapr-latest-version cli="true" %}} beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2. | ||||
| - **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. | ||||
| - **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2. | ||||
| 
 | ||||
| ## Watch the demo | ||||
| 
 | ||||
|  | @ -123,6 +123,8 @@ Watch [this video for an overview on Dapr Workflow](https://youtu.be/s1p9MNl4VGo | |||
| 
 | ||||
| - [Workflow API reference]({{< ref workflow_api.md >}}) | ||||
| - Try out the full SDK examples: | ||||
|   - [Python example](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | ||||
|   - [JavaScript example](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | ||||
|   - [.NET example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
|   - [Java example](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|   - [Go example](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
|  |  | |||
|  | @ -25,7 +25,7 @@ While the pattern is simple, there are many complexities hidden in the implement | |||
| 
 | ||||
| Dapr Workflow solves these complexities by allowing you to implement the task chaining pattern concisely as a simple function in the programming language of your choice, as shown in the following example. | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--python--> | ||||
|  | @ -72,6 +72,80 @@ def error_handler(ctx, error): | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--javascript--> | ||||
| 
 | ||||
| ```javascript | ||||
| import { DaprWorkflowClient, WorkflowActivityContext, WorkflowContext, WorkflowRuntime, TWorkflow } from "@dapr/dapr"; | ||||
| 
 | ||||
| async function start() { | ||||
|   // Update the gRPC client and worker to use a local address and port | ||||
|   const daprHost = "localhost"; | ||||
|   const daprPort = "50001"; | ||||
|   const workflowClient = new DaprWorkflowClient({ | ||||
|     daprHost, | ||||
|     daprPort, | ||||
|   }); | ||||
|   const workflowRuntime = new WorkflowRuntime({ | ||||
|     daprHost, | ||||
|     daprPort, | ||||
|   }); | ||||
| 
 | ||||
|   const hello = async (_: WorkflowActivityContext, name: string) => { | ||||
|     return `Hello ${name}!`; | ||||
|   }; | ||||
| 
 | ||||
|   const sequence: TWorkflow = async function* (ctx: WorkflowContext): any { | ||||
|     const cities: string[] = []; | ||||
| 
 | ||||
|     const result1 = yield ctx.callActivity(hello, "Tokyo"); | ||||
|     cities.push(result1); | ||||
|     const result2 = yield ctx.callActivity(hello, "Seattle"); | ||||
|     cities.push(result2); | ||||
|     const result3 = yield ctx.callActivity(hello, "London"); | ||||
|     cities.push(result3); | ||||
| 
 | ||||
|     return cities; | ||||
|   }; | ||||
| 
 | ||||
|   workflowRuntime.registerWorkflow(sequence).registerActivity(hello); | ||||
| 
 | ||||
|   // Wrap the worker startup in a try-catch block to handle any errors during startup | ||||
|   try { | ||||
|     await workflowRuntime.start(); | ||||
|     console.log("Workflow runtime started successfully"); | ||||
|   } catch (error) { | ||||
|     console.error("Error starting workflow runtime:", error); | ||||
|   } | ||||
| 
 | ||||
|   // Schedule a new orchestration | ||||
|   try { | ||||
|     const id = await workflowClient.scheduleNewWorkflow(sequence); | ||||
|     console.log(`Orchestration scheduled with ID: ${id}`); | ||||
| 
 | ||||
|     // Wait for orchestration completion | ||||
|     const state = await workflowClient.waitForWorkflowCompletion(id, undefined, 30); | ||||
| 
 | ||||
|     console.log(`Orchestration completed! Result: ${state?.serializedOutput}`); | ||||
|   } catch (error) { | ||||
|     console.error("Error scheduling or waiting for orchestration:", error); | ||||
|   } | ||||
| 
 | ||||
|   await workflowRuntime.stop(); | ||||
|   await workflowClient.stop(); | ||||
| 
 | ||||
|   // stop the Dapr sidecar | ||||
|   process.exit(0); | ||||
| } | ||||
| 
 | ||||
| start().catch((e) => { | ||||
|   console.error(e); | ||||
|   process.exit(1); | ||||
| }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--dotnet--> | ||||
| 
 | ||||
|  | @ -160,6 +234,57 @@ public class ChainWorkflow extends Workflow { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--go--> | ||||
| 
 | ||||
| ```go | ||||
| func TaskChainWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	var result1 int | ||||
| 	if err := ctx.CallActivity(Step1, workflow.ActivityInput(input)).Await(&result1); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	var result2 int | ||||
| 	if err := ctx.CallActivity(Step2, workflow.ActivityInput(result1)).Await(&result2); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	var result3 int | ||||
| 	if err := ctx.CallActivity(Step3, workflow.ActivityInput(result2)).Await(&result3); err != nil { | ||||
| 		return nil, err | ||||
| 	} | ||||
| 	return []int{result1, result2, result3}, nil | ||||
| } | ||||
| func Step1(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	fmt.Printf("Step 1: Received input: %s", input) | ||||
| 	return input + 1, nil | ||||
| } | ||||
| func Step2(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	fmt.Printf("Step 2: Received input: %s", input) | ||||
| 	return input * 2, nil | ||||
| } | ||||
| func Step3(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	fmt.Printf("Step 3: Received input: %s", input) | ||||
| 	return int(math.Pow(float64(input), 2)), nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| As you can see, the workflow is expressed as a simple series of statements in the programming language of your choice. This allows any engineer in the organization to quickly understand the end-to-end flow without necessarily needing to understand the end-to-end system architecture. | ||||
|  | @ -186,7 +311,7 @@ In addition to the challenges mentioned in [the previous pattern]({{< ref "workf | |||
| 
 | ||||
| Dapr Workflows provides a way to express the fan-out/fan-in pattern as a simple function, as shown in the following example: | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--python--> | ||||
|  | @ -228,6 +353,114 @@ def process_results(ctx, final_result: int): | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--javascript--> | ||||
| 
 | ||||
| ```javascript | ||||
| import { | ||||
|   Task, | ||||
|   DaprWorkflowClient, | ||||
|   WorkflowActivityContext, | ||||
|   WorkflowContext, | ||||
|   WorkflowRuntime, | ||||
|   TWorkflow, | ||||
| } from "@dapr/dapr"; | ||||
| 
 | ||||
| // Wrap the entire code in an immediately-invoked async function | ||||
| async function start() { | ||||
|   // Update the gRPC client and worker to use a local address and port | ||||
|   const daprHost = "localhost"; | ||||
|   const daprPort = "50001"; | ||||
|   const workflowClient = new DaprWorkflowClient({ | ||||
|     daprHost, | ||||
|     daprPort, | ||||
|   }); | ||||
|   const workflowRuntime = new WorkflowRuntime({ | ||||
|     daprHost, | ||||
|     daprPort, | ||||
|   }); | ||||
| 
 | ||||
|   function getRandomInt(min: number, max: number): number { | ||||
|     return Math.floor(Math.random() * (max - min + 1)) + min; | ||||
|   } | ||||
| 
 | ||||
|   async function getWorkItemsActivity(_: WorkflowActivityContext): Promise<string[]> { | ||||
|     const count: number = getRandomInt(2, 10); | ||||
|     console.log(`generating ${count} work items...`); | ||||
| 
 | ||||
|     const workItems: string[] = Array.from({ length: count }, (_, i) => `work item ${i}`); | ||||
|     return workItems; | ||||
|   } | ||||
| 
 | ||||
|   function sleep(ms: number): Promise<void> { | ||||
|     return new Promise((resolve) => setTimeout(resolve, ms)); | ||||
|   } | ||||
| 
 | ||||
|   async function processWorkItemActivity(context: WorkflowActivityContext, item: string): Promise<number> { | ||||
|     console.log(`processing work item: ${item}`); | ||||
| 
 | ||||
|     // Simulate some work that takes a variable amount of time | ||||
|     const sleepTime = Math.random() * 5000; | ||||
|     await sleep(sleepTime); | ||||
| 
 | ||||
|     // Return a result for the given work item, which is also a random number in this case | ||||
|     // For more information about random numbers in workflow please check | ||||
|     // https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-code-constraints?tabs=csharp#random-numbers | ||||
|     return Math.floor(Math.random() * 11); | ||||
|   } | ||||
| 
 | ||||
|   const workflow: TWorkflow = async function* (ctx: WorkflowContext): any { | ||||
|     const tasks: Task<any>[] = []; | ||||
|     const workItems = yield ctx.callActivity(getWorkItemsActivity); | ||||
|     for (const workItem of workItems) { | ||||
|       tasks.push(ctx.callActivity(processWorkItemActivity, workItem)); | ||||
|     } | ||||
|     const results: number[] = yield ctx.whenAll(tasks); | ||||
|     const sum: number = results.reduce((accumulator, currentValue) => accumulator + currentValue, 0); | ||||
|     return sum; | ||||
|   }; | ||||
| 
 | ||||
|   workflowRuntime.registerWorkflow(workflow); | ||||
|   workflowRuntime.registerActivity(getWorkItemsActivity); | ||||
|   workflowRuntime.registerActivity(processWorkItemActivity); | ||||
| 
 | ||||
|   // Wrap the worker startup in a try-catch block to handle any errors during startup | ||||
|   try { | ||||
|     await workflowRuntime.start(); | ||||
|     console.log("Worker started successfully"); | ||||
|   } catch (error) { | ||||
|     console.error("Error starting worker:", error); | ||||
|   } | ||||
| 
 | ||||
|   // Schedule a new orchestration | ||||
|   try { | ||||
|     const id = await workflowClient.scheduleNewWorkflow(workflow); | ||||
|     console.log(`Orchestration scheduled with ID: ${id}`); | ||||
| 
 | ||||
|     // Wait for orchestration completion | ||||
|     const state = await workflowClient.waitForWorkflowCompletion(id, undefined, 30); | ||||
| 
 | ||||
|     console.log(`Orchestration completed! Result: ${state?.serializedOutput}`); | ||||
|   } catch (error) { | ||||
|     console.error("Error scheduling or waiting for orchestration:", error); | ||||
|   } | ||||
| 
 | ||||
|   // stop worker and client | ||||
|   await workflowRuntime.stop(); | ||||
|   await workflowClient.stop(); | ||||
| 
 | ||||
|   // stop the Dapr sidecar | ||||
|   process.exit(0); | ||||
| } | ||||
| 
 | ||||
| start().catch((e) => { | ||||
|   console.error(e); | ||||
|   process.exit(1); | ||||
| }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--dotnet--> | ||||
| 
 | ||||
|  | @ -279,6 +512,72 @@ public class FaninoutWorkflow extends Workflow { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--go--> | ||||
| 
 | ||||
| ```go | ||||
| func BatchProcessingWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var input int | ||||
| 	if err := ctx.GetInput(&input); err != nil { | ||||
| 		return 0, err | ||||
| 	} | ||||
| 	var workBatch []int | ||||
| 	if err := ctx.CallActivity(GetWorkBatch, workflow.ActivityInput(input)).Await(&workBatch); err != nil { | ||||
| 		return 0, err | ||||
| 	} | ||||
| 	parallelTasks := workflow.NewTaskSlice(len(workBatch)) | ||||
| 	for i, workItem := range workBatch { | ||||
| 		parallelTasks[i] = ctx.CallActivity(ProcessWorkItem, workflow.ActivityInput(workItem)) | ||||
| 	} | ||||
| 	var outputs int | ||||
| 	for _, task := range parallelTasks { | ||||
| 		var output int | ||||
| 		err := task.Await(&output) | ||||
| 		if err == nil { | ||||
| 			outputs += output | ||||
| 		} else { | ||||
| 			return 0, err | ||||
| 		} | ||||
| 	} | ||||
| 	if err := ctx.CallActivity(ProcessResults, workflow.ActivityInput(outputs)).Await(nil); err != nil { | ||||
| 		return 0, err | ||||
| 	} | ||||
| 	return 0, nil | ||||
| } | ||||
| func GetWorkBatch(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var batchSize int | ||||
| 	if err := ctx.GetInput(&batchSize); err != nil { | ||||
| 		return 0, err | ||||
| 	} | ||||
| 	batch := make([]int, batchSize) | ||||
| 	for i := 0; i < batchSize; i++ { | ||||
| 		batch[i] = i | ||||
| 	} | ||||
| 	return batch, nil | ||||
| } | ||||
| func ProcessWorkItem(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var workItem int | ||||
| 	if err := ctx.GetInput(&workItem); err != nil { | ||||
| 		return 0, err | ||||
| 	} | ||||
| 	fmt.Printf("Processing work item: %d\n", workItem) | ||||
| 	time.Sleep(time.Second * 5) | ||||
| 	result := workItem * 2 | ||||
| 	fmt.Printf("Work item %d processed. Result: %d\n", workItem, result) | ||||
| 	return result, nil | ||||
| } | ||||
| func ProcessResults(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var finalResult int | ||||
| 	if err := ctx.GetInput(&finalResult); err != nil { | ||||
| 		return 0, err | ||||
| 	} | ||||
| 	fmt.Printf("Final result: %d\n", finalResult) | ||||
| 	return finalResult, nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| The key takeaways from this example are: | ||||
|  | @ -379,7 +678,7 @@ Depending on the business needs, there may be a single monitor or there may be m | |||
| 
 | ||||
| Dapr Workflow supports this pattern natively by allowing you to implement _eternal workflows_. Rather than writing infinite while-loops ([which is an anti-pattern]({{< ref "workflow-features-concepts.md#infinite-loops-and-eternal-workflows" >}})), Dapr Workflow exposes a _continue-as-new_ API that workflow authors can use to restart a workflow function from the beginning with a new input. | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--python--> | ||||
|  | @ -428,6 +727,34 @@ def send_alert(ctx, message: str): | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--javascript--> | ||||
| 
 | ||||
| ```javascript | ||||
| const statusMonitorWorkflow: TWorkflow = async function* (ctx: WorkflowContext): any { | ||||
|     let duration; | ||||
|     const status = yield ctx.callActivity(checkStatusActivity); | ||||
|     if (status === "healthy") { | ||||
|       // Check less frequently when in a healthy state | ||||
|       // set duration to 1 hour | ||||
|       duration = 60 * 60; | ||||
|     } else { | ||||
|       yield ctx.callActivity(alertActivity, "job unhealthy"); | ||||
|       // Check more frequently when in an unhealthy state | ||||
|       // set duration to 5 minutes | ||||
|       duration = 5 * 60; | ||||
|     } | ||||
| 
 | ||||
|     // Put the workflow to sleep until the determined time | ||||
|     yield ctx.createTimer(duration); | ||||
| 
 | ||||
|     // Restart from the beginning with the updated state | ||||
|     ctx.continueAsNew(); | ||||
|   }; | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--dotnet--> | ||||
| 
 | ||||
|  | @ -496,9 +823,8 @@ public class MonitorWorkflow extends Workflow { | |||
|       } | ||||
| 
 | ||||
|       // Put the workflow to sleep until the determined time | ||||
|       // Note: ctx.createTimer() method is not supported in the Java SDK yet | ||||
|       try { | ||||
|         TimeUnit.SECONDS.sleep(nextSleepInterval.getSeconds()); | ||||
|         ctx.createTimer(nextSleepInterval); | ||||
|       } catch (InterruptedException e) { | ||||
|         throw new RuntimeException(e); | ||||
|       } | ||||
|  | @ -512,6 +838,59 @@ public class MonitorWorkflow extends Workflow { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--go--> | ||||
| 
 | ||||
| ```go | ||||
| type JobStatus struct { | ||||
| 	JobID     string `json:"job_id"` | ||||
| 	IsHealthy bool   `json:"is_healthy"` | ||||
| } | ||||
| func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var sleepInterval time.Duration | ||||
| 	var job JobStatus | ||||
| 	if err := ctx.GetInput(&job); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	var status string | ||||
| 	if err := ctx.CallActivity(CheckStatus, workflow.ActivityInput(job)).Await(&status); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	if status == "healthy" { | ||||
| 		job.IsHealthy = true | ||||
| 		sleepInterval = time.Second * 60 | ||||
| 	} else { | ||||
| 		if job.IsHealthy { | ||||
| 			job.IsHealthy = false | ||||
| 			err := ctx.CallActivity(SendAlert, workflow.ActivityInput(fmt.Sprintf("Job '%s' is unhealthy!", job.JobID))).Await(nil) | ||||
| 			if err != nil { | ||||
| 				return "", err | ||||
| 			} | ||||
| 		} | ||||
| 		sleepInterval = time.Second * 5 | ||||
| 	} | ||||
| 	if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	ctx.ContinueAsNew(job, false) | ||||
| 	return "", nil | ||||
| } | ||||
| func CheckStatus(ctx workflow.ActivityContext) (any, error) { | ||||
| 	statuses := []string{"healthy", "unhealthy"} | ||||
| 	return statuses[rand.Intn(len(statuses))], nil | ||||
| } | ||||
| func SendAlert(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var message string | ||||
| 	if err := ctx.GetInput(&message); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	fmt.Printf("*** Alert: %s", message) | ||||
| 	return "", nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| A workflow implementing the monitor pattern can loop forever or it can terminate itself gracefully by not calling _continue-as-new_. | ||||
|  | @ -540,7 +919,7 @@ The following diagram illustrates this flow. | |||
| 
 | ||||
| The following example code shows how this pattern can be implemented using Dapr Workflow. | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--python--> | ||||
|  | @ -601,6 +980,146 @@ def place_order(_, order: Order) -> None: | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--javascript--> | ||||
| 
 | ||||
| ```javascript | ||||
| import { | ||||
|   Task, | ||||
|   DaprWorkflowClient, | ||||
|   WorkflowActivityContext, | ||||
|   WorkflowContext, | ||||
|   WorkflowRuntime, | ||||
|   TWorkflow, | ||||
| } from "@dapr/dapr"; | ||||
| import * as readlineSync from "readline-sync"; | ||||
| 
 | ||||
| // Wrap the entire code in an immediately-invoked async function | ||||
| async function start() { | ||||
|   class Order { | ||||
|     cost: number; | ||||
|     product: string; | ||||
|     quantity: number; | ||||
|     constructor(cost: number, product: string, quantity: number) { | ||||
|       this.cost = cost; | ||||
|       this.product = product; | ||||
|       this.quantity = quantity; | ||||
|     } | ||||
|   } | ||||
| 
 | ||||
|   function sleep(ms: number): Promise<void> { | ||||
|     return new Promise((resolve) => setTimeout(resolve, ms)); | ||||
|   } | ||||
| 
 | ||||
|   // Update the gRPC client and worker to use a local address and port | ||||
|   const daprHost = "localhost"; | ||||
|   const daprPort = "50001"; | ||||
|   const workflowClient = new DaprWorkflowClient({ | ||||
|     daprHost, | ||||
|     daprPort, | ||||
|   }); | ||||
|   const workflowRuntime = new WorkflowRuntime({ | ||||
|     daprHost, | ||||
|     daprPort, | ||||
|   }); | ||||
| 
 | ||||
|   // Activity function that sends an approval request to the manager | ||||
|   const sendApprovalRequest = async (_: WorkflowActivityContext, order: Order) => { | ||||
|     // Simulate some work that takes an amount of time | ||||
|     await sleep(3000); | ||||
|     console.log(`Sending approval request for order: ${order.product}`); | ||||
|   }; | ||||
| 
 | ||||
|   // Activity function that places an order | ||||
|   const placeOrder = async (_: WorkflowActivityContext, order: Order) => { | ||||
|     console.log(`Placing order: ${order.product}`); | ||||
|   }; | ||||
| 
 | ||||
|   // Orchestrator function that represents a purchase order workflow | ||||
|   const purchaseOrderWorkflow: TWorkflow = async function* (ctx: WorkflowContext, order: Order): any { | ||||
|     // Orders under $1000 are auto-approved | ||||
|     if (order.cost < 1000) { | ||||
|       return "Auto-approved"; | ||||
|     } | ||||
| 
 | ||||
|     // Orders of $1000 or more require manager approval | ||||
|     yield ctx.callActivity(sendApprovalRequest, order); | ||||
| 
 | ||||
|     // Approvals must be received within 24 hours or they will be cancelled. | ||||
|     const tasks: Task<any>[] = []; | ||||
|     const approvalEvent = ctx.waitForExternalEvent("approval_received"); | ||||
|     const timeoutEvent = ctx.createTimer(24 * 60 * 60); | ||||
|     tasks.push(approvalEvent); | ||||
|     tasks.push(timeoutEvent); | ||||
|     const winner = yield ctx.whenAny(tasks); | ||||
| 
 | ||||
|     if (winner == timeoutEvent) { | ||||
|       return "Cancelled"; | ||||
|     } | ||||
| 
 | ||||
|     yield ctx.callActivity(placeOrder, order); | ||||
|     const approvalDetails = approvalEvent.getResult(); | ||||
|     return `Approved by ${approvalDetails.approver}`; | ||||
|   }; | ||||
| 
 | ||||
|   workflowRuntime | ||||
|     .registerWorkflow(purchaseOrderWorkflow) | ||||
|     .registerActivity(sendApprovalRequest) | ||||
|     .registerActivity(placeOrder); | ||||
| 
 | ||||
|   // Wrap the worker startup in a try-catch block to handle any errors during startup | ||||
|   try { | ||||
|     await workflowRuntime.start(); | ||||
|     console.log("Worker started successfully"); | ||||
|   } catch (error) { | ||||
|     console.error("Error starting worker:", error); | ||||
|   } | ||||
| 
 | ||||
|   // Schedule a new orchestration | ||||
|   try { | ||||
|     const cost = readlineSync.questionInt("Cost of your order:"); | ||||
|     const approver = readlineSync.question("Approver of your order:"); | ||||
|     const timeout = readlineSync.questionInt("Timeout for your order in seconds:"); | ||||
|     const order = new Order(cost, "MyProduct", 1); | ||||
|     const id = await workflowClient.scheduleNewWorkflow(purchaseOrderWorkflow, order); | ||||
|     console.log(`Orchestration scheduled with ID: ${id}`); | ||||
| 
 | ||||
|     // prompt for approval asynchronously | ||||
|     promptForApproval(approver, workflowClient, id); | ||||
| 
 | ||||
|     // Wait for orchestration completion | ||||
|     const state = await workflowClient.waitForWorkflowCompletion(id, undefined, timeout + 2); | ||||
| 
 | ||||
|     console.log(`Orchestration completed! Result: ${state?.serializedOutput}`); | ||||
|   } catch (error) { | ||||
|     console.error("Error scheduling or waiting for orchestration:", error); | ||||
|   } | ||||
| 
 | ||||
|   // stop worker and client | ||||
|   await workflowRuntime.stop(); | ||||
|   await workflowClient.stop(); | ||||
| 
 | ||||
|   // stop the Dapr sidecar | ||||
|   process.exit(0); | ||||
| } | ||||
| 
 | ||||
| async function promptForApproval(approver: string, workflowClient: DaprWorkflowClient, id: string) { | ||||
|   if (readlineSync.keyInYN("Press [Y] to approve the order... Y/yes, N/no")) { | ||||
|     const approvalEvent = { approver: approver }; | ||||
|     await workflowClient.raiseEvent(id, "approval_received", approvalEvent); | ||||
|   } else { | ||||
|     return "Order rejected"; | ||||
|   } | ||||
| } | ||||
| 
 | ||||
| start().catch((e) => { | ||||
|   console.error(e); | ||||
|   process.exit(1); | ||||
| }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--dotnet--> | ||||
| 
 | ||||
|  | @ -682,11 +1201,68 @@ public class ExternalSystemInteractionWorkflow extends Workflow { | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--go--> | ||||
| 
 | ||||
| ```go | ||||
| type Order struct { | ||||
| 	Cost     float64 `json:"cost"` | ||||
| 	Product  string  `json:"product"` | ||||
| 	Quantity int     `json:"quantity"` | ||||
| } | ||||
| type Approval struct { | ||||
| 	Approver string `json:"approver"` | ||||
| } | ||||
| func PurchaseOrderWorkflow(ctx *workflow.WorkflowContext) (any, error) { | ||||
| 	var order Order | ||||
| 	if err := ctx.GetInput(&order); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	// Orders under $1000 are auto-approved | ||||
| 	if order.Cost < 1000 { | ||||
| 		return "Auto-approved", nil | ||||
| 	} | ||||
| 	// Orders of $1000 or more require manager approval | ||||
| 	if err := ctx.CallActivity(SendApprovalRequest, workflow.ActivityInput(order)).Await(nil); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	// Approvals must be received within 24 hours or they will be cancelled | ||||
| 	var approval Approval | ||||
| 	if err := ctx.WaitForExternalEvent("approval_received", time.Hour*24).Await(&approval); err != nil { | ||||
| 		// Assuming that a timeout has taken place - in any case; an error. | ||||
| 		return "error/cancelled", err | ||||
| 	} | ||||
| 	// The order was approved | ||||
| 	if err := ctx.CallActivity(PlaceOrder, workflow.ActivityInput(order)).Await(nil); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	return fmt.Sprintf("Approved by %s", approval.Approver), nil | ||||
| } | ||||
| func SendApprovalRequest(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var order Order | ||||
| 	if err := ctx.GetInput(&order); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	fmt.Printf("*** Sending approval request for order: %v\n", order) | ||||
| 	return "", nil | ||||
| } | ||||
| func PlaceOrder(ctx workflow.ActivityContext) (any, error) { | ||||
| 	var order Order | ||||
| 	if err := ctx.GetInput(&order); err != nil { | ||||
| 		return "", err | ||||
| 	} | ||||
| 	fmt.Printf("*** Placing order: %v", order) | ||||
| 	return "", nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| The code that delivers the event to resume the workflow execution is external to the workflow. Workflow events can be delivered to a waiting workflow instance using the [raise event]({{< ref "howto-manage-workflow.md#raise-an-event" >}}) workflow management API, as shown in the following example: | ||||
| 
 | ||||
| {{< tabs Python ".NET" Java >}} | ||||
| {{< tabs Python JavaScript ".NET" Java Go >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--python--> | ||||
|  | @ -705,6 +1281,19 @@ with DaprClient() as d: | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--javascript--> | ||||
| 
 | ||||
| ```javascript | ||||
| import { DaprWorkflowClient } from "@dapr/dapr"; | ||||
| 
 | ||||
| const workflowClient = new DaprWorkflowClient({ daprHost: "localhost", daprPort: "50001" }); | ||||
| 
 | ||||
| // Raise the "approval_received" event on a running workflow instance | ||||
| await workflowClient.raiseEvent(workflowInstanceId, "approval_received", { approver: "Jane Doe" }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--dotnet--> | ||||
| 
 | ||||
|  | @ -729,6 +1318,32 @@ client.raiseEvent(restartingInstanceId, "RestartEvent", "RestartEventPayload"); | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| <!--go--> | ||||
| 
 | ||||
| ```go | ||||
| func raiseEvent() { | ||||
|   daprClient, err := client.NewClient() | ||||
|   if err != nil { | ||||
|     log.Fatalf("failed to initialize the client") | ||||
|   } | ||||
|   err = daprClient.RaiseEventWorkflowBeta1(context.Background(), &client.RaiseEventWorkflowRequest{ | ||||
|     InstanceID: "instance_id", | ||||
|     WorkflowComponent: "dapr", | ||||
|     EventName: "approval_received", | ||||
|     EventData: Approval{ | ||||
|       Approver: "Jane Doe", | ||||
|     }, | ||||
|   }) | ||||
|   if err != nil { | ||||
|     log.Fatalf("failed to raise event on workflow") | ||||
|   } | ||||
|   log.Println("raised an event on specified workflow") | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| External events don't have to be directly triggered by humans. They can also be triggered by other systems. For example, a workflow may need to pause and wait for a payment to be received. In this case, a payment system might publish an event to a pub/sub topic on receipt of a payment, and a listener on that topic can raise an event to the workflow using the raise event workflow API. | ||||
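| 
 | ||||
| As a sketch of that wiring in Go, a topic handler can raise the workflow event when a message arrives. The subscription types come from the go-sdk service packages, and the topic, event name, and instance ID below are illustrative; the handler reuses the `RaiseEventWorkflowBeta1` client call shown above. | ||||
| 
 | ||||
| ```go | ||||
| // Sketch: a pub/sub event handler that resumes a waiting workflow. | ||||
| func paymentHandler(ctx context.Context, e *common.TopicEvent) (retry bool, err error) { | ||||
| 	// In a real app, create the Dapr client once and reuse it. | ||||
| 	daprClient, err := client.NewClient() | ||||
| 	if err != nil { | ||||
| 		return false, err | ||||
| 	} | ||||
| 	// The workflow instance ID would typically be correlated from the event payload. | ||||
| 	err = daprClient.RaiseEventWorkflowBeta1(ctx, &client.RaiseEventWorkflowRequest{ | ||||
| 		InstanceID:        "instance_id", | ||||
| 		WorkflowComponent: "dapr", | ||||
| 		EventName:         "payment_received", | ||||
| 		EventData:         e.Data, | ||||
| 	}) | ||||
| 	return false, err | ||||
| } | ||||
| ``` | ||||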
|  | @ -744,5 +1359,7 @@ External events don't have to be directly triggered by humans. They can also be | |||
| - [Workflow API reference]({{< ref workflow_api.md >}}) | ||||
| - Try out the following examples:  | ||||
|    - [Python](https://github.com/dapr/python-sdk/tree/master/examples/demo_workflow) | ||||
|    - [JavaScript](https://github.com/dapr/js-sdk/tree/main/examples/workflow) | ||||
|    - [.NET](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) | ||||
|    - [Java](https://github.com/dapr/java-sdk/tree/master/examples/src/main/java/io/dapr/examples/workflows) | ||||
|    - [Go](https://github.com/dapr/go-sdk/tree/main/examples/workflow/README.md) | ||||
|  |  | |||
|  | @ -106,7 +106,7 @@ When running on Kubernetes, you can also use references to Kubernetes secrets fo | |||
| |-----------------|----------|----------------------------|------------------------------------------| | ||||
| | `azureClientId` | N        | Client ID (application ID) | `"c7dd251f-811f-4ba2-a905-acd4d3f8f08b"` | | ||||
| 
 | ||||
| Using Managed Identities, the `azureClientId` field is generally recommended. The field is optional when using a system-assigned identity, but may be required when using user-assigned identities. | ||||
| [Using Managed Identities]({{< ref howto-mi.md >}}), the `azureClientId` field is generally recommended. The field is optional when using a system-assigned identity, but may be required when using user-assigned identities. | ||||
| 
 | ||||
| #### Authenticating with Workload Identity on AKS | ||||
| 
 | ||||
|  |  | |||
|  | @ -1,18 +1,35 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "How to: Use Managed Identities" | ||||
| linkTitle: "How to: Use MI" | ||||
| title: "How to: Use managed identities" | ||||
| linkTitle: "How to: Use managed identities" | ||||
| weight: 40000 | ||||
| aliases: | ||||
|   - "/developing-applications/integrations/azure/azure-authentication/howto-msi/" | ||||
| description: "Learn how to use Managed Identities" | ||||
| description: "Learn how to use managed identities" | ||||
| --- | ||||
| 
 | ||||
| Using Managed Identities (MI), authentication happens automatically by virtue of your application running on top of an Azure service that has an assigned identity.  | ||||
| Using managed identities, authentication happens automatically by virtue of your application running on top of an Azure service that has either a system-managed or a user-assigned identity.  | ||||
| 
 | ||||
| For example, let's say you enable a managed service identity for an Azure VM, Azure Container App, or an Azure Kubernetes Service cluster. When you do, an Microsoft Entra ID application is created for you and automatically assigned to the service. Your Dapr services can then leverage that identity to authenticate with Microsoft Entra ID, transparently and without you having to specify any credentials. | ||||
| To get started, you need to enable a managed identity as a service option/functionality in various Azure services, independent of Dapr. Enabling this creates an identity (or application) in Microsoft Entra ID (previously Azure Active Directory) under the hood. | ||||
| 
 | ||||
| To get started with managed identities, you need to assign an identity to a new or existing Azure resource. The instructions depend on the service use. Check the following official documentation for the most appropriate instructions: | ||||
| Your Dapr services can then leverage that identity to authenticate with Microsoft Entra ID, transparently and without you having to specify any credentials. | ||||
| 
 | ||||
| In this guide, you learn how to: | ||||
| - Grant your identity access to the Azure service you're using, following the official Azure documentation | ||||
| - Set up either a system-managed or user-assigned identity in your component | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| In your component YAML, you only need the [`azureClientId` property]({{< ref "authenticating-azure.md#authenticating-with-managed-identities-mi" >}}) if you're using a user-assigned identity. Otherwise, omit the property and the system-managed identity is used by default. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Grant access to the service | ||||
| 
 | ||||
| Grant your system-managed or user-assigned identity the requisite Microsoft Entra ID role assignments or custom permissions on a particular Azure resource (as identified by the resource scope). | ||||
| 
 | ||||
| You can assign a managed identity to a new or existing Azure resource. The instructions depend on the service you use. Check the following official documentation for the most appropriate instructions: | ||||
| 
 | ||||
| - [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/azure/aks/use-managed-identity) | ||||
| - [Azure Container Apps (ACA)](https://learn.microsoft.com/azure/container-apps/dapr-overview?tabs=bicep1%2Cyaml#using-managed-identity) | ||||
|  | @ -21,9 +38,7 @@ To get started with managed identities, you need to assign an identity to a new | |||
| - [Azure Virtual Machines Scale Sets (VMSS)](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss) | ||||
| - [Azure Container Instance (ACI)](https://docs.microsoft.com/azure/container-instances/container-instances-managed-identity) | ||||
| 
 | ||||
| Dapr supports both system-assigned and user-assigned identities. | ||||
| 
 | ||||
| After assigning an identity to your Azure resource, you will have credentials such as: | ||||
| After assigning a system-managed identity to your Azure resource, you'll have credentials like the following: | ||||
| 
 | ||||
| ```json | ||||
| { | ||||
|  | @ -34,7 +49,95 @@ After assigning an identity to your Azure resource, you will have credentials su | |||
| } | ||||
| ``` | ||||
| 
 | ||||
| From the returned values, take note of **`principalId`**, which is the Service Principal ID that was created. You'll use that to grant access to Azure resources to your identity. | ||||
| From the returned values, take note of the **`principalId`** value, which is [the Service Principal ID created for your identity]({{< ref "howto-aad.md#create-a-service-principal" >}}). Use it to grant the identity access permissions on your Azure resources. | ||||
| 
 | ||||
| {{% alert title="Managed identities in Azure Container Apps" color="primary" %}} | ||||
| Every container app has a completely different system-managed identity, making it difficult to manage the required role assignments across multiple apps. | ||||
| 
 | ||||
| Instead, it's _strongly recommended_ to use a user-assigned identity and attach it to all the apps that should load the component. Then, scope the component to those same apps. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Set up identities in your component | ||||
| 
 | ||||
| By default, Dapr Azure components look up the system-managed identity of the environment they run in and authenticate as that. Generally, for a given component, there are no required properties to use system-managed identity other than the service name, storage account name, and any other properties required by the Azure service (listed in the documentation). | ||||
| 
 | ||||
| For user-assigned identities, in addition to the basic properties required by the service you're using, you need to specify the `azureClientId` (user-assigned identity ID) in the component. Make sure the user-assigned identity is attached to the Azure service Dapr is running on, or else you won't be able to use that identity. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| If the sidecar loads a component which does not specify `azureClientId`, it only tries the system-assigned identity. If the component specifies the `azureClientId` property, it only tries the particular user-assigned identity with that ID. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| The following examples demonstrate setting up either a system-managed or user-assigned identity in an Azure KeyVault secrets component. | ||||
| 
 | ||||
| {{< tabs "System-managed" "User-assigned" "Kubernetes" >}} | ||||
| 
 | ||||
|  <!-- system managed --> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| If you set up a system-managed identity using an Azure KeyVault component, the YAML would look like the following: | ||||
| 
 | ||||
| ```yml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: azurekeyvault | ||||
| spec: | ||||
|   type: secretstores.azure.keyvault | ||||
|   version: v1 | ||||
|   metadata: | ||||
|   - name: vaultName | ||||
|     value: mykeyvault | ||||
| ``` | ||||
| 
 | ||||
| In this example, Dapr looks up the system-managed identity of the environment it runs in and uses it to communicate with the `mykeyvault` vault. Next, grant your system-managed identity access to the desired service. | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
|  <!-- user assigned --> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| If you set up a user-assigned identity using an Azure KeyVault component, the YAML would look like the following: | ||||
| 
 | ||||
| ```yml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: azurekeyvault | ||||
| spec: | ||||
|   type: secretstores.azure.keyvault | ||||
|   version: v1 | ||||
|   metadata: | ||||
|   - name: vaultName | ||||
|     value: mykeyvault | ||||
|   - name: azureClientId | ||||
|     value: someAzureIdentityClientIDHere | ||||
| ``` | ||||
| 
 | ||||
| Once you've set up the component YAML with the `azureClientId` property, you can grant your user-assigned identity access to your service. | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
|  <!-- k8s --> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| For component configuration in Kubernetes or AKS, refer to the [Workload Identity guidance.](https://learn.microsoft.com/azure/aks/workload-identity-overview?tabs=dotnet) | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| ## Troubleshooting | ||||
| 
 | ||||
| If you receive an error or your managed identity doesn't work as expected, check if the following items are true: | ||||
| 
 | ||||
| - The system-managed identity or user-assigned identity doesn't have the required permissions on the target resource. | ||||
| - The user-assigned identity isn't attached to the Azure service (container app or pod) from which you're loading the component. This can especially happen if: | ||||
|   - You have an unscoped component (a component loaded by all container apps in an environment, or all deployments in your AKS cluster).  | ||||
|   - You attached the user-assigned identity to only one container app or one deployment in AKS (using [Azure Workload Identity](https://learn.microsoft.com/azure/aks/workload-identity-overview?tabs=dotnet)).  | ||||
|    | ||||
|   In this scenario, since the identity isn't attached to every other container app or deployment in AKS, the component referencing the user-assigned identity via `azureClientId` fails. | ||||
| 
 | ||||
| > **Best practice:** When using user-assigned identities, make sure to scope your components to specific apps! | ||||
| 
 | ||||
| ## Next steps | ||||
| 
 | ||||
|  |  | |||
|  | @ -3,10 +3,11 @@ type: docs | |||
| title: "Use the Dapr API" | ||||
| linkTitle: "Use the Dapr API" | ||||
| weight: 30 | ||||
| description: "Run a Dapr sidecar and try out the state API" | ||||
| description: "Run a Dapr sidecar and try out the state management API" | ||||
| --- | ||||
| 
 | ||||
| In this guide, you'll simulate an application by running the sidecar and calling the API directly. After running Dapr using the Dapr CLI, you'll: | ||||
| In this guide, you'll simulate an application by running the sidecar and calling the state management API directly.  | ||||
| After running Dapr using the Dapr CLI, you'll: | ||||
| 
 | ||||
| - Save a state object. | ||||
| - Read/get the state object. | ||||
|  | @ -21,7 +22,8 @@ In this guide, you'll simulate an application by running the sidecar and calling | |||
| 
 | ||||
| ### Step 1: Run the Dapr sidecar | ||||
| 
 | ||||
| The [`dapr run`]({{< ref dapr-run.md >}}) command launches an application, together with a sidecar. | ||||
| The [`dapr run`]({{< ref dapr-run.md >}}) command normally runs your application and a Dapr sidecar. In this case,  | ||||
| it only runs the sidecar since you are interacting with the state management API directly. | ||||
| 
 | ||||
| Launch a Dapr sidecar that will listen on port 3500 for a blank application named `myapp`: | ||||
| 
 | ||||
|  |  | |||
|  | @ -15,6 +15,11 @@ You'll use the Dapr CLI as the main tool for various Dapr-related tasks. You can | |||
| 
 | ||||
| The Dapr CLI works with both [self-hosted]({{< ref self-hosted >}}) and [Kubernetes]({{< ref Kubernetes >}}) environments. | ||||
| 
 | ||||
| {{% alert title="Before you begin" color="primary" %}} | ||||
| In Docker Desktop's advanced options, verify you've allowed the default Docker socket to be used. | ||||
|    <img src="/images/docker-desktop-setting.png" width=800 style="padding-bottom:15px;"> | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ### Step 1: Install the Dapr CLI | ||||
| 
 | ||||
| {{< tabs Linux Windows MacOS Binaries>}} | ||||
|  |  | |||
|  | @ -22,10 +22,14 @@ Dapr initialization includes: | |||
| 1. Creating a **default components folder** with component definitions for the above. | ||||
| 1. Running a **Dapr placement service container instance** for local actor support. | ||||
| 
 | ||||
| {{% alert title="Docker" color="primary" %}} | ||||
| The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{<ref self-hosted-no-docker.md>}})), the next steps in this guide assume the recommended Docker development environment. | ||||
| {{% alert title="Kubernetes Development Environment" color="primary" %}} | ||||
| To initialize Dapr in your local or remote **Kubernetes** cluster for development (including the Redis and Zipkin containers listed above), see [how to initialize Dapr for development on Kubernetes]({{<ref "kubernetes-deploy.md#install-dapr-from-the-official-dapr-helm-chart-with-development-flag">}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| You can also install [Podman](https://podman.io/) in place of Docker. Read more about [initializing Dapr using Podman]({{<ref dapr-init.md>}}). | ||||
| {{% alert title="Docker" color="primary" %}} | ||||
| The recommended development environment requires [Docker](https://docs.docker.com/install/). While you can [initialize Dapr without a dependency on Docker]({{< ref self-hosted-no-docker.md >}}), the next steps in this guide assume the recommended Docker development environment. | ||||
| 
 | ||||
| You can also install [Podman](https://podman.io/) in place of Docker. Read more about [initializing Dapr using Podman]({{< ref dapr-init.md >}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ### Step 1: Open an elevated terminal | ||||
|  | @ -54,12 +58,36 @@ Run Windows Terminal or command prompt as administrator. | |||
| 
 | ||||
| ### Step 2: Run the init CLI command | ||||
| 
 | ||||
| {{< tabs "Linux/MacOS" "Windows">}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Install the latest Dapr runtime binaries: | ||||
| 
 | ||||
| ```bash | ||||
| dapr init | ||||
| ``` | ||||
| 
 | ||||
| **If you are installing on macOS with Apple Silicon and Docker,** you may need to perform the following workaround to enable `dapr init` to talk to Docker without using Kubernetes. | ||||
| 1. Navigate to **Docker Desktop** > **Settings** > **Advanced**. | ||||
| 1. Select the **Allow the default Docker socket to be used (requires password)** checkbox. | ||||
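| 
 | ||||
| A quick way to confirm the CLI can now reach Docker is to list the containers that `dapr init` creates (an illustrative check; container names may vary by version): | ||||
| 
 | ||||
| ```bash | ||||
| # Look for the dapr_placement, dapr_redis, and dapr_zipkin containers | ||||
| docker ps | ||||
| ``` | ||||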
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| Install the latest Dapr runtime binaries: | ||||
| 
 | ||||
| ```bash | ||||
| dapr init | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| [See the troubleshooting guide if you encounter any error messages regarding Docker not being installed or running.]({{< ref "common_issues.md#dapr-cant-connect-to-docker-when-installing-the-dapr-cli" >}}) | ||||
| 
 | ||||
| ### Step 3: Verify Dapr version | ||||
| 
 | ||||
| ```bash | ||||
|  | @ -112,9 +140,14 @@ ls $HOME/.dapr | |||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| You can verify using either PowerShell or command line. If using PowerShell, run: | ||||
| ```powershell | ||||
| explorer "%USERPROFILE%\.dapr\" | ||||
| explorer "$env:USERPROFILE\.dapr" | ||||
| ``` | ||||
| 
 | ||||
| If using command line, run:  | ||||
| ```cmd | ||||
| explorer "%USERPROFILE%\.dapr" | ||||
| ``` | ||||
| 
 | ||||
| **Result:** | ||||
|  |  | |||
|  | @ -51,6 +51,20 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory: | |||
| cd pub_sub/python/sdk | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `checkout`, `order-processor`, and `order-processor-fastapi` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./checkout | ||||
| pip3 install -r requirements.txt | ||||
| cd .. | ||||
| cd ./order-processor | ||||
| pip3 install -r requirements.txt | ||||
| cd .. | ||||
| cd ./order-processor-fastapi | ||||
| pip3 install -r requirements.txt | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the publisher and subscriber | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -215,6 +229,17 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory: | |||
| cd pub_sub/javascript/sdk | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| npm install | ||||
| cd .. | ||||
| cd ./checkout | ||||
| npm install | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the publisher and subscriber | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -352,6 +377,18 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory: | |||
| cd pub_sub/csharp/sdk | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| dotnet restore | ||||
| dotnet build | ||||
| cd ../checkout | ||||
| dotnet restore | ||||
| dotnet build | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the publisher and subscriber | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -497,6 +534,17 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory: | |||
| cd pub_sub/java/sdk | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| mvn clean install | ||||
| cd .. | ||||
| cd ./checkout | ||||
| mvn clean install | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the publisher and subscriber | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -647,6 +695,16 @@ From the root of the Quickstarts directory, navigate into the pub/sub directory: | |||
| cd pub_sub/go/sdk | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| go build . | ||||
| cd ../checkout | ||||
| go build . | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the publisher and subscriber | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  |  | |||
|  | @ -48,6 +48,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire | |||
| cd service_invocation/python/http | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| pip3 install -r requirements.txt | ||||
| cd ../checkout | ||||
| pip3 install -r requirements.txt | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the `order-processor` and `checkout` services | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -184,6 +194,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire | |||
| cd service_invocation/javascript/http | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| npm install | ||||
| cd ../checkout | ||||
| npm install | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the `order-processor` and `checkout` services | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -314,6 +334,18 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire | |||
| cd service_invocation/csharp/http | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| dotnet restore | ||||
| dotnet build | ||||
| cd ../checkout | ||||
| dotnet restore | ||||
| dotnet build | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the `order-processor` and `checkout` services | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -448,6 +480,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire | |||
| cd service_invocation/java/http | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| mvn clean install | ||||
| cd ../checkout | ||||
| mvn clean install | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the `order-processor` and `checkout` services | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  | @ -577,6 +619,16 @@ From the root of the Quickstart clone directory, navigate to the quickstart dire | |||
| cd service_invocation/go/http | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` and `checkout` apps: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| go build . | ||||
| cd ../checkout | ||||
| go build . | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the `order-processor` and `checkout` services | ||||
| 
 | ||||
| With the following command, simultaneously run the following services alongside their own Dapr sidecars: | ||||
|  |  | |||
|  | @ -48,6 +48,12 @@ In a terminal window, navigate to the `order-processor` directory. | |||
| cd state_management/python/sdk/order-processor | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies: | ||||
| 
 | ||||
| ```bash | ||||
| pip3 install -r requirements.txt  | ||||
| ``` | ||||
| 
 | ||||
| Run the `order-processor` service alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). | ||||
| 
 | ||||
| ```bash | ||||
|  | @ -163,6 +169,14 @@ Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quic | |||
| git clone https://github.com/dapr/quickstarts.git | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies for the `order-processor` app: | ||||
| 
 | ||||
| ```bash | ||||
| cd ./order-processor | ||||
| npm install | ||||
| cd .. | ||||
| ``` | ||||
| 
 | ||||
| ### Step 2: Manipulate service state | ||||
| 
 | ||||
| In a terminal window, navigate to the `order-processor` directory. | ||||
|  | @ -171,6 +185,12 @@ In a terminal window, navigate to the `order-processor` directory. | |||
| cd state_management/javascript/sdk/order-processor | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies: | ||||
| 
 | ||||
| ```bash | ||||
| npm install | ||||
| ``` | ||||
| 
 | ||||
| Run the `order-processor` service alongside a Dapr sidecar. | ||||
| 
 | ||||
| ```bash | ||||
|  | @ -297,6 +317,13 @@ In a terminal window, navigate to the `order-processor` directory. | |||
| cd state_management/csharp/sdk/order-processor | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies: | ||||
| 
 | ||||
| ```bash | ||||
| dotnet restore | ||||
| dotnet build | ||||
| ``` | ||||
| 
 | ||||
| Run the `order-processor` service alongside a Dapr sidecar. | ||||
| 
 | ||||
| ```bash | ||||
|  | @ -557,6 +584,12 @@ In a terminal window, navigate to the `order-processor` directory. | |||
| cd state_management/go/sdk/order-processor | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies: | ||||
| 
 | ||||
| ```bash | ||||
| go build . | ||||
| ``` | ||||
| 
 | ||||
| Run the `order-processor` service alongside a Dapr sidecar. | ||||
| 
 | ||||
| ```bash | ||||
|  |  | |||
|  | @ -20,8 +20,8 @@ In this guide, you'll: | |||
| 
 | ||||
| <img src="/images/workflow-quickstart-overview.png" width=800 style="padding-bottom:15px;"> | ||||
| 
 | ||||
| 
 | ||||
| {{< tabs "Python" ".NET" "Java" >}} | ||||
| Select your preferred language-specific Dapr SDK before proceeding with the Quickstart. | ||||
| {{< tabs "Python" "JavaScript" ".NET" "Java" Go >}} | ||||
| 
 | ||||
|  <!-- Python --> | ||||
| {{% codetab %}} | ||||
|  | @ -68,14 +68,12 @@ pip3 install -r requirements.txt | |||
| 
 | ||||
| ### Step 3: Run the order processor app | ||||
| 
 | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar: | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): | ||||
| 
 | ||||
| ```bash | ||||
| dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py | ||||
| dapr run -f . | ||||
| ``` | ||||
| 
 | ||||
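| The `-f .` flag reads a Multi-App Run template file (`dapr.yaml`) from the current directory. A hypothetical sketch of such a template for this quickstart (check the file shipped with the sample for the exact values): | ||||
| 
 | ||||
| ```yaml | ||||
| version: 1 | ||||
| apps: | ||||
|   - appID: order-processor | ||||
|     appDirPath: . | ||||
|     resourcesPath: ../../../components/ | ||||
|     command: ["python3", "app.py"] | ||||
| ``` | ||||
| 
 | ||||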
| > **Note:** Since Python3.exe is not defined in Windows, you may need to use `python app.py` instead of `python3 app.py`. | ||||
| 
 | ||||
| This starts the `order-processor` app with a unique workflow ID and runs the workflow activities. | ||||
| 
 | ||||
| Expected output: | ||||
|  | @ -113,7 +111,7 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho | |||
| 
 | ||||
| ### What happened? | ||||
| 
 | ||||
| When you ran `dapr run --app-id order-processor --resources-path ../../../components/ -- python3 app.py`: | ||||
| When you ran `dapr run -f .`: | ||||
| 
 | ||||
| 1. A unique order ID for the workflow is generated (in the above example, `f4e1926e-3721-478d-be8a-f5bebd1995da`) and the workflow is scheduled. | ||||
| 1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received. | ||||
|  | @ -265,6 +263,226 @@ In `workflow.py`, the workflow is defined as a class with all of its associated | |||
|         message=f'Order {order_id} has completed!')) | ||||
|     return OrderResult(processed=True)  | ||||
| ``` | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
|  <!-- JavaScript --> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| The `order-processor` console app starts and manages the lifecycle of an order processing workflow that stores and retrieves data in a state store. The workflow consists of five workflow activities, or tasks: | ||||
| 
 | ||||
| - `notifyActivity`: Utilizes a logger to print out messages throughout the workflow. These messages notify the user when there is insufficient inventory, their payment couldn't be processed, and more. | ||||
| - `reserveInventoryActivity`: Checks the state store to ensure that there is enough inventory present for purchase. | ||||
| - `requestApprovalActivity`: Requests approval for orders over a certain threshold. | ||||
| - `processPaymentActivity`: Processes and authorizes the payment. | ||||
| - `updateInventoryActivity`: Updates the state store with the new remaining inventory value. | ||||
| 
 | ||||
| 
 | ||||
| ### Step 1: Pre-requisites | ||||
| 
 | ||||
| For this example, you will need: | ||||
| 
 | ||||
| - [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started). | ||||
| - [Latest Node.js installed](https://nodejs.org/download/). | ||||
| <!-- IGNORE_LINKS --> | ||||
| - [Docker Desktop](https://www.docker.com/products/docker-desktop) | ||||
| <!-- END_IGNORE --> | ||||
| 
 | ||||
| ### Step 2: Set up the environment | ||||
| 
 | ||||
| Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows). | ||||
| 
 | ||||
| ```bash | ||||
| git clone https://github.com/dapr/quickstarts.git | ||||
| ``` | ||||
| 
 | ||||
| In a new terminal window, navigate to the `order-processor` directory: | ||||
| 
 | ||||
| ```bash | ||||
| cd workflows/javascript/sdk/order-processor | ||||
| ``` | ||||
| 
 | ||||
| Install the dependencies: | ||||
| 
 | ||||
| ```bash | ||||
| npm install | ||||
| npm run build | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the order processor app | ||||
| 
 | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): | ||||
| 
 | ||||
| ```bash | ||||
| dapr run -f . | ||||
| ``` | ||||
| 
 | ||||
| This starts the `order-processor` app with a unique workflow ID and runs the workflow activities. | ||||
| 
 | ||||
| Expected output: | ||||
| 
 | ||||
| ``` | ||||
| == APP - workflowApp == == APP == Orchestration scheduled with ID: 0c332155-1e02-453a-a333-28cfc7777642 | ||||
| == APP - workflowApp == == APP == Waiting 30 seconds for instance 0c332155-1e02-453a-a333-28cfc7777642 to complete... | ||||
| == APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642' | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 0 history event... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, EXECUTIONSTARTED=1] | ||||
| == APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s) | ||||
| == APP - workflowApp == == APP == Received "Activity Request" work item | ||||
| == APP - workflowApp == == APP == Received order 0c332155-1e02-453a-a333-28cfc7777642 for 10 item1 at a total cost of 100 | ||||
| == APP - workflowApp == == APP == Activity notifyActivity completed with output undefined (0 chars) | ||||
| == APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642' | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 3 history event... | ||||
| == APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1] | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s) | ||||
| == APP - workflowApp == == APP == Received "Activity Request" work item | ||||
| == APP - workflowApp == == APP == Reserving inventory for 0c332155-1e02-453a-a333-28cfc7777642 of 10 item1 | ||||
| == APP - workflowApp == == APP == 2024-02-16T03:15:59.498Z INFO [HTTPClient, HTTPClient] Sidecar Started | ||||
| == APP - workflowApp == == APP == There are 100 item1 in stock | ||||
| == APP - workflowApp == == APP == Activity reserveInventoryActivity completed with output {"success":true,"inventoryItem":{"perItemCost":100,"quantity":100,"itemName":"item1"}} (86 chars) | ||||
| == APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642' | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 6 history event... | ||||
| == APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1] | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s) | ||||
| == APP - workflowApp == == APP == Received "Activity Request" work item | ||||
| == APP - workflowApp == == APP == Processing payment for order item1 | ||||
| == APP - workflowApp == == APP == Payment of 100 for 10 item1 processed successfully | ||||
| == APP - workflowApp == == APP == Activity processPaymentActivity completed with output true (4 chars) | ||||
| == APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642' | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 9 history event... | ||||
| == APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1] | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s) | ||||
| == APP - workflowApp == == APP == Received "Activity Request" work item | ||||
| == APP - workflowApp == == APP == Updating inventory for 0c332155-1e02-453a-a333-28cfc7777642 of 10 item1 | ||||
| == APP - workflowApp == == APP == Inventory updated for 0c332155-1e02-453a-a333-28cfc7777642, there are now 90 item1 in stock | ||||
| == APP - workflowApp == == APP == Activity updateInventoryActivity completed with output {"success":true,"inventoryItem":{"perItemCost":100,"quantity":90,"itemName":"item1"}} (85 chars) | ||||
| == APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642' | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 12 history event... | ||||
| == APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1] | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Waiting for 1 task(s) and 0 event(s) to complete... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s) | ||||
| == APP - workflowApp == == APP == Received "Activity Request" work item | ||||
| == APP - workflowApp == == APP == order 0c332155-1e02-453a-a333-28cfc7777642 processed successfully! | ||||
| == APP - workflowApp == == APP == Activity notifyActivity completed with output undefined (0 chars) | ||||
| == APP - workflowApp == == APP == Received "Orchestrator Request" work item with instance id '0c332155-1e02-453a-a333-28cfc7777642' | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Rebuilding local state with 15 history event... | ||||
| == APP - workflowApp == == APP == Processing order 0c332155-1e02-453a-a333-28cfc7777642... | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Processing 2 new history event(s): [ORCHESTRATORSTARTED=1, TASKCOMPLETED=1] | ||||
| == APP - workflowApp == == APP == Order 0c332155-1e02-453a-a333-28cfc7777642 processed successfully! | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Orchestration completed with status COMPLETED | ||||
| == APP - workflowApp == == APP == 0c332155-1e02-453a-a333-28cfc7777642: Returning 1 action(s) | ||||
| == APP - workflowApp == time="2024-02-15T21:15:59.5589687-06:00" level=info msg="0c332155-1e02-453a-a333-28cfc7777642: 'orderProcessingWorkflow' completed with a COMPLETED status." app_id=activity-sequence-workflow instance=kaibocai-devbox scope=wfengine.backend type=log ver=1.12.4 | ||||
| == APP - workflowApp == == APP == Instance 0c332155-1e02-453a-a333-28cfc7777642 completed | ||||
| ``` | ||||
| 
 | ||||
| ### (Optional) Step 4: View in Zipkin | ||||
| 
 | ||||
| Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/openzipkin/zipkin/) Docker container. If the container has stopped running, launch the Zipkin Docker container with the following command: | ||||
| 
 | ||||
| ``` | ||||
| docker run -d -p 9411:9411 openzipkin/zipkin | ||||
| ``` | ||||
| 
 | ||||
| View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).  | ||||
| 
 | ||||
| <img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;"> | ||||
| 
 | ||||
| ### What happened? | ||||
| 
 | ||||
| When you ran `dapr run -f .`: | ||||
| 
 | ||||
| 1. A unique order ID for the workflow is generated (in the above example, `0c332155-1e02-453a-a333-28cfc7777642`) and the workflow is scheduled. | ||||
| 1. The `notifyActivity` workflow activity sends a notification saying an order for 10 cars has been received. | ||||
| 1. The `reserveInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock. | ||||
| 1. Your workflow starts and notifies you of its status. | ||||
| 1. The `processPaymentActivity` workflow activity begins processing payment for order `0c332155-1e02-453a-a333-28cfc7777642` and confirms if successful. | ||||
| 1. The `updateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed. | ||||
| 1. The `notifyActivity` workflow activity sends a notification saying that order `0c332155-1e02-453a-a333-28cfc7777642` has completed. | ||||
| 1. The workflow terminates as completed. | ||||
| 
 | ||||
| #### `order-processor/workflowApp.ts`  | ||||
| 
 | ||||
| In the application file: | ||||
| - The unique workflow order ID is generated | ||||
| - The workflow is scheduled | ||||
| - The workflow status is retrieved | ||||
| - The workflow and the workflow activities it invokes are registered | ||||
| 
 | ||||
| ```javascript | ||||
| import { DaprWorkflowClient, WorkflowRuntime, DaprClient } from "@dapr/dapr-dev"; | ||||
| import { InventoryItem, OrderPayload } from "./model"; | ||||
| import { notifyActivity, orderProcessingWorkflow, processPaymentActivity, requestApprovalActivity, reserveInventoryActivity, updateInventoryActivity } from "./orderProcessingWorkflow"; | ||||
| 
 | ||||
| async function start() { | ||||
|   // Update the gRPC client and worker to use a local address and port | ||||
|   const workflowClient = new DaprWorkflowClient(); | ||||
|   const workflowWorker = new WorkflowRuntime(); | ||||
| 
 | ||||
|   const daprClient = new DaprClient(); | ||||
|   const storeName = "statestore"; | ||||
| 
 | ||||
|   const inventory = new InventoryItem("item1", 100, 100); | ||||
|   const key = inventory.itemName; | ||||
| 
 | ||||
|   await daprClient.state.save(storeName, [ | ||||
|     { | ||||
|       key: key, | ||||
|       value: inventory, | ||||
|     } | ||||
|   ]); | ||||
| 
 | ||||
|   const order = new OrderPayload("item1", 100, 10); | ||||
| 
 | ||||
|   workflowWorker | ||||
|   .registerWorkflow(orderProcessingWorkflow) | ||||
|   .registerActivity(notifyActivity) | ||||
|   .registerActivity(reserveInventoryActivity) | ||||
|   .registerActivity(requestApprovalActivity) | ||||
|   .registerActivity(processPaymentActivity) | ||||
|   .registerActivity(updateInventoryActivity); | ||||
| 
 | ||||
|   // Wrap the worker startup in a try-catch block to handle any errors during startup | ||||
|   try { | ||||
|     await workflowWorker.start(); | ||||
|     console.log("Workflow runtime started successfully"); | ||||
|   } catch (error) { | ||||
|     console.error("Error starting workflow runtime:", error); | ||||
|   } | ||||
| 
 | ||||
|   // Schedule a new orchestration | ||||
|   try { | ||||
|     const id = await workflowClient.scheduleNewWorkflow(orderProcessingWorkflow, order); | ||||
|     console.log(`Orchestration scheduled with ID: ${id}`); | ||||
| 
 | ||||
|     // Wait for orchestration completion | ||||
|     const state = await workflowClient.waitForWorkflowCompletion(id, undefined, 30); | ||||
| 
 | ||||
|     console.log(`Orchestration completed! Result: ${state?.serializedOutput}`); | ||||
|   } catch (error) { | ||||
|     console.error("Error scheduling or waiting for orchestration:", error); | ||||
|     throw error; | ||||
|   } | ||||
| 
 | ||||
|   await workflowWorker.stop(); | ||||
|   await workflowClient.stop(); | ||||
| } | ||||
| 
 | ||||
| start().catch((e) => { | ||||
|   console.error(e); | ||||
|   process.exit(1); | ||||
| }); | ||||
| ``` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
|  <!-- .NET --> | ||||
|  | @ -303,10 +521,10 @@ cd workflows/csharp/sdk/order-processor | |||
| 
 | ||||
| ### Step 3: Run the order processor app | ||||
| 
 | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar: | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): | ||||
| 
 | ||||
| ```bash | ||||
| dapr run --app-id order-processor dotnet run | ||||
| dapr run -f . | ||||
| ``` | ||||
| 
 | ||||
| This starts the `order-processor` app with a unique workflow ID and runs the workflow activities. | ||||
|  | @ -355,7 +573,7 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho | |||
| 
 | ||||
| ### What happened? | ||||
| 
 | ||||
| When you ran `dapr run --app-id order-processor dotnet run`: | ||||
| When you ran `dapr run -f .`: | ||||
| 
 | ||||
| 1. A unique order ID for the workflow is generated (in the above example, `6d2abcc9`) and the workflow is scheduled. | ||||
| 1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received. | ||||
|  | @ -559,10 +777,10 @@ mvn clean install | |||
| 
 | ||||
| ### Step 3: Run the order processor app | ||||
| 
 | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar: | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): | ||||
| 
 | ||||
| ```bash | ||||
| dapr run --app-id WorkflowConsoleApp --resources-path ../../../components/ --dapr-grpc-port 50001 -- java -jar target/OrderProcessingService-0.0.1-SNAPSHOT.jar io.dapr.quickstarts.workflows.WorkflowConsoleApp | ||||
| dapr run -f . | ||||
| ``` | ||||
| 
 | ||||
| This starts the `order-processor` app with a unique workflow ID and runs the workflow activities. | ||||
|  | @ -614,7 +832,7 @@ View the workflow trace spans in the Zipkin web UI (typically at `http://localho | |||
| 
 | ||||
| ### What happened? | ||||
| 
 | ||||
| When you ran `dapr run`: | ||||
| When you ran `dapr run -f .`: | ||||
| 
 | ||||
| 1. A unique order ID for the workflow is generated (in the above example, `edceba90-9c45-4be8-ad40-60d16e060797`) and the workflow is scheduled. | ||||
| 1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received. | ||||
|  | @ -852,6 +1070,250 @@ The `Activities` directory holds the four workflow activities used by the workfl | |||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
|  <!-- Go --> | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| 
 | ||||
| The `order-processor` console app starts and manages the `OrderProcessingWorkflow` workflow, which simulates purchasing items from a store. The workflow consists of five unique workflow activities, or tasks: | ||||
| 
 | ||||
| - `NotifyActivity`: Utilizes a logger to print out messages throughout the workflow. These messages notify you when: | ||||
|   - You have insufficient inventory | ||||
|   - Your payment couldn't be processed, etc. | ||||
| - `ProcessPaymentActivity`: Processes and authorizes the payment. | ||||
| - `VerifyInventoryActivity`: Checks the state store to ensure there is enough inventory present for purchase. | ||||
| - `UpdateInventoryActivity`: Removes the requested items from the state store and updates the store with the new remaining inventory value. | ||||
| - `RequestApprovalActivity`: Seeks approval from the manager if payment is greater than 50,000 USD. | ||||
| 
 | ||||
| ### Step 1: Pre-requisites | ||||
| 
 | ||||
| For this example, you will need: | ||||
| 
 | ||||
| - [Dapr CLI and initialized environment](https://docs.dapr.io/getting-started). | ||||
| - [Latest version of Go](https://go.dev/dl/). | ||||
| <!-- IGNORE_LINKS --> | ||||
| - [Docker Desktop](https://www.docker.com/products/docker-desktop) | ||||
| <!-- END_IGNORE --> | ||||
| 
 | ||||
| ### Step 2: Set up the environment | ||||
| 
 | ||||
| Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quickstarts/tree/master/workflows). | ||||
| 
 | ||||
| ```bash | ||||
| git clone https://github.com/dapr/quickstarts.git | ||||
| ``` | ||||
| 
 | ||||
| In a new terminal window, navigate to the `order-processor` directory: | ||||
| 
 | ||||
| ```bash | ||||
| cd workflows/go/sdk/order-processor | ||||
| ``` | ||||
| 
 | ||||
| ### Step 3: Run the order processor app | ||||
| 
 | ||||
| In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): | ||||
| 
 | ||||
| ```bash | ||||
| dapr run -f . | ||||
| ``` | ||||
| 
 | ||||
| This starts the `order-processor` app with a unique workflow ID and runs the workflow activities. | ||||
| 
 | ||||
| Expected output: | ||||
| 
 | ||||
| ```bash | ||||
| == APP - order-processor == *** Welcome to the Dapr Workflow console app sample! | ||||
| == APP - order-processor == *** Using this app, you can place orders that start workflows. | ||||
| == APP - order-processor == dapr client initializing for: 127.0.0.1:50056 | ||||
| == APP - order-processor == adding base stock item: paperclip | ||||
| == APP - order-processor == 2024/02/01 12:59:52 work item listener started | ||||
| == APP - order-processor == INFO: 2024/02/01 12:59:52 starting background processor | ||||
| == APP - order-processor == adding base stock item: cars | ||||
| == APP - order-processor == adding base stock item: computers | ||||
| == APP - order-processor == ==========Begin the purchase of item:========== | ||||
| == APP - order-processor == NotifyActivity: Received order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 for 10 cars - $150000 | ||||
| == APP - order-processor == VerifyInventoryActivity: Verifying inventory for order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 of 10 cars | ||||
| == APP - order-processor == VerifyInventoryActivity: There are 100 cars available for purchase | ||||
| == APP - order-processor == RequestApprovalActivity: Requesting approval for payment of 150000USD for 10 cars | ||||
| == APP - order-processor == NotifyActivity: Payment for order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 has been approved! | ||||
| == APP - order-processor == ProcessPaymentActivity: 48ee83b7-5d80-48d5-97f9-6b372f5480a5 for 10 - cars (150000USD) | ||||
| == APP - order-processor == UpdateInventoryActivity: Checking Inventory for order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 for 10 * cars | ||||
| == APP - order-processor == UpdateInventoryActivity: There are now 90 cars left in stock | ||||
| == APP - order-processor == NotifyActivity: Order 48ee83b7-5d80-48d5-97f9-6b372f5480a5 has completed! | ||||
| == APP - order-processor == Workflow completed - result: COMPLETED | ||||
| == APP - order-processor == Purchase of item is complete | ||||
| ``` | ||||
| 
 | ||||
| Stop the Dapr workflow with `CTRL+C` or: | ||||
| 
 | ||||
| ```bash | ||||
| dapr stop -f . | ||||
| ``` | ||||
| 
 | ||||
| ### (Optional) Step 4: View in Zipkin | ||||
| 
 | ||||
| Running `dapr init` launches the [openzipkin/zipkin](https://hub.docker.com/r/openzipkin/zipkin/) Docker container. If the container has stopped running, launch the Zipkin Docker container with the following command: | ||||
| 
 | ||||
| ``` | ||||
| docker run -d -p 9411:9411 openzipkin/zipkin | ||||
| ``` | ||||
| 
 | ||||
| View the workflow trace spans in the Zipkin web UI (typically at `http://localhost:9411/zipkin/`).  | ||||
| 
 | ||||
| <img src="/images/workflow-trace-spans-zipkin.png" width=800 style="padding-bottom:15px;"> | ||||
| 
 | ||||
| ### What happened? | ||||
| 
 | ||||
| When you ran `dapr run -f .`: | ||||
| 
 | ||||
| 1. A unique order ID for the workflow is generated (in the above example, `48ee83b7-5d80-48d5-97f9-6b372f5480a5`) and the workflow is scheduled. | ||||
| 1. The `NotifyActivity` workflow activity sends a notification saying an order for 10 cars has been received. | ||||
| 1. The `VerifyInventoryActivity` workflow activity checks the inventory data, determines if you can supply the ordered item, and responds with the number of cars in stock. | ||||
| 1. Your workflow starts and notifies you of its status. | ||||
| 1. The `ProcessPaymentActivity` workflow activity begins processing payment for order `48ee83b7-5d80-48d5-97f9-6b372f5480a5` and confirms if successful. | ||||
| 1. The `UpdateInventoryActivity` workflow activity updates the inventory with the current available cars after the order has been processed. | ||||
| 1. The `NotifyActivity` workflow activity sends a notification saying that order `48ee83b7-5d80-48d5-97f9-6b372f5480a5` has completed. | ||||
| 1. The workflow terminates as completed. | ||||
| 
 | ||||
| #### `order-processor/main.go`  | ||||
| 
 | ||||
| In the application's program file: | ||||
| - The unique workflow order ID is generated | ||||
| - The workflow is scheduled | ||||
| - The workflow status is retrieved | ||||
| - The workflow and the workflow activities it invokes are registered | ||||
| 
 | ||||
| ```go | ||||
| func main() { | ||||
| 	fmt.Println("*** Welcome to the Dapr Workflow console app sample!") | ||||
| 	fmt.Println("*** Using this app, you can place orders that start workflows.") | ||||
| 
 | ||||
|   // ... | ||||
| 
 | ||||
|   // Register workflow and activities | ||||
| 	if err := w.RegisterWorkflow(OrderProcessingWorkflow); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	if err := w.RegisterActivity(NotifyActivity); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	if err := w.RegisterActivity(RequestApprovalActivity); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	if err := w.RegisterActivity(VerifyInventoryActivity); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	if err := w.RegisterActivity(ProcessPaymentActivity); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 	if err := w.RegisterActivity(UpdateInventoryActivity); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 
 | ||||
|   // Build and start workflow runtime, pulling and executing tasks | ||||
| 	if err := w.Start(); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| 
 | ||||
| 	daprClient, err := client.NewClient() | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to initialise dapr client: %v", err) | ||||
| 	} | ||||
| 	wfClient, err := workflow.NewClient(workflow.WithDaprClient(daprClient)) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to initialise workflow client: %v", err) | ||||
| 	} | ||||
| 
 | ||||
|   // Check inventory | ||||
| 	inventory := []InventoryItem{ | ||||
| 		{ItemName: "paperclip", PerItemCost: 5, Quantity: 100}, | ||||
| 		{ItemName: "cars", PerItemCost: 15000, Quantity: 100}, | ||||
| 		{ItemName: "computers", PerItemCost: 500, Quantity: 100}, | ||||
| 	} | ||||
| 	if err := restockInventory(daprClient, inventory); err != nil { | ||||
| 		log.Fatalf("failed to restock: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("==========Begin the purchase of item:==========") | ||||
| 
 | ||||
| 	itemName := defaultItemName | ||||
| 	orderQuantity := 10 | ||||
| 
 | ||||
| 	totalCost := inventory[1].PerItemCost * orderQuantity | ||||
| 
 | ||||
| 	orderPayload := OrderPayload{ | ||||
| 		ItemName:  itemName, | ||||
| 		Quantity:  orderQuantity, | ||||
| 		TotalCost: totalCost, | ||||
| 	} | ||||
| 
 | ||||
|   // Start workflow events, like receiving order, verifying inventory, and processing payment  | ||||
| 	id, err := wfClient.ScheduleNewWorkflow(context.Background(), workflowName, workflow.WithInput(orderPayload)) | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to start workflow: %v", err) | ||||
| 	} | ||||
| 
 | ||||
| 	// ... | ||||
| 
 | ||||
|   // Notification that workflow has completed or failed | ||||
| 	for { | ||||
| 		timeDelta := time.Since(startTime) | ||||
| 		metadata, err := wfClient.FetchWorkflowMetadata(context.Background(), id) | ||||
| 		if err != nil { | ||||
| 			log.Fatalf("failed to fetch workflow: %v", err) | ||||
| 		} | ||||
| 		if (metadata.RuntimeStatus == workflow.StatusCompleted) || (metadata.RuntimeStatus == workflow.StatusFailed) || (metadata.RuntimeStatus == workflow.StatusTerminated) { | ||||
| 			fmt.Printf("Workflow completed - result: %v\n", metadata.RuntimeStatus.String()) | ||||
| 			break | ||||
| 		} | ||||
| 		if timeDelta.Seconds() >= 10 { | ||||
| 			metadata, err := wfClient.FetchWorkflowMetadata(context.Background(), id) | ||||
| 			if err != nil { | ||||
| 				log.Fatalf("failed to fetch workflow: %v", err) | ||||
| 			} | ||||
| 			if totalCost > 50000 && !approvalSought && (metadata.RuntimeStatus != workflow.StatusCompleted) && (metadata.RuntimeStatus != workflow.StatusFailed) && (metadata.RuntimeStatus != workflow.StatusTerminated) { | ||||
| 				approvalSought = true | ||||
| 				promptForApproval(id) | ||||
| 			} | ||||
| 		} | ||||
| 		// Sleep to not DoS the dapr dev instance | ||||
| 		time.Sleep(time.Second) | ||||
| 	} | ||||
| 
 | ||||
| 	fmt.Println("Purchase of item is complete") | ||||
| } | ||||
| 
 | ||||
| // Request approval (RequestApprovalActivity) | ||||
| func promptForApproval(id string) { | ||||
| 	wfClient, err := workflow.NewClient() | ||||
| 	if err != nil { | ||||
| 		log.Fatalf("failed to initialise wfClient: %v", err) | ||||
| 	} | ||||
| 	if err := wfClient.RaiseEvent(context.Background(), id, "manager_approval"); err != nil { | ||||
| 		log.Fatal(err) | ||||
| 	} | ||||
| } | ||||
| 
 | ||||
| // Update inventory for remaining stock (UpdateInventoryActivity) | ||||
| func restockInventory(daprClient client.Client, inventory []InventoryItem) error { | ||||
| 	for _, item := range inventory { | ||||
| 		itemSerialized, err := json.Marshal(item) | ||||
| 		if err != nil { | ||||
| 			return err | ||||
| 		} | ||||
| 		fmt.Printf("adding base stock item: %s\n", item.ItemName) | ||||
| 		if err := daprClient.SaveState(context.Background(), stateStoreName, item.ItemName, itemSerialized, nil); err != nil { | ||||
| 			return err | ||||
| 		} | ||||
| 	} | ||||
| 	return nil | ||||
| } | ||||
| ``` | ||||
| 
 | ||||
| Meanwhile, the `OrderProcessingWorkflow` and its activities are defined in [`workflow.go`](https://github.com/dapr/quickstarts/blob/master/workflows/go/sdk/order-processor/workflow.go). | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| ## Tell us what you think! | ||||
|  |  | |||
|  | @ -62,6 +62,12 @@ A component may skip the Beta stage and conformance test requirement per the dis | |||
| - The component has been available as Alpha or Beta for at least 1 minor version release of Dapr runtime prior | ||||
| - A maintainer will address component security, core functionality and test issues according to the Dapr support policy and issue a patch release that includes the patched stable component | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| The Stable designation for a Dapr component is based on Dapr certification and conformance tests; it is not a guarantee of support by any specific vendor whose SDK is used as part of the component. | ||||
| 
 | ||||
| Dapr component tests guarantee the stability of a component independently of a third-party vendor's declared stability status for any SDKs used, because the meaning of stability levels (for example, alpha, beta, stable) can vary by vendor. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ### Previous Generally Available (GA) components | ||||
| 
 | ||||
| Any component that was previously certified as GA is allowed into Stable even if the new requirements are not met. | ||||
|  |  | |||
|  | @ -6,27 +6,45 @@ weight: 300 | |||
| description: "Updating deployed components used by applications" | ||||
| --- | ||||
| 
 | ||||
| When making an update to an existing deployed component used by an application, Dapr does not update the component automatically unless the `HotReload` feature gate is enabled. | ||||
| When making an update to an existing deployed component used by an application, Dapr does not update the component automatically unless the [`HotReload`](#hot-reloading-preview-feature) feature gate is enabled. | ||||
| The Dapr sidecar needs to be restarted in order to pick up the latest version of the component. | ||||
| How this is done depends on the hosting environment. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Dapr can be made to "hot reload" components, where updates are picked up automatically without needing a restart. | ||||
| This is enabled via the [`HotReload` feature gate]({{< ref "support-preview-features.md" >}}). | ||||
| All component types are supported for hot reloading. | ||||
| This feature is currently in preview. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Kubernetes | ||||
| ### Kubernetes | ||||
| 
 | ||||
| When running in Kubernetes, the process of updating a component involves two steps: | ||||
| 
 | ||||
| 1. Apply the new component YAML to the desired namespace | ||||
| 1. Unless the [`HotReload` feature gate is enabled]({{< ref "support-preview-features.md" >}}), perform a [rollout restart operation](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources) on your deployments to pick up the latest component | ||||
| 1. Unless the [`HotReload` feature gate is enabled](#hot-reloading-preview-feature), perform a [rollout restart operation](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#updating-resources) on your deployments to pick up the latest component | ||||
| 
 | ||||
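| As a concrete sketch of those two steps (the file, namespace, and deployment names are placeholders): | ||||
| 
 | ||||
| ```bash | ||||
| # 1. Apply the updated component manifest to the namespace | ||||
| kubectl apply -f ./statestore.yaml -n my-namespace | ||||
| 
 | ||||
| # 2. Restart the deployments using the component so their sidecars reload it | ||||
| kubectl rollout restart deployment/my-app -n my-namespace | ||||
| ``` | ||||
| 
 | ||||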
| ## Self Hosted | ||||
| ### Self Hosted | ||||
| 
 | ||||
| Unless the [`HotReload` feature gate is enabled]({{< ref "support-preview-features.md" >}}), the process of updating a component involves a single step of stopping and restarting the `daprd` process to pick up the latest component. | ||||
| Unless the [`HotReload` feature gate is enabled](#hot-reloading-preview-feature), the process of updating a component involves a single step of stopping and restarting the `daprd` process to pick up the latest component. | ||||
| 
 | ||||
| ## Hot Reloading (Preview Feature) | ||||
| 
 | ||||
| > This feature is currently in [preview]({{< ref "preview-features.md" >}}). | ||||
| > Hot reloading is enabled via the [`HotReload` feature gate]({{< ref "support-preview-features.md" >}}). | ||||
| 
 | ||||
| Dapr can be made to "hot reload" components whereby component updates are picked up automatically without the need to restart the Dapr sidecar process or Kubernetes pod. | ||||
| This means creating, updating, or deleting a component manifest will be reflected in the Dapr sidecar during runtime. | ||||
| 
 | ||||
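| For reference, preview features such as `HotReload` are switched on through the Configuration resource; a minimal sketch (the resource name is illustrative): | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration | ||||
| metadata: | ||||
|   name: featureconfig | ||||
| spec: | ||||
|   features: | ||||
|   - name: HotReload | ||||
|     enabled: true | ||||
| ``` | ||||
| 
 | ||||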
| {{% alert title="Updating Components" color="warning" %}} | ||||
| When a component is updated, it is first closed and then re-initialized with the new configuration. | ||||
| The component is briefly unavailable while this happens. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| {{% alert title="Initialization Errors" color="warning" %}} | ||||
| If initialization fails when a component is created or updated through hot reloading, the Dapr sidecar respects the component field [`spec.ignoreErrors`]({{< ref component-schema.md>}}). | ||||
| That is, the behavior is the same as when the sidecar loads components on boot. | ||||
| - `spec.ignoreErrors=false` (*default*): the sidecar gracefully shuts down. | ||||
| - `spec.ignoreErrors=true`: the sidecar continues to run with neither the old nor the new component configuration registered. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
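| A fragment of a component manifest opting into the lenient behavior (the component type and metadata shown are just an example): | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: statestore | ||||
| spec: | ||||
|   type: state.redis | ||||
|   version: v1 | ||||
|   ignoreErrors: true # the sidecar keeps running even if this component fails to initialize | ||||
|   metadata: | ||||
|   - name: redisHost | ||||
|     value: localhost:6379 | ||||
| ``` | ||||
| 
 | ||||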
| All component types are supported for hot reloading except for the following. | ||||
| Any create, update, or delete of these component types is ignored by the sidecar; a restart is required to pick up changes: | ||||
| - [Actor State Stores]({{< ref "state_api.md#configuring-state-store-for-actors" >}}) | ||||
| - [Workflow Backends]({{< ref "workflow-architecture.md#workflow-backend" >}}) | ||||
| 
 | ||||
| ## Further reading | ||||
| - [Components concept]({{< ref components-concept.md >}}) | ||||
|  |  | |||
|  | @ -50,6 +50,7 @@ The following configuration settings can be applied to Dapr application sidecars | |||
| - [Metrics](#metrics) | ||||
| - [Logging](#logging) | ||||
| - [Middleware](#middleware) | ||||
| - [Name resolution](#name-resolution) | ||||
| - [Scope secret store access](#scope-secret-store-access) | ||||
| - [Access Control allow lists for building block APIs](#access-control-allow-lists-for-building-block-apis) | ||||
| - [Access Control allow lists for service invocation API](#access-control-allow-lists-for-service-invocation-api) | ||||
|  | @ -106,21 +107,27 @@ The `metrics` section under the `Configuration` spec contains the following prop | |||
| ```yml | ||||
| metrics: | ||||
|   enabled: true | ||||
|   rules: [] | ||||
|   http: | ||||
|     increasedCardinality: true | ||||
| ``` | ||||
| 
 | ||||
| The following table lists the properties for metrics: | ||||
| 
 | ||||
| | Property     | Type   | Description | | ||||
| |--------------|--------|-------------| | ||||
| | `enabled` | boolean | Whether metrics should to be enabled. | | ||||
| | `rules`   | boolean | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a`regex`expression to apply to the metrics path. | | ||||
| | `enabled` | boolean | When set to true (the default), enables metrics collection and the metrics endpoint. | | ||||
| | `rules`   | array | Named rule to filter metrics. Each rule contains a set of `labels` to filter on and a `regex` expression to apply to the metrics path. | | ||||
| | `http.increasedCardinality` | boolean | When set to true, in the Dapr HTTP server each request path causes the creation of a new "bucket" of metrics. This can cause issues, including excessive memory consumption, when there are many different requested endpoints (such as when interacting with RESTful APIs).<br>In Dapr 1.13 the default value is `true` (to preserve the behavior of Dapr <= 1.12), but will change to `false` in Dapr 1.14. | | ||||
| 
 | ||||
| To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}), you can set regular expressions for every metric exposed by the Dapr sidecar. For example: | ||||
| To mitigate high memory usage and egress costs associated with [high cardinality metrics]({{< ref "metrics-overview.md#high-cardinality-metrics" >}}) with the HTTP server, you should set the `metrics.http.increasedCardinality` property to `false`. | ||||
| 
 | ||||
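| For example, a Configuration resource that keeps metrics enabled while opting out of per-path HTTP metrics (the resource name is illustrative): | ||||
| 
 | ||||
| ```yml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration | ||||
| metadata: | ||||
|   name: appconfig | ||||
| spec: | ||||
|   metrics: | ||||
|     enabled: true | ||||
|     http: | ||||
|       increasedCardinality: false | ||||
| ``` | ||||
| 
 | ||||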
| Using rules, you can set regular expressions for every metric exposed by the Dapr sidecar. For example: | ||||
| 
 | ||||
| ```yml | ||||
| metric: | ||||
|     enabled: true | ||||
|     rules: | ||||
| metrics: | ||||
|   enabled: true | ||||
|   rules: | ||||
|     - name: dapr_runtime_service_invocation_req_sent_total | ||||
|       labels: | ||||
|       - name: method | ||||
|  | @ -183,6 +190,29 @@ The following table lists the properties for HTTP handlers: | |||
| 
 | ||||
| See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information | ||||
| 
 | ||||
| #### Name resolution component | ||||
| 
 | ||||
| You can set the name resolution component to use within the configuration YAML. For example, to use the `sqlite` name resolution component, set the `spec.nameResolution.component` property to `"sqlite"` and pass configuration options in the `spec.nameResolution.configuration` dictionary, as shown below. | ||||
| 
 | ||||
| Here is a basic example of a Configuration resource: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration  | ||||
| metadata: | ||||
|   name: appconfig | ||||
| spec: | ||||
|   nameResolution: | ||||
|     component: "sqlite" | ||||
|     version: "v1" | ||||
|     configuration: | ||||
|       connectionString: "/home/user/.dapr/nr.db" | ||||
| ``` | ||||
| 
 | ||||
| For more information, see: | ||||
| - [The name resolution component documentation]({{< ref supported-name-resolution >}}) for more examples. | ||||
| - [The Configuration YAML documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component. | ||||
| 
 | ||||
| #### Scope secret store access | ||||
| 
 | ||||
| See the [Scoping secrets]({{< ref "secret-scope.md" >}}) guide for information and examples on how to scope secrets to an application. | ||||
|  | @ -288,6 +318,9 @@ The `mtls` section contains properties for mTLS. | |||
| | `enabled`          | bool   | If true, enables mTLS for communication between services and apps in the cluster. | ||||
| | `allowedClockSkew` | string | Allowed tolerance when checking the expiration of TLS certificates, to allow for clock skew. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `15m` (15 minutes). | ||||
| | `workloadCertTTL`  | string | How long a certificate TLS issued by Dapr is valid for. Follows the format used by [Go's time.ParseDuration](https://pkg.go.dev/time#ParseDuration). Default is `24h` (24 hours). | ||||
| | `sentryAddress`  | string | Hostname port address for connecting to the Sentry server. | | ||||
| | `controlPlaneTrustDomain` | string | Trust domain for the control plane. This is used to verify connection to control plane services. | | ||||
| | `tokenValidators` | array | Additional Sentry token validators to use for authenticating certificate requests. | | ||||
| 
 | ||||
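| Putting these together, an illustrative `mtls` section using the defaults described in the table above: | ||||
| 
 | ||||
| ```yml | ||||
| spec: | ||||
|   mtls: | ||||
|     enabled: true | ||||
|     allowedClockSkew: 15m | ||||
|     workloadCertTTL: 24h | ||||
| ``` | ||||
| 
 | ||||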
| See the [mTLS how-to]({{< ref "mtls.md" >}}) and [security concepts]({{< ref "security-concept.md" >}}) for more information. | ||||
| 
 | ||||
|  |  | |||
|  | @ -72,9 +72,54 @@ The `-k` flag initializes Dapr on the Kubernetes cluster in your current context | |||
|     dapr dashboard -k -n <your-namespace> | ||||
|     ``` | ||||
| 
 | ||||
| 
 | ||||
| #### Install Dapr from the official Dapr Helm chart (with development flag) | ||||
| 
 | ||||
| Adding the `--dev` flag initializes Dapr on the Kubernetes cluster in your current context, with the addition of Redis and Zipkin deployments. | ||||
| 
 | ||||
| The steps are similar to [installing from the Dapr Helm chart](#install-dapr-from-an-official-dapr-helm-chart), except for appending the `--dev` flag to the `init` command: | ||||
| 
 | ||||
| ```bash | ||||
| dapr init -k --dev | ||||
| ``` | ||||
| 
 | ||||
| Expected output: | ||||
| 
 | ||||
| ```bash | ||||
| ⌛  Making the jump to hyperspace... | ||||
| ℹ️  Note: To install Dapr using Helm, see here: https://docs.dapr.io/getting-started/install-dapr-kubernetes/#install-with-helm-advanced | ||||
| 
 | ||||
| ℹ️  Container images will be pulled from Docker Hub | ||||
| ✅  Deploying the Dapr control plane with latest version to your cluster... | ||||
| ✅  Deploying the Dapr dashboard with latest version to your cluster... | ||||
| ✅  Deploying the Dapr Redis with latest version to your cluster... | ||||
| ✅  Deploying the Dapr Zipkin with latest version to your cluster... | ||||
| ℹ️  Applying "statestore" component to Kubernetes "default" namespace. | ||||
| ℹ️  Applying "pubsub" component to Kubernetes "default" namespace. | ||||
| ℹ️  Applying "appconfig" zipkin configuration to Kubernetes "default" namespace. | ||||
| ✅  Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k' in your terminal. To get started, go here: https://aka.ms/dapr-getting-started | ||||
|  ``` | ||||
| 
 | ||||
| After a short period of time (or using the `--wait` flag and specifying an amount of time to wait), you can check that the Redis and Zipkin components have been deployed to the cluster. | ||||
| 
 | ||||
| ```bash | ||||
| kubectl get pods --namespace default | ||||
| ``` | ||||
| 
 | ||||
| Expected output: | ||||
| 
 | ||||
| ```bash | ||||
| NAME                              READY   STATUS    RESTARTS   AGE | ||||
| dapr-dev-zipkin-bfb4b45bb-sttz7   1/1     Running   0          159m | ||||
| dapr-dev-redis-master-0           1/1     Running   0          159m | ||||
| dapr-dev-redis-replicas-0         1/1     Running   0          159m | ||||
| dapr-dev-redis-replicas-1         1/1     Running   0          159m | ||||
| dapr-dev-redis-replicas-2         1/1     Running   0          158m  | ||||
|  ``` | ||||
| 
 | ||||
| #### Install Dapr from a private Dapr Helm chart | ||||
| 
 | ||||
| Installing Dapr from a private Helm chart can be helpful for when you: | ||||
| Installing [Dapr from a private Helm chart](#install-dapr-from-an-official-dapr-helm-chart) can be helpful for when you: | ||||
| - Need more granular control of the Dapr Helm chart | ||||
| - Have a custom Dapr deployment | ||||
| - Pull Helm charts from trusted registries that are managed and maintained by your organization | ||||
|  |  | |||
|  | @ -3,22 +3,24 @@ type: docs | |||
| title: "Configure metrics" | ||||
| linkTitle: "Overview" | ||||
| weight: 4000 | ||||
| description: "Enable or disable Dapr metrics " | ||||
| description: "Enable or disable Dapr metrics" | ||||
| --- | ||||
| 
 | ||||
| By default, each Dapr system process emits Go runtime/process metrics and has their own [Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md). | ||||
| 
 | ||||
| ## Prometheus endpoint | ||||
| The Dapr sidecars exposes a [Prometheus](https://prometheus.io/) metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving. | ||||
| 
 | ||||
| The Dapr sidecar exposes a [Prometheus](https://prometheus.io/)-compatible metrics endpoint that you can scrape to gain a greater understanding of how Dapr is behaving. | ||||
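| 
 | ||||
| For example, with a sidecar running locally on the default metrics port (see below), you can inspect the raw metrics on the standard Prometheus `/metrics` path: | ||||
| 
 | ||||
| ```bash | ||||
| # Scrape the sidecar's metrics endpoint (default port 9090) | ||||
| curl http://localhost:9090/metrics | ||||
| ``` | ||||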
| 
 | ||||
| ## Configuring metrics using the CLI | ||||
| 
 | ||||
| The metrics application endpoint is enabled by default. You can disable it by passing the command line argument `--enable-metrics=false`. | ||||
| 
 | ||||
| The default metrics port is `9090`. You can override this by passing the command line argument `--metrics-port` to Daprd.  | ||||
| The default metrics port is `9090`. You can override this by passing the command line argument `--metrics-port` to daprd. | ||||
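| 
 | ||||
| For example, when running an application locally with the Dapr CLI (which passes this flag through to daprd), a sketch with a placeholder app ID and app command: | ||||
| 
 | ||||
| ```bash | ||||
| # Run an app with the sidecar metrics server on port 9091 instead of the default 9090 | ||||
| dapr run --app-id myapp --metrics-port 9091 -- python3 app.py | ||||
| ``` | ||||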
| 
 | ||||
| ## Configuring metrics in Kubernetes | ||||
| You can also enable/disable the metrics for a specific application by setting the `dapr.io/enable-metrics: "false"` annotation on your application deployment. With the metrics exporter disabled, `daprd` does not open the metrics listening port. | ||||
| 
 | ||||
| You can also enable/disable the metrics for a specific application by setting the `dapr.io/enable-metrics: "false"` annotation on your application deployment. With the metrics exporter disabled, daprd does not open the metrics listening port. | ||||
| 
 | ||||
| The following Kubernetes deployment example shows how metrics are explicitly enabled with the port specified as "9090". | ||||
| 
 | ||||
|  | @ -54,10 +56,8 @@ spec: | |||
| ``` | ||||
| 
 | ||||
| ## Configuring metrics using application configuration | ||||
| You can also enable metrics via application configuration. To disable the metrics collection in the Dapr sidecars running in a specific namespace: | ||||
| 
 | ||||
| - Use the `metrics` spec configuration. | ||||
| - Set `enabled: false` to disable the metrics in the Dapr runtime. | ||||
| You can also enable metrics via application configuration. To disable the metrics collection in the Dapr sidecars by default, set `spec.metrics.enabled` to `false`. | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
|  | @ -66,17 +66,25 @@ metadata: | |||
|   name: tracing | ||||
|   namespace: default | ||||
| spec: | ||||
|   tracing: | ||||
|     samplingRate: "1" | ||||
|   metrics: | ||||
|     enabled: false | ||||
| ``` | ||||
| 
 | ||||
| ## High cardinality metrics | ||||
| 
 | ||||
| Depending on your use case, some metrics emitted by Dapr might contain values that have a high cardinality. This might cause increased memory usage for the Dapr process/container and incur expensive egress costs in certain cloud environments. To mitigate this issue, you can set regular expressions for every metric exposed by the Dapr sidecar. [See a list of all Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md). | ||||
| When invoking Dapr using HTTP, the legacy behavior (and current default as of Dapr 1.13) is to create a separate "bucket" for each requested method. When working with RESTful APIs, this can cause very high cardinality, with potential negative impact on memory usage and CPU. | ||||
| 
 | ||||
| The following example shows how to apply a regular expression for the label `method` in the metric `dapr_runtime_service_invocation_req_sent_total`: | ||||
| Dapr 1.13 introduces a new option for the Dapr Configuration resource `spec.metrics.http.increasedCardinality`: when set to `false`, it reports metrics for the HTTP server for each "abstract" method (for example, requesting from a state store) instead of creating a "bucket" for each concrete request path. | ||||
| 
 | ||||
| The default value of `spec.metrics.http.increasedCardinality` is `true` in Dapr 1.13, to maintain the same behavior as Dapr 1.12 and older. However, the value will change to `false` (low-cardinality metrics by default) in Dapr 1.14. | ||||
| 
 | ||||
| Setting `spec.metrics.http.increasedCardinality` to `false` is **recommended** for all Dapr users, to reduce resource consumption. The pre-1.13 behavior, which is used when the option is `true`, is considered legacy and is only maintained for users who have special requirements around backwards compatibility. | ||||
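| 
 | ||||
| For example, a minimal Configuration resource that opts into low-cardinality HTTP metrics: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration | ||||
| metadata: | ||||
|   name: daprConfig | ||||
| spec: | ||||
|   metrics: | ||||
|     enabled: true | ||||
|     http: | ||||
|       increasedCardinality: false | ||||
| ``` | ||||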
| 
 | ||||
| ## Transform metrics with regular expressions | ||||
| 
 | ||||
| You can set regular expressions for the metrics exposed by the Dapr sidecar to "transform" their values. [See a list of all Dapr metrics](https://github.com/dapr/dapr/blob/master/docs/development/dapr-metrics.md). | ||||
| 
 | ||||
| The name of the rule must match the name of the metric that is transformed. The following example shows how to apply a regular expression for the label `method` in the metric `dapr_runtime_service_invocation_req_sent_total`: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
|  | @ -84,9 +92,11 @@ kind: Configuration | |||
| metadata: | ||||
|   name: daprConfig | ||||
| spec: | ||||
|   metric: | ||||
|       enabled: true | ||||
|       rules: | ||||
|   metrics: | ||||
|     enabled: true | ||||
|     http: | ||||
|       increasedCardinality: true | ||||
|     rules: | ||||
|       - name: dapr_runtime_service_invocation_req_sent_total | ||||
|         labels: | ||||
|         - name: method | ||||
|  | @ -94,14 +104,9 @@ spec: | |||
|             "orders/": "orders/.+" | ||||
| ``` | ||||
| 
 | ||||
| When this configuration is applied, a recorded metric with the `method` label of `orders/a746dhsk293972nz` will be replaced with `orders/`. | ||||
| 
 | ||||
| ### Watch the demo | ||||
| 
 | ||||
| Watch [this video to walk through handling high cardinality metrics](https://youtu.be/pOT8teL6j_k?t=1524): | ||||
| 
 | ||||
| <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/pOT8teL6j_k?start=1524" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> | ||||
| When this configuration is applied, a recorded metric with the `method` label of `orders/a746dhsk293972nz` is replaced with `orders/`. | ||||
| 
 | ||||
| Using regular expressions to reduce metrics cardinality is considered legacy. We encourage all users to set `spec.metrics.http.increasedCardinality` to `false` instead, which is simpler to configure and offers better performance. | ||||
| 
 | ||||
| ## References | ||||
| 
 | ||||
|  |  | |||
|  | @ -8,18 +8,50 @@ description: Dapr sidecar health checks | |||
| 
 | ||||
| Dapr provides a way to determine its health using an [HTTP `/healthz` endpoint]({{< ref health_api.md >}}). With this endpoint, the *daprd* process, or sidecar, can be: | ||||
| 
 | ||||
| - Probed for its health | ||||
| - Determined for readiness and liveness | ||||
| - Probed for its overall health | ||||
| - Probed for Dapr sidecar readiness during initialization | ||||
| - Determined for readiness and liveness with Kubernetes | ||||
| 
 | ||||
| In this guide, you learn how the Dapr `/healthz` endpoint integrate with health probes from the application hosting platform (for example, Kubernetes).  | ||||
| 
 | ||||
| When deploying Dapr to a hosting platform like Kubernetes, the Dapr health endpoint is automatically configured for you. | ||||
| In this guide, you learn how the Dapr `/healthz` endpoint integrates with health probes from the application hosting platform (for example, Kubernetes) as well as the Dapr SDKs.  | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Dapr actors also have a health API endpoint, where Dapr probes the application for a response signaling that the actor application is healthy and running. See [actor health API]({{< ref "actors_api.md#health-check" >}}). | ||||
| {{% /alert %}}  | ||||
| 
 | ||||
| The following diagram shows the steps when a Dapr sidecar starts, the `healthz` endpoint, and when the app channel is initialized. | ||||
| 
 | ||||
| <img src="/images/healthz-outbound.png" width="800" alt="Diagram of Dapr checking oubound health connections." /> | ||||
| 
 | ||||
| ## Outbound health endpoint | ||||
| 
 | ||||
| As shown by the red boundary lines in the diagram above, the `v1.0/healthz/` endpoint is used to wait until: | ||||
| - All components are initialized; | ||||
| - The Dapr HTTP port is available; _and,_ | ||||
| - The app channel is initialized.  | ||||
| 
 | ||||
| This is used to check the complete initialization of the Dapr sidecar and its health.  | ||||
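| 
 | ||||
| For example, assuming the default Dapr HTTP port of `3500`, you can probe this endpoint directly: | ||||
| 
 | ||||
| ```bash | ||||
| curl -i http://localhost:3500/v1.0/healthz | ||||
| ``` | ||||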
| 
 | ||||
| Setting the `DAPR_HEALTH_TIMEOUT` environment variable lets you control the health timeout, which, for example, can be important in different environments with higher latency. | ||||
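| 
 | ||||
| A minimal sketch, assuming a value in seconds (the exact interpretation of the value depends on the SDK; check your SDK's documentation): | ||||
| 
 | ||||
| ```bash | ||||
| # Allow up to 120 seconds for the sidecar health check (illustrative value) | ||||
| export DAPR_HEALTH_TIMEOUT=120 | ||||
| ``` | ||||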
| 
 | ||||
| On the other hand, as shown by the green boundary lines in the diagram above, the `v1.0/healthz/outbound` endpoint returns successfully when: | ||||
| - All the components are initialized; | ||||
| - The Dapr HTTP port is available; _but,_  | ||||
| - The app channel is not yet established.  | ||||
| 
 | ||||
| In the Dapr SDKs, the `waitForSidecar`/`wait_until_ready` method (depending on [which SDK you use]({{< ref "#sdks-supporting-outbound-health-endpoint" >}})) is used for this specific check with the `v1.0/healthz/outbound` endpoint. Using this behavior, instead of waiting for the app channel to be available (see: red boundary lines) with the `v1.0/healthz/` endpoint, Dapr waits for a successful response from `v1.0/healthz/outbound`. This approach enables your application to perform calls on the Dapr sidecar APIs before the app channel is initialized - for example, reading secrets with the secrets API. | ||||
| 
 | ||||
| If you are using the `waitForSidecar`/`wait_until_ready` method on the SDKs, then the correct initialization is performed. Otherwise, you can call the `v1.0/healthz/outbound` endpoint during initialization, and if successful, you can call the Dapr sidecar APIs. | ||||
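| 
 | ||||
| For example, assuming the default Dapr HTTP port of `3500`: | ||||
| 
 | ||||
| ```bash | ||||
| curl -i http://localhost:3500/v1.0/healthz/outbound | ||||
| ``` | ||||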
| 
 | ||||
| ### SDKs supporting outbound health endpoint | ||||
| Currently, the `v1.0/healthz/outbound` endpoint is supported in the: | ||||
| - [.NET SDK]({{< ref "dotnet-client.md#wait-for-sidecar" >}}) | ||||
| - [Java SDK]({{< ref "java-client.md#wait-for-sidecar" >}}) | ||||
| - [Python SDK]({{< ref "python-client.md#health-timeout" >}}) | ||||
| - [JavaScript SDK](https://github.com/dapr/js-sdk/blob/4189a3d2ad6897406abd766f4ccbf2300c8f8852/src/interfaces/Client/IClientHealth.ts#L14) | ||||
| 
 | ||||
| 
 | ||||
| ## Health endpoint: Integration with Kubernetes | ||||
| When deploying Dapr to a hosting platform like Kubernetes, the Dapr health endpoint is automatically configured for you. | ||||
| 
 | ||||
| Kubernetes uses *readiness* and *liveness* probes to determine the health of the container. | ||||
| 
 | ||||
|  |  | |||
|  | @ -88,7 +88,11 @@ kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name | |||
| 
 | ||||
| ## Adding API token to client API invocations | ||||
| 
 | ||||
| Once token authentication is configured in Dapr, all clients invoking Dapr API will have to append the API token token to every request: | ||||
| Once token authentication is configured in Dapr, all clients invoking the Dapr API need to append the `dapr-api-token` token to every request.  | ||||
| 
 | ||||
| > **Note:** The Dapr SDKs read the [DAPR_API_TOKEN]({{< ref environment >}}) environment variable and set it for you by default. | ||||
| 
 | ||||
| <img src="/images/tokens-auth.png" width=800 style="padding-bottom:15px;"> | ||||
| 
 | ||||
| ### HTTP | ||||
| 
 | ||||
|  |  | |||
|  | @ -89,11 +89,13 @@ kubectl rollout restart deployment/<deployment-name> --namespace <namespace-name | |||
| 
 | ||||
| ## Authenticating requests from Dapr | ||||
| 
 | ||||
| Once app token authentication is configured in Dapr, all requests *coming from Dapr* include the token. | ||||
| Once app token authentication is configured using the environment variable or Kubernetes secret `app-api-token`, the Dapr sidecar always includes the HTTP header/gRPC metadata `dapr-api-token: <token>` in the calls to the app. On the app side, authenticate requests from Dapr by checking that the `dapr-api-token` value matches the `app-api-token` value you set. | ||||
| 
 | ||||
| <img src="/images/tokens-auth.png" width=800 style="padding-bottom:15px;"> | ||||
| 
 | ||||
| ### HTTP | ||||
| 
 | ||||
| In case of HTTP, in your code look for the HTTP header `dapr-api-token` in incoming requests: | ||||
| In your code, look for the HTTP header `dapr-api-token` in incoming requests: | ||||
| 
 | ||||
| ```text | ||||
| dapr-api-token: <token> | ||||
|  |  | |||
|  | @ -14,7 +14,7 @@ For detailed information on mTLS, read the [security concepts section]({{< ref " | |||
| 
 | ||||
| If custom certificates have not been provided, Dapr automatically creates and persists self-signed certs valid for one year. | ||||
| In Kubernetes, the certs are persisted to a secret that resides in the namespace of the Dapr system pods, accessible only to them. | ||||
| In self hosted mode, the certs are persisted to disk. | ||||
| In self-hosted mode, the certs are persisted to disk. | ||||
| 
 | ||||
| ## Control plane Sentry service configuration | ||||
| The mTLS settings reside in a Dapr control plane configuration file. For example, when you deploy the Dapr control plane to Kubernetes, this configuration file is automatically created and you can then edit it. The following file shows the available settings for mTLS in a configuration resource, deployed in the `daprsystem` namespace: | ||||
|  | @ -32,7 +32,7 @@ spec: | |||
|     allowedClockSkew: "15m" | ||||
| ``` | ||||
| 
 | ||||
| The file here shows the default `daprsystem` configuration settings. The examples below show you how to change and apply this configuration to the control plane Sentry service in either Kubernetes or self-hosted mode. | ||||
| The file here shows the default `daprsystem` configuration settings. The examples below show you how to change and apply this configuration to the control plane Sentry service either in Kubernetes and self-hosted modes. | ||||
| 
 | ||||
| ## Kubernetes | ||||
| 
 | ||||
|  | @ -491,3 +491,67 @@ Watch this [video](https://www.youtube.com/watch?v=Hkcx9kBDrAc&feature=youtu.be& | |||
| <div class="embed-responsive embed-responsive-16by9"> | ||||
| <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/Hkcx9kBDrAc?start=1400"></iframe> | ||||
| </div> | ||||
| 
 | ||||
| ## Sentry token validators | ||||
| 
 | ||||
| Tokens are often used for authentication and authorization purposes. | ||||
| Token validators are components responsible for verifying the validity and authenticity of these tokens. | ||||
| For example, in Kubernetes environments, a common approach to token validation is through the Kubernetes bound service account mechanism. | ||||
| This validator checks bound service account tokens against Kubernetes to ensure their legitimacy. | ||||
| 
 | ||||
| Sentry service can be configured to: | ||||
| - Enable extra token validators beyond the Kubernetes bound Service Account validator | ||||
| - Replace the `insecure` validator enabled by default in self-hosted mode | ||||
| 
 | ||||
| Sentry token validators are used for joining extra non-Kubernetes clients to the Dapr cluster running in Kubernetes mode, or to replace the insecure "allow all" validator in self-hosted mode to enable proper identity validation. | ||||
| It is not expected that you will need to configure a token validator unless you have an exotic deployment scenario. | ||||
| 
 | ||||
| > The only token validator currently supported is the `jwks` validator. | ||||
| 
 | ||||
| ### JWKS | ||||
| 
 | ||||
| The `jwks` validator enables the Sentry service to validate JWT tokens using a JWKS endpoint. | ||||
| The contents of the token _must_ include the `sub` claim, which matches the SPIFFE identity of the Dapr client in the Dapr format `spiffe://<trust-domain>/ns/<namespace>/<app-id>`. | ||||
| The audience of the token must be the SPIFFE ID of the Sentry identity. For example, `spiffe://cluster.local/ns/dapr-system/dapr-sentry`. | ||||
| Other basic JWT rules regarding signature, expiry, etc. apply. | ||||
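| 
 | ||||
| For illustration, the relevant claims of a valid token might look like the following sketch, where the trust domain, namespace, app ID, and expiry are placeholders: | ||||
| 
 | ||||
| ```json | ||||
| { | ||||
|   "sub": "spiffe://cluster.local/ns/default/myapp", | ||||
|   "aud": "spiffe://cluster.local/ns/dapr-system/dapr-sentry", | ||||
|   "exp": 1735689600 | ||||
| } | ||||
| ``` | ||||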
| 
 | ||||
| The `jwks` validator can accept either a remote source to fetch the public key list or a static array of public keys. | ||||
| 
 | ||||
| The configuration below enables the `jwks` token validator with a remote source. | ||||
| This remote source uses HTTPS so the `caCertificate` field contains the root of trust for the remote source. | ||||
| 
 | ||||
| ```yaml | ||||
| kind: Configuration | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| metadata: | ||||
|   name: sentryconfig | ||||
| spec: | ||||
|   mtls: | ||||
|     enabled: true | ||||
|     tokenValidators: | ||||
|       - name: jwks | ||||
|         options: | ||||
|           minRefreshInterval: 2m | ||||
|           requestTimeout: 1m | ||||
|           source: "https://localhost:1234/" | ||||
|           caCertificate: "<optional ca certificate bundle string>" | ||||
| ``` | ||||
| 
 | ||||
| The configuration below enables the `jwks` token validator with a static array of public keys. | ||||
| 
 | ||||
| ```yaml | ||||
| kind: Configuration | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| metadata: | ||||
|   name: sentryconfig | ||||
| spec: | ||||
|   mtls: | ||||
|     enabled: true | ||||
|     tokenValidators: | ||||
|       - name: jwks | ||||
|         options: | ||||
|           minRefreshInterval: 2m | ||||
|           requestTimeout: 1m | ||||
|           source: | | ||||
|             {"keys":[ "12345.." ]} | ||||
| ``` | ||||
|  |  | |||
|  | @ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a | |||
| | **Service invocation for non-Dapr endpoints** | Allow the invocation of non-Dapr endpoints by Dapr using the [Service invocation API]({{< ref service_invocation_api.md >}}). Read ["How-To: Invoke Non-Dapr Endpoints using HTTP"]({{< ref howto-invoke-non-dapr-endpoints.md >}}) for more information. | N/A | [Service invocation API]({{< ref service_invocation_api.md >}}) | v1.11  | | ||||
| | **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11  | | ||||
| | **Transactional Outbox** | Allows state operations for inserts and updates to be published to a configured pub/sub topic using a single transaction across the state store and the pub/sub | N/A | [Transactional Outbox Feature]({{< ref howto-outbox.md >}}) | v1.12  | | ||||
| | **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode.| `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13  | | ||||
| | **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13  | | ||||
|  |  | |||
|  | @ -92,6 +92,15 @@ The table below shows the versions of Dapr releases that have been tested togeth | |||
| | Mar 25th 2022 | 1.6.1</br>   | 1.6.0 | Java 1.4.0 </br>Go 1.3.1 </br>PHP 1.1.0 </br>Python 1.5.0 </br>.NET 1.6.0 </br>JS 2.0.0 | 0.9.0 | Unsupported |  | | ||||
| | Jan 25th 2022 | 1.6.0</br>   | 1.6.0 | Java 1.4.0 </br>Go 1.3.1 </br>PHP 1.1.0 </br>Python 1.5.0 </br>.NET 1.6.0 </br>JS 2.0.0 | 0.9.0 | Unsupported |  | | ||||
| 
 | ||||
| ## SDK compatibility | ||||
| The SDKs and runtime are committed to non-breaking changes other than those required for security issues. All breaking changes, if required, are announced in the release notes.  | ||||
| 
 | ||||
| **SDK and runtime forward compatibility**   | ||||
| Newer Dapr SDKs support the latest version of Dapr runtime and two previous versions (N-2).  | ||||
| 
 | ||||
| **SDK and runtime backward compatibility**   | ||||
| For a new Dapr runtime, the current SDK version and two previous versions (N-2) are supported.  | ||||
| 
 | ||||
| ## Upgrade paths | ||||
| 
 | ||||
| After the 1.0 release of the runtime there may be situations where it is necessary to explicitly upgrade through an additional release to reach the desired target. For example, an upgrade from v1.0 to v1.2 may need to pass through v1.1. | ||||
|  |  | |||
|  | @ -6,6 +6,24 @@ weight: 1000 | |||
| description: "Common issues and problems faced when running Dapr applications" | ||||
| --- | ||||
| 
 | ||||
| This guide covers common issues you may encounter while installing and running Dapr. | ||||
| 
 | ||||
| ## Dapr can't connect to Docker when installing the Dapr CLI | ||||
| 
 | ||||
| When installing and initializing the Dapr CLI, if you see the following error message after running `dapr init`: | ||||
| 
 | ||||
| ```bash | ||||
| ⌛  Making the jump to hyperspace... | ||||
| ❌  could not connect to docker. docker may not be installed or running | ||||
| ``` | ||||
| 
 | ||||
| Troubleshoot the error by ensuring: | ||||
| 
 | ||||
| 1. [The correct containers are running.]({{< ref "install-dapr-selfhost.md#step-4-verify-containers-are-running" >}}) | ||||
| 1. In Docker Desktop, verify the **Allow the default Docker socket to be used (requires password)** option is selected. | ||||
| 
 | ||||
|    <img src="/images/docker-desktop-setting.png" width=800 style="padding-bottom:15px;"> | ||||
| 
 | ||||
| ## I don't see the Dapr sidecar injected to my pod | ||||
| 
 | ||||
| There could be several reasons why a sidecar may not be injected into a pod. | ||||
|  |  | |||
|  | @ -6,34 +6,81 @@ description: "Detailed documentation on the health API" | |||
| weight: 1000 | ||||
| --- | ||||
| 
 | ||||
| Dapr provides health checking probes that can be used as readiness or liveness of Dapr. | ||||
| Dapr provides health checking probes that can be used as readiness or liveness checks for Dapr, and for initialization readiness from the SDKs.  | ||||
| 
 | ||||
| ## Get Dapr health state | ||||
| 
 | ||||
| Gets the health state for Dapr. | ||||
| Gets the health state for Dapr by either: | ||||
| - Checking the sidecar health | ||||
| - Checking the sidecar health, including component readiness, used during initialization | ||||
| 
 | ||||
| ### HTTP Request | ||||
| ### Wait for Dapr HTTP port to become available | ||||
| 
 | ||||
| Wait for all components to be initialized, the Dapr HTTP port to be available, _and_ the app channel to be initialized. For example, this endpoint is used with Kubernetes liveness probes. | ||||
| 
 | ||||
| #### HTTP Request | ||||
| 
 | ||||
| ``` | ||||
| GET http://localhost:<daprPort>/v1.0/healthz | ||||
| ``` | ||||
| 
 | ||||
| ### HTTP Response Codes | ||||
| #### HTTP Response Codes | ||||
| 
 | ||||
| Code | Description | ||||
| ---- | ----------- | ||||
| 204  | dapr is healthy | ||||
| 500  | dapr is not healthy | ||||
| 204  | Dapr is healthy | ||||
| 500  | Dapr is not healthy | ||||
| 
 | ||||
| ### URL Parameters | ||||
| #### URL Parameters | ||||
| 
 | ||||
| Parameter | Description | ||||
| --------- | ----------- | ||||
| daprPort | The Dapr port. | ||||
| daprPort | The Dapr port | ||||
| 
 | ||||
| ### Examples | ||||
| #### Examples | ||||
| 
 | ||||
| ```shell | ||||
| curl -i http://localhost:3500/v1.0/healthz | ||||
| ``` | ||||
| 
 | ||||
| ### Wait for specific health check against `/outbound` path | ||||
| 
 | ||||
| Wait for all components to be initialized and the Dapr HTTP port to be available, while the app channel is not yet established. This endpoint enables your application to perform calls on the Dapr sidecar APIs before the app channel is initialized, for example reading secrets with the secrets API. The Dapr SDKs use this endpoint in their `waitForSidecar`/`wait_until_ready` methods (for example, in the .NET and Java SDKs) to check that the sidecar is initialized and ready for calls. | ||||
| 
 | ||||
| For example, the [Java SDK]({{< ref "java-client.md#wait-for-sidecar" >}}) and [the .NET SDK]({{< ref "dotnet-client.md#wait-for-sidecar" >}}) use this endpoint for initialization.  | ||||
| 
 | ||||
| Currently, the `v1.0/healthz/outbound` endpoint is supported in the: | ||||
| - [.NET SDK]({{< ref "dotnet-client.md#wait-for-sidecar" >}}) | ||||
| - [Java SDK]({{< ref "java-client.md#wait-for-sidecar" >}}) | ||||
| - [Python SDK]({{< ref "python-client.md#health-timeout" >}}) | ||||
| - [JavaScript SDK](https://github.com/dapr/js-sdk/blob/4189a3d2ad6897406abd766f4ccbf2300c8f8852/src/interfaces/Client/IClientHealth.ts#L14) | ||||
| 
 | ||||
| #### HTTP Request | ||||
| 
 | ||||
| ``` | ||||
| GET http://localhost:<daprPort>/v1.0/healthz/outbound | ||||
| ``` | ||||
| 
 | ||||
| #### HTTP Response Codes | ||||
| 
 | ||||
| Code | Description | ||||
| ---- | ----------- | ||||
| 204  | Dapr is healthy | ||||
| 500  | Dapr is not healthy | ||||
| 
 | ||||
| #### URL Parameters | ||||
| 
 | ||||
| Parameter | Description | ||||
| --------- | ----------- | ||||
| daprPort | The Dapr port | ||||
|   | ||||
| #### Examples | ||||
| 
 | ||||
| ```shell | ||||
| curl -i http://localhost:3500/v1.0/healthz/outbound | ||||
| ``` | ||||
| 
 | ||||
| ## Related articles | ||||
| 
 | ||||
| - [Sidecar health]({{< ref "sidecar-health.md" >}}) | ||||
| - [App health]({{< ref "app-health.md" >}}) | ||||
|  |  | |||
|  | @ -60,6 +60,13 @@ Terminate a running workflow instance with the given name and instance ID. | |||
| POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/terminate | ||||
| ``` | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
|  Terminating a workflow terminates all of the child workflows created by the workflow instance. | ||||
| 
 | ||||
| Terminating a workflow has no effect on any in-flight activity executions that were started by the terminated instance.  | ||||
| 
 | ||||
| {{% /alert %}} | ||||
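| 
 | ||||
| For example, using the built-in `dapr` workflow component and an illustrative instance ID: | ||||
| 
 | ||||
| ```bash | ||||
| curl -X POST http://localhost:3500/v1.0-beta1/workflows/dapr/12345678/terminate | ||||
| ``` | ||||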
| 
 | ||||
| ### URL parameters | ||||
| 
 | ||||
| Parameter | Description | ||||
|  | @ -174,6 +181,10 @@ Purge the workflow state from your state store with the workflow's instance ID. | |||
| POST http://localhost:3500/v1.0-beta1/workflows/<workflowComponentName>/<instanceId>/purge | ||||
| ``` | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Only `COMPLETED`, `FAILED`, or `TERMINATED` workflows can be purged. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ### URL parameters | ||||
| 
 | ||||
| Parameter | Description | ||||
|  | @ -235,7 +246,7 @@ The API call will provide a JSON response similar to this: | |||
| 
 | ||||
| Parameter | Description | ||||
| --------- | ----------- | ||||
| `runtimeStatus` | The status of the workflow instance. Values include: `RUNNING`, `TERMINATED`, `PAUSED`   | ||||
| `runtimeStatus` | The status of the workflow instance. Values include: `"RUNNING"`, `"COMPLETED"`, `"CONTINUED_AS_NEW"`, `"FAILED"`, `"CANCELED"`, `"TERMINATED"`, `"PENDING"`, `"SUSPENDED"`   | ||||
| 
 | ||||
| ## Component format | ||||
| 
 | ||||
|  |  | |||
|  | @ -36,6 +36,8 @@ This table is meant to help users understand the equivalent options for running | |||
| | `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` | | ||||
| | `--mode` | not supported | | not supported | Runtime hosting option mode for Dapr, either `"standalone"` or `"kubernetes"` (default `"standalone"`). [Learn more.]({{< ref hosting >}}) | | ||||
| | `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` | | ||||
| | `--actors-service` | not supported | | not supported | Configuration for the service that offers actor placement information. The format is `<name>:<address>`. For example, setting this value to `placement:127.0.0.1:50057,127.0.0.1:50058` is an alternative to using the `--placement-host-address` flag. | | ||||
| | `--reminders-service` | not supported | | not supported | Configuration for the service that enables actor reminders. The format is `<name>[:<address>]`. Currently, the only supported value is `"default"` (which is also the default value), which uses the built-in reminders subsystem in the Dapr sidecar. | | ||||
| | `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default `7777`) | | ||||
| | `--app-protocol` | `--app-protocol` | `-P` | `dapr.io/app-protocol` | Configures the protocol Dapr uses to communicate with your app. Valid options are `http`, `grpc`, `https` (HTTP with TLS), `grpcs` (gRPC with TLS), `h2c` (HTTP/2 Cleartext). Note that Dapr does not validate TLS certificates presented by the app. Default is `http` | | ||||
| | `--enable-app-health-check` | `--enable-app-health-check` | | `dapr.io/enable-app-health-check` | Boolean that enables the [health checks]({{< ref "app-health.md#configuring-app-health-checks" >}}). Default is `false`.  | | ||||
|  |  | |||
|  | @ -162,6 +162,12 @@ dapr uninstall --all --network mynet | |||
| dapr init -k | ||||
| ``` | ||||
| 
 | ||||
| Using the `--dev` flag initializes Dapr in dev mode, which includes Zipkin and Redis. | ||||
| ```bash | ||||
| dapr init -k --dev | ||||
| ``` | ||||
| 
 | ||||
| 
 | ||||
| You can wait for the installation to complete its deployment with the `--wait` flag. | ||||
| The default timeout is 300s (5 min), but can be customized with the `--timeout` flag. | ||||
| 
 | ||||
|  |  | |||
|  | @ -40,6 +40,8 @@ spec: | |||
|     #    key: "mytoken" | ||||
|     #- name: securityTokenHeader | ||||
|     #  value: "Authorization: Bearer" # OPTIONAL <header name for the security token> | ||||
|     #- name: errorIfNot2XX | ||||
|     #  value: "false" # OPTIONAL | ||||
| ``` | ||||
| 
 | ||||
| ## Spec metadata fields | ||||
|  | @ -54,8 +56,8 @@ spec: | |||
| | `MTLSRenegotiation`  | N | Output | Type of mTLS renegotiation to be used | `RenegotiateOnceAsClient` | ||||
| | `securityToken`      | N | Output | The value of a token to be added to an HTTP request as a header. Used together with `securityTokenHeader` | | ||||
| | `securityTokenHeader` | N | Output | The name of the header for `securityToken` on an HTTP request | | ||||
| | `errorIfNot2XX`      | N | Output | If a binding error should be thrown when the response is not in the 2xx range. Defaults to `true` | | ||||
| 
 | ||||
| ### How to configure mTLS-related fields in metadata | ||||
| 
 | ||||
| The values for **MTLSRootCA**, **MTLSClientCert** and **MTLSClientKey** can be provided in three ways: | ||||
| 
 | ||||
|  |  | |||
|  | @ -276,6 +276,11 @@ The response body contains the following JSON: | |||
| [0.018574921,-0.00023652936,-0.0057790717,.... (1536 floats total for ada)] | ||||
| ``` | ||||
| 
 | ||||
| ## Learn more about the Azure OpenAI output binding | ||||
| 
 | ||||
| Watch [the following Community Call presentation](https://youtu.be/rTovKpG0rhY?si=g7hZTQSpSEXz4pV1&t=80) to learn more about the Azure OpenAI output binding. | ||||
| 
 | ||||
| <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/rTovKpG0rhY?si=XP1S-80SIg1ptJuG&start=80" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> | ||||
| 
 | ||||
| ## Related links | ||||
| 
 | ||||
|  |  | |||
|  | @ -3,12 +3,14 @@ type: docs | |||
| title: "Name resolution provider component specs" | ||||
| linkTitle: "Name resolution" | ||||
| weight: 8000 | ||||
| description: The supported name resolution providers that interface with Dapr service invocation | ||||
| description: The supported name resolution providers to enable Dapr service invocation | ||||
| no_list: true | ||||
| --- | ||||
| 
 | ||||
| The following components provide name resolution for the service invocation building block. | ||||
| 
 | ||||
| Name resolution components are configured via the [configuration]({{< ref configuration-overview.md >}}). | ||||
| 
 | ||||
| {{< partial "components/description.html" >}} | ||||
| 
 | ||||
| {{< partial "components/name-resolution.html" >}} | ||||
|  |  | |||
|  | @ -1,6 +1,6 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "Kubernetes DNS name resolution provider spec" | ||||
| title: "Kubernetes DNS" | ||||
| linkTitle: "Kubernetes DNS" | ||||
| description: Detailed information on the Kubernetes DNS name resolution component | ||||
| --- | ||||
|  |  | |||
|  | @ -1,6 +1,6 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "mDNS name resolution provider spec" | ||||
| title: "mDNS" | ||||
| linkTitle: "mDNS" | ||||
| description: Detailed information on the mDNS name resolution component | ||||
| --- | ||||
|  |  | |||
|  | @ -0,0 +1,54 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "SQLite" | ||||
| linkTitle: "SQLite" | ||||
| description: Detailed information on the SQLite name resolution component | ||||
| --- | ||||
| 
 | ||||
| As an alternative to mDNS, the SQLite name resolution component can be used for running Dapr in single-node environments and for local development scenarios. Dapr sidecars that are part of the cluster store their information in a SQLite database on the local machine. | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| 
 | ||||
| This component is optimized to be used in scenarios where all Dapr instances are running on the same physical machine, where the database is accessed through the same, locally-mounted disk.   | ||||
| Using the SQLite name resolver with a database file accessed over the network (including via SMB/NFS) can lead to issues such as data corruption, and is **not supported**. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Configuration format | ||||
| 
 | ||||
| Name resolution is configured via the [Dapr Configuration]({{< ref configuration-overview.md >}}). | ||||
| 
 | ||||
| Within the Configuration YAML, set the `spec.nameResolution.component` property to `"sqlite"`, then pass configuration options in the `spec.nameResolution.configuration` dictionary. | ||||
| 
 | ||||
| The following is a basic example of a Configuration resource: | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration | ||||
| metadata: | ||||
|   name: appconfig | ||||
| spec: | ||||
|   nameResolution: | ||||
|     component: "sqlite" | ||||
|     version: "v1" | ||||
|     configuration: | ||||
|       connectionString: "/home/user/.dapr/nr.db" | ||||
| ``` | ||||
| 
 | ||||
| ## Spec configuration fields | ||||
| 
 | ||||
| When using the SQLite name resolver component, the `spec.nameResolution.configuration` dictionary contains these options: | ||||
| 
 | ||||
| | Field        | Required | Type | Details  | Examples | | ||||
| |--------------|:--------:|-----:|:---------|----------| | ||||
| | `connectionString` | Y | `string` | The connection string for the SQLite database. Normally, this is the path to a file on disk, relative to the current working directory, or absolute. | `"nr.db"` (relative to the working directory), `"/home/user/.dapr/nr.db"` | | ||||
| | `updateInterval` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`) | Interval for active Dapr sidecars to update their status in the database, which is used as a healthcheck.<br>Smaller intervals reduce the likelihood of stale data being returned if an application goes offline, but increase the load on the database.<br>Must be at least 1s greater than `timeout`. Values with fractions of seconds are truncated (for example, `1500ms` becomes `1s`). Default: `5s` | `"2s"` | | ||||
| | `timeout` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`).<br>Must be at least 1s. | Timeout for operations on the database. Integers are interpreted as a number of seconds. Defaults to `1s` | `"2s"`, `2` | | ||||
| | `tableName` | N | `string` | Name of the table where the data is stored. If the table does not exist, the table is created by Dapr.  Defaults to `hosts`. | `"hosts"` | | ||||
| | `metadataTableName` | N | `string` | Name of the table used by Dapr to store metadata for the component. If the table does not exist, the table is created by Dapr. Defaults to `metadata`. | `"metadata"` | | ||||
| | `cleanupInterval` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`) | Interval to remove stale records from the database. Default: `1h` (1 hour) | `"10m"` | | ||||
| | `busyTimeout` | N | [Go duration](https://pkg.go.dev/time#ParseDuration) (as a `string`) | Interval to wait in case the SQLite database is currently busy serving another request, before returning a "database busy" error. This is an advanced setting.</br>`busyTimeout` controls how locking works in SQLite. With SQLite, writes are exclusive, so every time any app is writing the database is locked. If another app tries to write, it waits up to `busyTimeout` before returning the "database busy" error. However, the `timeout` setting controls the timeout for the entire operation. For example, if the query "hangs" after the database has acquired the lock (so after the busy timeout is cleared), then `timeout` comes into effect. Default: `800ms` (800 milliseconds) | `"100ms"` | | ||||
| | `disableWAL` | N | `bool` | If set to true, disables Write-Ahead Logging for journaling of the SQLite database. This is for advanced scenarios only | `true`, `false` | | ||||
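| 
 | ||||
| As an example, a Configuration that tunes the health-check cadence, using illustrative values (note that `updateInterval` must be at least 1s greater than `timeout`): | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Configuration | ||||
| metadata: | ||||
|   name: appconfig | ||||
| spec: | ||||
|   nameResolution: | ||||
|     component: "sqlite" | ||||
|     version: "v1" | ||||
|     configuration: | ||||
|       connectionString: "/home/user/.dapr/nr.db" | ||||
|       updateInterval: "10s" | ||||
|       timeout: "2s" | ||||
| ``` | ||||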
| 
 | ||||
| ## Related links | ||||
| 
 | ||||
| - [Service invocation building block]({{< ref service-invocation >}}) | ||||
|  | @ -1,6 +1,6 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "HashiCorp Consul name resolution provider spec" | ||||
| title: "HashiCorp Consul" | ||||
| linkTitle: "HashiCorp Consul" | ||||
| description: Detailed information on the HashiCorp Consul name resolution component | ||||
| --- | ||||
|  |  | |||
|  | @ -73,7 +73,7 @@ spec: | |||
| | consumerID       | N | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime sets it to the Dapr application ID (`appID`) value. If a value for `consumerGroup` is provided, any value for `consumerID` is ignored - a combination of the consumer group and a random unique identifier will be set for the `consumerID` instead. | `"channel1"` | ||||
| | clientID            | N | A user-provided string sent with every request to the Kafka brokers for logging, debugging, and auditing purposes. Defaults to `"namespace.appID"` for Kubernetes mode or `"appID"` for Self-Hosted mode. | `"my-namespace.my-dapr-app"`, `"my-dapr-app"` | ||||
| | authRequired        | N | *Deprecated* Enable [SASL](https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer) authentication with the Kafka brokers. | `"true"`, `"false"` | ||||
| | authType            | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, or `oidc` | `"password"`, `"none"` | ||||
| | authType            | Y | Configure or disable authentication. Supported values: `none`, `password`, `mtls`, `oidc`, or `awsiam` | `"password"`, `"none"` | ||||
| | saslUsername        | N | The SASL username used for authentication. Only required if `authType` is set to `"password"`. | `"adminuser"` | ||||
| | saslPassword        | N | The SASL password used for authentication. Can be `secretKeyRef` to use a [secret reference]({{< ref component-secrets.md >}}). Only required if `authType` is set to `"password"`. | `""`, `"KeFg23!"` | ||||
| | saslMechanism      | N | The SASL Authentication Mechanism you wish to use. Only required if `authType` is set to `"password"`. Defaults to `PLAINTEXT` | `"SHA-512", "SHA-256", "PLAINTEXT"` | ||||
|  | @ -92,6 +92,12 @@ spec: | |||
| | oidcClientSecret | N | The OAuth2 client secret that has been provisioned in the identity provider: Required when `authType` is set to `oidc` | `"KeFg23!"` | | ||||
| | oidcScopes | N | Comma-delimited list of OAuth2/OIDC scopes to request with the access token. Recommended when `authType` is set to `oidc`. Defaults to `"openid"` | `"openid,kafka-prod"` | | ||||
| | oidcExtensions | N | Input/Output | String containing a JSON-encoded dictionary of OAuth2/OIDC extensions to request with the access token | `{"cluster":"kafka","poolid":"kafkapool"}` | | ||||
| | awsRegion | N | The AWS region where the Kafka cluster is deployed. Required when `authType` is set to `awsiam` | `us-west-1` | | ||||
| | awsAccessKey | N  | AWS access key associated with an IAM account. | `"accessKey"` | ||||
| | awsSecretKey | N  | The secret key associated with the access key. | `"secretKey"` | ||||
| | awsSessionToken | N  | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"sessionToken"` | ||||
| | awsIamRoleArn | N  | IAM role that has access to AWS Managed Streaming for Apache Kafka (MSK). This is another option to authenticate with MSK aside from the AWS Credentials. | `"arn:aws:iam::123456789:role/mskRole"` | ||||
| | awsStsSessionName | N  | Represents the session name for assuming a role. | `"MSKSASLDefaultSession"` | ||||
| | schemaRegistryURL | N | Required when using Schema Registry Avro serialization/deserialization. The Schema Registry URL. | `http://localhost:8081` | | ||||
| | schemaRegistryAPIKey | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Key. | `XYAXXAZ` | | ||||
| | schemaRegistryAPISecret | N | When using Schema Registry Avro serialization/deserialization. The Schema Registry credentials API Secret. | `ABCDEFGMEADFF` | | ||||
|  | @ -107,7 +113,17 @@ The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Ka | |||
| 
 | ||||
| Kafka supports a variety of authentication schemes and Dapr supports several: SASL password, mTLS, OIDC/OAuth2. With the added authentication methods, the `authRequired` field has | ||||
| been deprecated from the v1.6 release and instead the `authType` field should be used. If `authRequired` is set to `true`, Dapr will attempt to configure `authType` correctly | ||||
| based on the value of `saslPassword`. There are four valid values for `authType`: `none`, `password`, `certificate`, `mtls`, and `oidc`. Note this is authentication only; authorization is still configured within Kafka. | ||||
| based on the value of `saslPassword`. The valid values for `authType` are:  | ||||
| - `none` | ||||
| - `password` | ||||
| - `certificate` | ||||
| - `mtls` | ||||
| - `oidc`  | ||||
| - `awsiam` | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| `authType` is _authentication_ only. _Authorization_ is still configured within Kafka, except for `awsiam`, which can also drive authorization decisions configured in AWS IAM. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| #### None | ||||
| 
 | ||||
|  | @ -276,6 +292,44 @@ spec: | |||
|     value: 0.10.2.0 | ||||
| ``` | ||||
| 
 | ||||
| #### AWS IAM | ||||
| 
 | ||||
| Authenticating with AWS IAM is supported with MSK. Setting `authType` to `awsiam` uses the AWS SDK to generate auth tokens to authenticate. | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| The only required metadata field is `awsRegion`. If no `awsAccessKey` and `awsSecretKey` are provided, you can use AWS IAM roles for service accounts to have password-less authentication to your Kafka cluster. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: kafka-pubsub-awsiam | ||||
| spec: | ||||
|   type: pubsub.kafka | ||||
|   version: v1 | ||||
|   metadata: | ||||
|   - name: brokers # Required. Kafka broker connection setting | ||||
|     value: "dapr-kafka.myapp.svc.cluster.local:9092" | ||||
|   - name: consumerGroup # Optional. Used for input bindings. | ||||
|     value: "group1" | ||||
|   - name: clientID # Optional. Used as client tracing ID by Kafka brokers. | ||||
|     value: "my-dapr-app-id" | ||||
|   - name: authType # Required. | ||||
|     value: "awsiam" | ||||
|   - name: awsRegion # Required. | ||||
|     value: "us-west-1" | ||||
|   - name: awsAccessKey # Optional. | ||||
|     value: <AWS_ACCESS_KEY> | ||||
|   - name: awsSecretKey # Optional. | ||||
|     value: <AWS_SECRET_KEY> | ||||
|   - name: awsSessionToken # Optional. | ||||
|     value: <AWS_SESSION_KEY> | ||||
|   - name: awsIamRoleArn # Optional. | ||||
|     value: "arn:aws:iam::123456789:role/mskRole" | ||||
|   - name: awsStsSessionName # Optional. | ||||
|     value: "MSKSASLDefaultSession" | ||||
| ``` | ||||
| 
 | ||||
| ### Communication using TLS | ||||
| 
 | ||||
| By default TLS is enabled to secure the transport layer to Kafka. To disable TLS, set `disableTls` to `true`. When TLS is enabled, you can | ||||
|  |  | |||
|  | @ -1,133 +0,0 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "NATS Streaming" | ||||
| linkTitle: "NATS Streaming" | ||||
| description: "Detailed documentation on the NATS Streaming pubsub component" | ||||
| aliases: | ||||
|   - "/operations/components/setup-pubsub/supported-pubsub/setup-nats-streaming/" | ||||
| --- | ||||
| 
 | ||||
| ## ⚠️ Deprecation notice | ||||
| 
 | ||||
| {{% alert title="Warning" color="warning" %}} | ||||
| This component is **deprecated** because the [NATS Streaming Server](https://nats-io.gitbook.io/legacy-nats-docs/nats-streaming-server-aka-stan/developing-with-stan) was deprecated in June 2023 and no longer receives updates. Users are encouraged to switch to using [JetStream]({{< ref setup-jetstream >}}) as an alternative. | ||||
| 
 | ||||
| This component will be **removed in the Dapr v1.13 release**. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Component format | ||||
| 
 | ||||
| To set up NATS Streaming pub/sub, create a component of type `pubsub.natsstreaming`. See the [pub/sub broker component file]({{< ref setup-pubsub.md >}}) to learn how ConsumerID is automatically generated. Read the [How-to: Publish and Subscribe guide]({{< ref "howto-publish-subscribe.md#step-1-setup-the-pubsub-component" >}}) on how to create and apply a pub/sub configuration. | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: natsstreaming-pubsub | ||||
| spec: | ||||
|   type: pubsub.natsstreaming | ||||
|   version: v1 | ||||
|   metadata: | ||||
|   - name: natsURL | ||||
|     value: "nats://localhost:4222" | ||||
|   - name: natsStreamingClusterID | ||||
|     value: "clusterId" | ||||
|   - name: concurrencyMode | ||||
|     value: parallel | ||||
|   - name: consumerID # Optional. If not supplied, runtime will create one. | ||||
|     value: "channel1"  | ||||
|     # below are subscription configuration. | ||||
|   - name: subscriptionType | ||||
|     value: <REPLACE-WITH-SUBSCRIPTION-TYPE> # Required. Allowed values: topic, queue. | ||||
|   - name: ackWaitTime | ||||
|     value: "" # Optional. | ||||
|   - name: maxInFlight | ||||
|     value: "" # Optional. | ||||
|   - name: durableSubscriptionName | ||||
|     value: "" # Optional. | ||||
|   # following subscription options - only one can be used | ||||
|   - name: deliverNew | ||||
|     value: <bool> | ||||
|   - name: startAtSequence | ||||
|     value: 1 | ||||
|   - name: startWithLastReceived | ||||
|     value: false | ||||
|   - name: deliverAll | ||||
|     value: false | ||||
|   - name: startAtTimeDelta | ||||
|     value: "" | ||||
|   - name: startAtTime | ||||
|     value: "" | ||||
|   - name: startAtTimeFormat | ||||
|     value: "" | ||||
| ``` | ||||
| 
 | ||||
| {{% alert title="Warning" color="warning" %}} | ||||
| The above example uses secrets as plain strings. It is recommended to [use a secret store for the secrets]({{< ref component-secrets.md >}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Spec metadata fields | ||||
| 
 | ||||
| | Field              | Required | Details | Example | | ||||
| |--------------------|:--------:|---------|---------| | ||||
| | natsURL            | Y  | NATS server address URL   | "`nats://localhost:4222`"| | ||||
| | natsStreamingClusterID  | Y  | NATS cluster ID   |`"clusterId"`| | ||||
| | subscriptionType   | Y | Subscription type. Allowed values `"topic"`, `"queue"` | `"topic"` | | ||||
| | consumerID        |    N     | Consumer ID (consumer tag) organizes one or more consumers into a group. Consumers with the same consumer ID work as one virtual consumer; for example, a message is processed only once by one of the consumers in the group. If the `consumerID` is not provided, the Dapr runtime set it to the Dapr application ID (`appID`) value. | `"channel1"` | ||||
| | ackWaitTime        | N | See [here](https://docs.nats.io/developing-with-nats-streaming/acks#acknowledgements) | `"300ms"`| | ||||
| | maxInFlight        | N | See [here](https://docs.nats.io/developing-with-nats-streaming/acks#acknowledgements) | `"25"` | | ||||
| | durableSubscriptionName | N | [Durable subscriptions](https://docs.nats.io/developing-with-nats-streaming/durables) identification name. | `"my-durable"`| | ||||
| | deliverNew         | N | Subscription Options. Only one can be used. Deliver new messages only  | `"true"`, `"false"` | | ||||
| | startAtSequence    | N | Subscription Options. Only one can be used. Sets the desired start sequence position and state  | `"100000"`, `"230420"` | | ||||
| | startWithLastReceived | N | Subscription Options. Only one can be used. Sets the start position to last received. | `"true"`, `"false"` | | ||||
| | deliverAll         | N | Subscription Options. Only one can be used. Deliver all available messages  | `"true"`, `"false"` | | ||||
| | startAtTimeDelta   | N | Subscription Options. Only one can be used. Sets the desired start time position and state using the delta  | `"10m"`, `"23s"` | | ||||
| | startAtTime        | N | Subscription Options. Only one can be used. Sets the desired start time position and state  | `"Feb 3, 2013 at 7:54pm (PST)"` | | ||||
| | startAtTimeFormat   | N | Must be used with `startAtTime`. Sets the format for the time  | `"Jan 2, 2006 at 3:04pm (MST)"` | | ||||
| | concurrencyMode | N  | Call the subscriber sequentially (“single” message at a time), or concurrently (in “parallel”). Default: `"parallel"` | `"single"`, `"parallel"` | ||||
| 
 | ||||
| ## Create a NATS server | ||||
| 
 | ||||
| {{< tabs "Self-Hosted" "Kubernetes">}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| Run a NATS server locally using Docker: | ||||
| 
 | ||||
| ```bash | ||||
| docker run -d --name nats-streaming -p 4222:4222 -p 8222:8222 nats-streaming | ||||
| ``` | ||||
| 
 | ||||
| Interact with the server using the client port: `localhost:4222`. | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| Install NATS on Kubernetes by using the [kubectl](https://docs.nats.io/running-a-nats-service/introduction/running/nats-kubernetes/): | ||||
| 
 | ||||
| ```bash | ||||
| # Single server NATS | ||||
| 
 | ||||
| kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-server/single-server-nats.yml | ||||
| 
 | ||||
| kubectl apply -f https://raw.githubusercontent.com/nats-io/k8s/master/nats-streaming-server/single-server-stan.yml | ||||
| ``` | ||||
| 
 | ||||
| This installs a single NATS-Streaming and NATS into the `default` namespace. To interact with NATS, find the service with: | ||||
| 
 | ||||
| ```bash | ||||
| kubectl get svc stan | ||||
| ``` | ||||
| 
 | ||||
| For example, if installing using the example above, the NATS Streaming address would be: | ||||
| 
 | ||||
| `<YOUR-HOST>:4222` | ||||
| 
 | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
| 
 | ||||
| ## Related links | ||||
| 
 | ||||
| - [Basic schema for a Dapr component]({{< ref component-schema >}}). | ||||
| - Read [this guide]({{< ref "howto-publish-subscribe.md#step-2-publish-a-topic" >}}) for instructions on configuring pub/sub components. | ||||
| - [Pub/Sub building block]({{< ref pubsub >}}). | ||||
| - [NATS Streaming Deprecation Notice](https://github.com/nats-io/nats-streaming-server/#warning--deprecation-notice-warning). | ||||
|  | @ -89,6 +89,7 @@ The above example uses secrets as plain strings. It is recommended to use a [sec | |||
| | processMode | N | Enable processing multiple messages at once. Default: `"async"` | `"async"`, `"sync"`| | ||||
| | subscribeType | N | Pulsar supports four kinds of [subscription types](https://pulsar.apache.org/docs/3.0.x/concepts-messaging/#subscription-types). Default: `"shared"` | `"shared"`, `"exclusive"`, `"failover"`, `"key_shared"`| | ||||
| | partitionKey | N | Sets the key of the message for routing policy. Default: `""` | | | ||||
| | `maxConcurrentHandlers` | N  | Defines the maximum number of concurrent message handlers. Default: `100` | `10` | ||||
| 
 | ||||
| ### Authenticate using Token | ||||
| 
 | ||||
|  |  | |||
|  | @ -43,8 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | |||
| | redisUsername      | N        | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"` | ||||
| | consumerID         | N        | The consumer group ID   | `"myGroup"` | ||||
| | enableTLS          | N        | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` | ||||
| | redeliverInterval  | N        | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"` | ||||
| | processingTimeout  | N        | The amount time a message must be pending before attempting to redeliver it. Defaults to `"15s"`. `"0"` disables redelivery. | `"30s"` | ||||
| | redeliverInterval  | N        | The interval between checking for pending messages to redeliver. Can either be a Go duration string (for example "ms", "s", "m") or a number of milliseconds. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"` | ||||
| | processingTimeout  | N        | The amount of time that a message must be pending before attempting to redeliver it. Can either be a Go duration string (for example "ms", "s", "m") or a number of milliseconds. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"` | ||||
| | queueDepth         | N        | The size of the message queue for processing. Defaults to `"100"`. | `"1000"` | ||||
| | concurrency        | N        | The number of concurrent workers that are processing messages. Defaults to `"10"`. | `"15"` | ||||
| | redisType        | N        | The type of redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for redis cluster mode. Defaults to `"node"`. | `"cluster"` | ||||
|  |  | |||
|  | @ -18,7 +18,9 @@ metadata: | |||
|   name: <NAME> | ||||
| spec: | ||||
|   type: state.azure.blobstorage | ||||
|   version: v1 | ||||
|   # Supports v1 and v2. Users should always use v2 by default. There is no | ||||
|   # migration path from v1 to v2, see `versioning` below. | ||||
|   version: v2 | ||||
|   metadata: | ||||
|   - name: accountName | ||||
|     value: "[your_account_name]" | ||||
|  | @ -32,21 +34,32 @@ spec: | |||
| The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Versioning | ||||
| 
 | ||||
| Dapr has 2 versions of the Azure Blob Storage state store component: `v1` and `v2`. It is recommended to use `v2` for all new applications. `v1` is considered legacy and is preserved for compatibility with existing applications only. | ||||
| 
 | ||||
| In `v1`, a longstanding implementation issue was identified, where the [key prefix]({{< ref howto-share-state.md >}}) was incorrectly stripped by the component, essentially behaving as if `keyPrefix` was always set to `none`.   | ||||
| The updated `v2` of the component fixes the incorrect behavior and makes the state store correctly respect the `keyPrefix` property. | ||||
| 
 | ||||
| While `v1` and `v2` have the same metadata fields, they are otherwise incompatible, with no automatic data migration path from `v1` to `v2`.
| 
 | ||||
| If you are already using `v1` of this component, you should continue to use `v1`; adopt `v2` only when creating a new state store.
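| 
| For example, with the default `keyPrefix` of `appid`, an application with ID `myapp` saving the key `order-1` stores it as `myapp||order-1` in `v2`, while `v1` stored it as `order-1` regardless of the setting. A sketch of pinning the prefix strategy explicitly (metadata fragment only; `appid` is also the default):
| 
| ```yaml
|   metadata:
|   # Honored by v2; v1 behaved as if this were always "none"
|   - name: keyPrefix
|     value: "appid"
| ```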
| 
 | ||||
| ## Spec metadata fields | ||||
| 
 | ||||
| | Field | Required | Details | Example |
| |--------------------|:--------:|---------|---------|
| | `accountName` | Y | The storage account name | `"mystorageaccount"` |
| | `accountKey` | Y (unless using Microsoft Entra ID) | Primary or secondary storage key | `"key"` | | ||||
| | `containerName` | Y | The name of the container to be used for Dapr state. The container will be created for you if it doesn't exist | `"container"` | | ||||
| | `azureEnvironment` | N | Optional name for the Azure environment if using a different Azure cloud | `"AZUREPUBLICCLOUD"` (default value), `"AZURECHINACLOUD"`, `"AZUREUSGOVERNMENTCLOUD"` | | ||||
| | `endpoint` | N | Optional custom endpoint URL. This is useful when using the [Azurite emulator](https://github.com/Azure/azurite) or when using custom domains for Azure Storage (although this is not officially supported). The endpoint must be the full base URL, including the protocol (`http://` or `https://`), the IP or FQDN, and optional port. | `"http://127.0.0.1:10000"` |
| | `ContentType` | N | The blob's content type | `"text/plain"` | | ||||
| | `ContentMD5` | N | The blob's MD5 hash | `"vZGKbMRDAnMs4BIwlXaRvQ=="` | | ||||
| | `ContentEncoding` | N | The blob's content encoding | `"UTF-8"` | | ||||
| | `ContentLanguage` | N | The blob's content language | `"en-us"` | | ||||
| | `ContentDisposition` | N | The blob's content disposition. Conveys additional information about how to process the response payload | `"attachment"` | | ||||
| | `CacheControl`| N | The blob's cache control | `"no-cache"` | | ||||
| 
 | ||||
| ## Setup Azure Blob Storage | ||||
| 
 | ||||
|  |  | |||
|  | @ -47,20 +47,23 @@ spec: | |||
| The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ### Actor state store and transactions support | ||||
| 
 | ||||
| When used as an actor state store, or to leverage transactions, MongoDB must be running in a [Replica Set](https://www.mongodb.com/docs/manual/replication/).
| 
 | ||||
| If you wish to use MongoDB as an actor store, add this metadata option to your Component YAML: | ||||
| 
 | ||||
| ```yaml | ||||
|   - name: actorStateStore | ||||
|     value: "true" | ||||
| ``` | ||||
| 
 | ||||
| 
 | ||||
| ## Spec metadata fields | ||||
| 
 | ||||
| | Field              | Required | Details | Example | | ||||
| |--------------------|:--------:|---------|---------| | ||||
| | server             | Y<sup>1</sup> | The server to connect to, when using DNS SRV record | `"server.example.com"` | ||||
| | host               | Y<sup>1</sup> | The host to connect to | `"mongo-mongodb.default.svc.cluster.local:27017"` | ||||
| | username           | N        | The username of the user to connect with (applicable in conjunction with `host`) | `"admin"` | ||||
| | password           | N        | The password of the user (applicable in conjunction with `host`) | `"password"` | ||||
| | databaseName       | N        | The name of the database to use. Defaults to `"daprStore"` | `"daprStore"` | ||||
|  | @ -68,46 +71,36 @@ If you wish to use MongoDB as an actor store, append the following to the yaml. | |||
| | writeConcern       | N        | The write concern to use | `"majority"` | ||||
| | readConcern        | N        | The read concern to use  | `"majority"`, `"local"`,`"available"`, `"linearizable"`, `"snapshot"` | ||||
| | operationTimeout   | N        | The timeout for the operation. Defaults to `"5s"` | `"5s"` | ||||
| | params             | N<sup>2</sup> | Additional parameters to use | `"?authSource=daprStore&ssl=true"` | ||||
| 
 | ||||
| > <sup>[1]</sup> The `server` and `host` fields are mutually exclusive. If neither or both are set, Dapr returns an error. | ||||
| 
 | ||||
| > <sup>[2]</sup> The `params` field accepts a query string that specifies connection specific options as `<name>=<value>` pairs, separated by `&` and prefixed with `?`. For example, to use the "daprStore" database as the authentication database and to enable SSL/TLS for the connection, specify params as `?authSource=daprStore&ssl=true`. See [the mongodb manual](https://docs.mongodb.com/manual/reference/connection-string/#std-label-connections-connection-options) for the list of available options and their use cases.
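| 
| As a sketch, a metadata fragment combining `host` and `params` (values are illustrative):
| 
| ```yaml
|   metadata:
|   - name: host
|     value: "mongo-mongodb.default.svc.cluster.local:27017"
|   # Authenticate against the daprStore database and enable TLS
|   - name: params
|     value: "?authSource=daprStore&ssl=true"
| ```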
| 
 | ||||
| ## Setup MongoDB | ||||
| 
 | ||||
| {{< tabs "Self-Hosted" "Kubernetes" >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| You can run a single MongoDB instance locally using Docker: | ||||
| 
 | ||||
| ```sh | ||||
| docker run --name some-mongo -d mongo | ||||
| ``` | ||||
| 
 | ||||
| You can then interact with the server at `localhost:27017`. If you do not specify a `databaseName` value in your component definition, make sure to create a database named `daprStore`. | ||||
| 
 | ||||
| In order to use the MongoDB state store for transactions and as an actor state store, you need to run MongoDB as a Replica Set. Refer to [the official documentation](https://www.mongodb.com/compatibility/deploying-a-mongodb-cluster-with-docker) for how to create a 3-node Replica Set using Docker. | ||||
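| 
| For local development only, you can approximate this with a single-node Replica Set in Docker (a hedged sketch, assuming an image recent enough to ship `mongosh`; not a substitute for the 3-node setup in the linked guide):
| 
| ```sh
| # Start MongoDB with replica set support enabled
| docker run --name mongo-rs -d -p 27017:27017 mongo --replSet rs0
| # Initialize the single-node replica set
| docker exec mongo-rs mongosh --eval "rs.initiate()"
| ```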
| {{% /codetab %}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| You can conveniently install MongoDB on Kubernetes using the [Helm chart packaged by Bitnami](https://github.com/bitnami/charts/tree/main/bitnami/mongodb/). Refer to the documentation for the Helm chart for deploying MongoDB, both as a standalone server, and with a Replica Set (required for using transactions and actors). | ||||
| This installs MongoDB into the `default` namespace. | ||||
| To interact with MongoDB, find the service with: `kubectl get svc mongo-mongodb`. | ||||
| 
 | ||||
| For example, if installing using the Helm defaults above, the MongoDB host address would be: | ||||
| `mongo-mongodb.default.svc.cluster.local:27017` | ||||
| 
 | ||||
| Follow the on-screen instructions to get the root password for MongoDB. | ||||
| The username is typically `admin`.
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}} | ||||
|  | @ -117,6 +110,7 @@ The username is `admin` by default. | |||
| This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate when the data should be considered "expired". | ||||
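| 
| For example, saving a record that expires after 120 seconds through the state API (the sidecar port and store name are placeholders):
| 
| ```sh
| curl -X POST http://localhost:3500/v1.0/state/mongo-store \
|   -H "Content-Type: application/json" \
|   -d '[{"key": "order-1", "value": "processing", "metadata": {"ttlInSeconds": "120"}}]'
| ```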
| 
 | ||||
| ## Related links | ||||
| 
 | ||||
| - [Basic schema for a Dapr component]({{< ref component-schema >}}) | ||||
| - Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components | ||||
| - [State management building block]({{< ref state-management >}}) | ||||
|  |  | |||
|  | @ -1,13 +1,23 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "PostgreSQL" | ||||
| linkTitle: "PostgreSQL" | ||||
| description: Detailed information on the PostgreSQL state store component | ||||
| title: "PostgreSQL v1" | ||||
| linkTitle: "PostgreSQL v1" | ||||
| description: Detailed information on the PostgreSQL v1 state store component | ||||
| aliases: | ||||
|   - "/operations/components/setup-state-store/supported-state-stores/setup-postgresql/" | ||||
|   - "/operations/components/setup-state-store/supported-state-stores/setup-postgres/" | ||||
|   - "/operations/components/setup-state-store/supported-state-stores/setup-postgresql-v1/" | ||||
|   - "/operations/components/setup-state-store/supported-state-stores/setup-postgres-v1/" | ||||
| --- | ||||
| 
 | ||||
| This component allows using PostgreSQL (Postgres) as state store for Dapr. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration. | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| Starting with Dapr 1.13, you can leverage the [PostgreSQL v2]({{< ref setup-postgresql-v2.md >}}) state store component, which contains some improvements to performance and reliability.   | ||||
| The v2 component is not compatible with v1, and data cannot be migrated between the two components. The v2 component does not offer support for state store query APIs. | ||||
| 
 | ||||
| There are no plans to deprecate the v1 component. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| This component allows using PostgreSQL (Postgres) as a state store for Dapr, using the "v1" component. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
|  | @ -21,8 +31,8 @@ spec: | |||
|     # Connection string | ||||
|     - name: connectionString | ||||
|       value: "<CONNECTION STRING>" | ||||
|     # Timeout for database operations, as a Go duration or number of seconds (optional) | ||||
|     #- name: timeout | ||||
|     #  value: 20 | ||||
|     # Name of the table where to store the state (optional) | ||||
|     #- name: tableName | ||||
|  | @ -31,8 +41,8 @@ spec: | |||
|     #- name: metadataTableName | ||||
|     #  value: "dapr_metadata" | ||||
|     # Cleanup interval, as a Go duration or number of seconds, to remove expired rows (optional)
|     #- name: cleanupInterval | ||||
|     #  value: "1h" | ||||
|     # Maximum number of connections pooled by this component (optional) | ||||
|     #- name: maxConns | ||||
|     #  value: 0 | ||||
|  | @ -59,7 +69,7 @@ The following metadata options are **required** to authenticate using a PostgreS | |||
| 
 | ||||
| | Field  | Required | Details | Example | | ||||
| |--------|:--------:|---------|---------| | ||||
| | `connectionString` | Y | The connection string for the PostgreSQL database. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string. | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"` | | ||||
| 
 | ||||
| ### Authenticate using Microsoft Entra ID | ||||
| 
 | ||||
|  | @ -77,10 +87,10 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | |||
| 
 | ||||
| | Field | Required | Details | Example | | ||||
| |--------------------|:--------:|---------|---------| | ||||
| | `tableName` | N | Name of the table where the data is stored. Defaults to `state`. Can optionally have the schema name as prefix, such as `public.state` | `"state"`, `"public.state"` | ||||
| | `metadataTableName` | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. Can optionally have the schema name as prefix, such as `public.dapr_metadata` | `"dapr_metadata"`, `"public.dapr_metadata"` | ||||
| | `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` |
| | `cleanupInterval` | N | Interval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: `1h` (1 hour). Setting this to values <=0 disables the periodic cleanup. | `"30m"`, `1800`, `-1` |
| | `maxConns` | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` | ||||
| | `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` | ||||
| | `queryExecMode` | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"`
|  | @ -100,8 +110,8 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | |||
| 
 | ||||
|      > This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of "postgres". | ||||
| 
 | ||||
| 1. Create a database for state data.   | ||||
|     You can either use the default "postgres" database, or create a new database for storing state data.
| 
 | ||||
|     To create a new database in PostgreSQL, run the following SQL command: | ||||
| 
 | ||||
|  | @ -121,10 +131,10 @@ This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) f | |||
| 
 | ||||
| Because PostgreSQL doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them. | ||||
| 
 | ||||
| You can set the deletion interval of expired records with the `cleanupInterval` metadata property, which defaults to 3600 seconds (that is, 1 hour). | ||||
| 
 | ||||
| - Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupInterval` to a smaller value; for example, `5m` (5 minutes). | ||||
| - If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting `cleanupInterval` to a value <= 0 (for example, `0` or `-1`) to disable the periodic cleanup and reduce the load on the database. | ||||
| 
 | ||||
| The column in the state table where the expiration date for records is stored, `expiredate`, **does not have an index by default**, so each periodic cleanup must perform a full-table scan. If you have a table with a very large number of records, and only some of them use a TTL, you may find it useful to create an index on that column. Assuming that your state table name is `state` (the default), you can use a query like the following sketch:
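| 
| ```sql
| -- Sketch: index the expiration column so cleanup scans avoid a full-table scan
| CREATE INDEX ON state (expiredate);
| ```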
| 
 | ||||
|  | @ -0,0 +1,165 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "PostgreSQL" | ||||
| linkTitle: "PostgreSQL" | ||||
| description: Detailed information on the PostgreSQL state store component | ||||
| aliases: | ||||
|   - "/operations/components/setup-state-store/supported-state-stores/setup-postgresql-v2/" | ||||
|   - "/operations/components/setup-state-store/supported-state-stores/setup-postgres-v2/" | ||||
| --- | ||||
| 
 | ||||
| {{% alert title="Note" color="primary" %}} | ||||
| This is the v2 of the PostgreSQL state store component, which contains some improvements to performance and reliability. New applications are encouraged to use v2. | ||||
| 
 | ||||
| The PostgreSQL v2 state store component is not compatible with the [v1 component]({{< ref setup-postgresql-v1.md >}}), and data cannot be migrated between the two components. The v2 component does not offer support for state store query APIs. | ||||
| 
 | ||||
| There are no plans to deprecate the v1 component. | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| This component allows using PostgreSQL (Postgres) as a state store for Dapr, using the "v2" component. See [this guide]({{< ref "howto-get-save-state.md#step-1-setup-a-state-store" >}}) on how to create and apply a state store configuration.
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: <NAME> | ||||
| spec: | ||||
|   type: state.postgresql | ||||
|   # Note: setting "version" to "v2" is required to use v2 of the component
|   version: v2 | ||||
|   metadata: | ||||
|     # Connection string | ||||
|     - name: connectionString | ||||
|       value: "<CONNECTION STRING>" | ||||
|     # Timeout for database operations, as a Go duration or number of seconds (optional) | ||||
|     #- name: timeout | ||||
|     #  value: 20 | ||||
|     # Prefix for the table where the data is stored (optional) | ||||
|     #- name: tablePrefix | ||||
|     #  value: "" | ||||
|     # Name of the table where to store metadata used by Dapr (optional) | ||||
|     #- name: metadataTableName | ||||
|     #  value: "dapr_metadata" | ||||
|     # Cleanup interval, as a Go duration or number of seconds, to remove expired rows (optional)
|     #- name: cleanupInterval | ||||
|     #  value: "1h" | ||||
|     # Maximum number of connections pooled by this component (optional) | ||||
|     #- name: maxConns | ||||
|     #  value: 0 | ||||
|     # Max idle time for connections before they're closed (optional) | ||||
|     #- name: connectionMaxIdleTime | ||||
|     #  value: 0 | ||||
|     # Controls the default mode for executing queries. (optional) | ||||
|     #- name: queryExecMode | ||||
|     #  value: "" | ||||
|     # Uncomment this if you wish to use PostgreSQL as a state store for actors (optional) | ||||
|     #- name: actorStateStore | ||||
|     #  value: "true" | ||||
| ``` | ||||
| 
 | ||||
| {{% alert title="Warning" color="warning" %}} | ||||
| The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described [here]({{< ref component-secrets.md >}}). | ||||
| {{% /alert %}} | ||||
| 
 | ||||
| ## Spec metadata fields | ||||
| 
 | ||||
| ### Authenticate using a connection string | ||||
| 
 | ||||
| The following metadata options are **required** to authenticate using a PostgreSQL connection string. | ||||
| 
 | ||||
| | Field  | Required | Details | Example | | ||||
| |--------|:--------:|---------|---------| | ||||
| | `connectionString` | Y | The connection string for the PostgreSQL database. See the PostgreSQL [documentation on database connections](https://www.postgresql.org/docs/current/libpq-connect.html) for information on how to define a connection string. | `"host=localhost user=postgres password=example port=5432 connect_timeout=10 database=my_db"` | | ||||
| 
 | ||||
| ### Authenticate using Microsoft Entra ID | ||||
| 
 | ||||
| Authenticating with Microsoft Entra ID is supported with Azure Database for PostgreSQL. All authentication methods supported by Dapr can be used, including client credentials ("service principal") and Managed Identity. | ||||
| 
 | ||||
| | Field  | Required | Details | Example | | ||||
| |--------|:--------:|---------|---------| | ||||
| | `useAzureAD` | Y | Must be set to `true` to enable the component to retrieve access tokens from Microsoft Entra ID. | `"true"` | | ||||
| | `connectionString` | Y | The connection string for the PostgreSQL database.<br>This must contain the user, which corresponds to the name of the user created inside PostgreSQL that maps to the Microsoft Entra ID identity. This is often the name of the corresponding principal (for example, the name of the Microsoft Entra ID application). This connection string should not contain any password.  | `"host=mydb.postgres.database.azure.com user=myapplication port=5432 database=my_db sslmode=require"` | | ||||
| | `azureTenantId` | N | ID of the Microsoft Entra ID tenant | `"cd4b2887-304c-…"` | | ||||
| | `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` | | ||||
| | `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` | | ||||
| 
 | ||||
| ### Other metadata options | ||||
| 
 | ||||
| | Field | Required | Details | Example | | ||||
| |--------------------|:--------:|---------|---------| | ||||
| | `tablePrefix` | N | Prefix for the table where the data is stored. Can optionally have the schema name as prefix, such as `public.prefix_` | `"prefix_"`, `"public.prefix_"` | | ||||
| | `metadataTableName` | N | Name of the table Dapr uses to store a few metadata properties. Defaults to `dapr_metadata`. Can optionally have the schema name as prefix, such as `public.dapr_metadata` | `"dapr_metadata"`, `"public.dapr_metadata"` | | ||||
| | `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` | | ||||
| | `cleanupInterval` | N | Interval, as a Go duration or number of seconds, to clean up rows with an expired TTL. Default: `1h` (1 hour). Setting this to values <=0 disables the periodic cleanup. | `"30m"`, `1800`, `-1` | | ||||
| | `maxConns` | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` | | ||||
| | `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` | | ||||
| | `queryExecMode` | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case, it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"` |
| | `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` | | ||||
| 
 | ||||
| ## Setup PostgreSQL | ||||
| 
 | ||||
| {{< tabs "Self-Hosted" >}} | ||||
| 
 | ||||
| {{% codetab %}} | ||||
| 
 | ||||
| 1. Run an instance of PostgreSQL. You can run a local instance of PostgreSQL in Docker with the following command: | ||||
| 
 | ||||
|      ```bash | ||||
|      docker run -p 5432:5432 -e POSTGRES_PASSWORD=example postgres | ||||
|      ``` | ||||
| 
 | ||||
|      > This example does not describe a production configuration because it sets the password in plain text and the user name is left as the PostgreSQL default of "postgres". | ||||
| 
 | ||||
| 2. Create a database for state data.   | ||||
|     You can either use the default "postgres" database, or create a new database for storing state data.
| 
 | ||||
|     To create a new database in PostgreSQL, run the following SQL command: | ||||
| 
 | ||||
|     ```sql | ||||
|     CREATE DATABASE my_dapr; | ||||
|     ``` | ||||
|    | ||||
| {{% /codetab %}} | ||||
| 
 | ||||
| {{< /tabs >}}
| 
 | ||||
| ## Advanced | ||||
| 
 | ||||
| ### Differences between v1 and v2 | ||||
| 
 | ||||
| The PostgreSQL state store v2 was introduced in Dapr 1.13. The [pre-existing v1]({{< ref setup-postgresql-v1.md >}}) remains available and is not deprecated. | ||||
| 
 | ||||
| In the v2 component, the table schema has been changed significantly, with the goal of increasing performance and reliability. Most notably, the value stored by Dapr is now of type _BYTEA_, which allows faster queries and, in some cases, is more space-efficient than the previously-used _JSONB_ column.   | ||||
| However, due to this change, the v2 component does not support the [Dapr state store query APIs]({{< ref howto-state-query-api.md >}}). | ||||
| 
 | ||||
| Also, in the v2 component, ETags are now random UUIDs, which ensures better compatibility with other PostgreSQL-compatible databases, such as CockroachDB. | ||||
| 
 | ||||
| Because of these changes, v1 and v2 components are not able to read or write data from the same table. At this stage, it's also impossible to migrate data between the two versions of the component. | ||||
| 
 | ||||
| ### Displaying the data in human-readable format | ||||
| 
 | ||||
| The PostgreSQL v2 component stores the state's value in the `value` column, which is of type _BYTEA_. Most PostgreSQL tools, including pgAdmin, consider the value as binary and do not display it in human-readable form by default. | ||||
| 
 | ||||
| If you want to inspect the value in the state store, and you know it's not binary (for example, JSON data), you can have the value displayed in human-readable form using a query like the following: | ||||
| 
 | ||||
| ```sql | ||||
| -- Replace "state" with the name of the state table in your environment | ||||
| SELECT *, convert_from(value, 'utf-8') FROM state; | ||||
| ``` | ||||
| 
 | ||||
| ### TTLs and cleanups | ||||
| 
 | ||||
| This state store supports [Time-To-Live (TTL)]({{< ref state-store-ttl.md >}}) for records stored with Dapr. When storing data using Dapr, you can set the `ttlInSeconds` metadata property to indicate after how many seconds the data should be considered "expired". | ||||
| 
 | ||||
| Because PostgreSQL doesn't have built-in support for TTLs, this is implemented in Dapr by adding a column in the state table indicating when the data is to be considered "expired". Records that are "expired" are not returned to the caller, even if they're still physically stored in the database. A background "garbage collector" periodically scans the state table for expired rows and deletes them. | ||||
| 
 | ||||
| You can set the deletion interval of expired records with the `cleanupInterval` metadata property, which defaults to 3600 seconds (that is, 1 hour). | ||||
| 
 | ||||
| - Longer intervals require less frequent scans for expired rows, but can require storing expired records for longer, potentially requiring more storage space. If you plan to store many records in your state table, with short TTLs, consider setting `cleanupInterval` to a smaller value; for example, `5m` (5 minutes). | ||||
| - If you do not plan to use TTLs with Dapr and the PostgreSQL state store, you should consider setting `cleanupInterval` to a value <= 0 (for example, `0` or `-1`) to disable the periodic cleanup and reduce the load on the database. | ||||
| 
 | ||||
| ## Related links | ||||
| 
 | ||||
| - [Basic schema for a Dapr component]({{< ref component-schema >}}) | ||||
| - Read [this guide]({{< ref "howto-get-save-state.md#step-2-save-and-retrieve-a-single-state" >}}) for instructions on configuring state store components | ||||
| - [State management building block]({{< ref state-management >}}) | ||||
|  | @ -30,18 +30,14 @@ spec: | |||
|     value: <PASSWORD> | ||||
|   - name: enableTLS | ||||
|     value: <bool> # Optional. Allowed: true, false. | ||||
|   - name: failover | ||||
|     value: <bool> # Optional. Allowed: true, false. | ||||
|   - name: sentinelMasterName | ||||
|     value: <string> # Optional | ||||
|   - name: maxRetries | ||||
|     value: # Optional | ||||
|   - name: maxRetryBackoff | ||||
|     value: # Optional | ||||
|   - name: redeliverInterval | ||||
|     value: # Optional | ||||
|   - name: processingTimeout | ||||
|  |  | |||
|  | @ -50,14 +50,14 @@ spec: | |||
| 
 | ||||
| | Field | Required | Details | Example | | ||||
| |--------------------|:--------:|---------|---------| | ||||
| | `connectionString` | Y | The connection string for the SQLite database. See below for more details. | `"path/to/data.db"`, `"file::memory:?cache=shared"` | | ||||
| | `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` | | ||||
| | `tableName` | N | Name of the table where the data is stored. Defaults to `state`. | `"state"` | | ||||
| | `metadataTableName` | N | Name of the table used by Dapr to store metadata for the component. Defaults to `metadata`. | `"metadata"` | | ||||
| | `cleanupInterval` | N | Interval, as a [Go duration](https://pkg.go.dev/time#ParseDuration), to clean up rows with an expired TTL. Setting this to values <=0 disables the periodic cleanup. Default: `0` (i.e. disabled) | `"2h"`, `"30m"`, `-1` | | ||||
| | `busyTimeout` | N | Interval, as a [Go duration](https://pkg.go.dev/time#ParseDuration), to wait in case the SQLite database is currently busy serving another request, before returning a "database busy" error. Default: `2s` | `"100ms"`, `"5s"` | | ||||
| | `disableWAL` | N | If set to true, disables Write-Ahead Logging for journaling of the SQLite database. You should set this to `false` if the database is stored on a network file system (for example, a folder mounted as a SMB or NFS share). This option is ignored for read-only or in-memory databases. | `"true"`, `"false"` | | ||||
| | `actorStateStore` | N | Consider this state store for actors. Defaults to `"false"` | `"true"`, `"false"` | | ||||
| 
 | ||||
| The **`connectionString`** parameter configures how to open the SQLite database. | ||||
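| 
| For instance, a hedged component sketch using a shared in-memory database (the component name is a placeholder):
| 
| ```yaml
| apiVersion: dapr.io/v1alpha1
| kind: Component
| metadata:
|   name: sqlite-store
| spec:
|   type: state.sqlite
|   version: v1
|   metadata:
|   # In-memory database: data is lost when the Dapr sidecar stops
|   - name: connectionString
|     value: "file::memory:?cache=shared"
| ```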
| 
 | ||||
|  |  | |||
|  | @ -0,0 +1,10 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "Workflow backend component specs" | ||||
| linkTitle: "Workflow backend" | ||||
| weight: 9000 | ||||
| description: The supported workflow backends that orchestrate workflows and save workflow state
| no_list: true | ||||
| --- | ||||
| 
 | ||||
| {{< partial "components/description.html" >}} | ||||
|  | @ -0,0 +1,24 @@ | |||
| --- | ||||
| type: docs | ||||
| title: "Actor workflow backend" | ||||
| linkTitle: "Actor workflow backend" | ||||
| description: Detailed information on the Actor workflow backend component | ||||
| --- | ||||
| 
 | ||||
| ## Component format | ||||
| 
 | ||||
| The Actor workflow backend is the default backend in Dapr. If no workflow backend is explicitly defined, the Actor backend will be used automatically. | ||||
| 
 | ||||
| You don't need to define any components to use the Actor workflow backend. It's ready to use out of the box.
| 
 | ||||
| However, if you wish to explicitly define the Actor workflow backend as a component, you can do so, as shown in the example below. | ||||
| 
 | ||||
| ```yaml | ||||
| apiVersion: dapr.io/v1alpha1 | ||||
| kind: Component | ||||
| metadata: | ||||
|   name: actorbackend | ||||
| spec: | ||||
|   type: workflowbackend.actor | ||||
|   version: v1 | ||||
| ``` | ||||
|  | @ -29,3 +29,4 @@ The following table lists the environment variables used by the Dapr runtime, CL | |||
| | DAPR_COMPONENTS_SOCKETS_EXTENSION | .NET and Java pluggable component SDKs | A per-SDK configuration that indicates the default file extension applied to socket files created by the SDKs. Not a Dapr-enforced behavior. | | ||||
| | DAPR_PLACEMENT_METADATA_ENABLED | Dapr placement | Enable an endpoint for the Placement service that exposes placement table information on actor usage. Set to `true` to enable in self-hosted mode. [Learn more about the Placement API]({{< ref placement_api.md >}}) | | ||||
| | DAPR_HOST_IP | Dapr sidecar | The host's chosen IP address. If not specified, Dapr loops over the network interfaces and selects the first non-loopback address it finds.|
| | DAPR_HEALTH_TIMEOUT | SDKs | Sets the timeout for the SDK's "wait for sidecar" availability check. Overrides the default timeout of 60 seconds. |
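| 
| For example, to give an SDK more time to wait for the sidecar (assuming the value is interpreted as seconds, matching the 60-second default):
| 
| ```sh
| export DAPR_HEALTH_TIMEOUT=120
| ```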
|  | @ -27,8 +27,17 @@ spec: | |||
|     stdout: true | ||||
|     otel: | ||||
|       endpointAddress: <REPLACE-WITH-ENDPOINT-ADDRESS> | ||||
|       isSecure: <TRUE-OR-FALSE> | ||||
|       protocol: <HTTP-OR-GRPC> | ||||
|   metrics: | ||||
|     enabled: <TRUE-OR-FALSE> | ||||
|     rules: | ||||
|       - name: <METRIC-NAME> | ||||
|         labels: | ||||
|           - name: <LABEL-NAME> | ||||
|             regex: {} | ||||
|     http: | ||||
|       increasedCardinality: <TRUE-OR-FALSE> | ||||
|   httpPipeline: # for incoming http calls | ||||
|     handlers: | ||||
|       - name: <HANDLER-NAME> | ||||
|  | @ -37,6 +46,11 @@ spec: | |||
|     handlers: | ||||
|       - name: <HANDLER-NAME> | ||||
|         type: <HANDLER-TYPE> | ||||
|   nameResolution: | ||||
|     component: <NAME-OF-NAME-RESOLUTION-COMPONENT> | ||||
|     version: <NAME-RESOLUTION-COMPONENT-VERSION> | ||||
|     configuration: | ||||
|      <NAME-RESOLUTION-COMPONENT-METADATA-CONFIGURATION> | ||||
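|     # For example (hypothetical values, using the mDNS resolver):
|     # component: "mdns"
|     # version: "v1"
|     # configuration: {}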
|   secrets: | ||||
|     scopes: | ||||
|       - storeName: <NAME-OF-SCOPED-STORE> | ||||
|  |  | |||
|  | @ -14,6 +14,14 @@ | |||
|   features: | ||||
|     input: true | ||||
|     output: true | ||||
| - component: Azure OpenAI | ||||
|   link: openai | ||||
|   state: Alpha | ||||
|   version: v1 | ||||
|   since: "1.11" | ||||
|   features: | ||||
|     input: true | ||||
|     output: true | ||||
| - component: Azure SignalR | ||||
|   link: signalr | ||||
|   state: Alpha | ||||
|  |  | |||
|  | @ -3,3 +3,8 @@ | |||
|   state: Alpha | ||||
|   version: v1 | ||||
|   since: "1.2" | ||||
| - component: SQLite | ||||
|   link: nr-sqlite | ||||
|   state: Alpha | ||||
|   version: v1 | ||||
|   since: "1.13" | ||||
|  |  | |||
|  | @ -46,14 +46,6 @@ | |||
|   features: | ||||
|     bulkPublish: false | ||||
|     bulkSubscribe: false | ||||
| - component: NATS Streaming | ||||
|   link: setup-nats-streaming | ||||
|   state: Deprecated | ||||
|   version: v1 | ||||
|   since: "1.11" | ||||
|   features: | ||||
|     bulkPublish: false | ||||
|     bulkSubscribe: false | ||||
| - component: RabbitMQ | ||||
|   link: setup-rabbitmq | ||||
|   state: Stable | ||||
|  |  | |||
|  | @ -1,8 +1,8 @@ | |||
| - component: Azure Blob Storage | ||||
|   link: setup-azure-blobstorage | ||||
|   state: Stable | ||||
|   version: v2 | ||||
|   since: "1.13" | ||||
|   features: | ||||
|     crud: true | ||||
|     transactions: false | ||||
|  |  | |||
|  | @ -141,8 +141,8 @@ | |||
|     etag: true | ||||
|     ttl: true | ||||
|     query: false | ||||
| - component: PostgreSQL v1 | ||||
|   link: setup-postgresql-v1 | ||||
|   state: Stable | ||||
|   version: v1 | ||||
|   since: "1.0" | ||||
|  | @ -152,6 +152,17 @@ | |||
|     etag: true | ||||
|     ttl: true | ||||
|     query: true | ||||
| - component: PostgreSQL v2 | ||||
|   link: setup-postgresql-v2 | ||||
|   state: Stable | ||||
|   version: v2 | ||||
|   since: "1.13" | ||||
|   features: | ||||
|     crud: true | ||||
|     transactions: true | ||||
|     etag: true | ||||
|     ttl: true | ||||
|     query: false | ||||
| - component: Redis | ||||
|   link: setup-redis | ||||
|   state: Stable | ||||
|  |  | |||
										
							|  | @ -1 +1 @@ | |||
| Subproject commit d023a43ba4fd4cddb7aa2c0962cf786f01f58c24 | ||||
| Subproject commit c07eb698ac5d1b152a60d76c64af4841ffa07397 | ||||
|  | @ -1 +1 @@ | |||
| Subproject commit a65eddaa4e9217ed5cdf436b3438d2ffd837ba55 | ||||
| Subproject commit 5ef7aa2234d4d4c07769ad31cde223ef11c4e33e | ||||
|  | @ -1 +1 @@ | |||
| Subproject commit a9a09ba2acc39bc7e54a5a7092e1c5820818e23c | ||||
| Subproject commit 2f5947392a33bc7911e6669601ddb9e8b59b58fe | ||||
|  | @ -1 +1 @@ | |||
| Subproject commit 5c2b40ac94b50f6a5bdb32008f6a47da69946d95 | ||||
| Subproject commit 4189a3d2ad6897406abd766f4ccbf2300c8f8852 | ||||
|  | @ -1 +1 @@ | |||
| Subproject commit ef732090e8e04629ca573d127c5ee187a505aba4 | ||||
| Subproject commit 0b7aafdab1d4fade424b1b6c9569329ad10bb516 | ||||