mirror of https://github.com/dapr/docs.git
updates per sync
Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>
parent 2197ca6c9f
commit 8c355a5583
@@ -105,7 +105,6 @@ Lock components are used as a distributed lock to provide mutually exclusive acc

A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.

- [List of supported workflow components]({{< ref supported-workflows >}})
- [Workflow implementations]()
- [Binding implementations](https://github.com/dapr/components-contrib/tree/master/workflows)
@@ -1,7 +1,7 @@
---
type: docs
-title: "Workflow management"
-linkTitle: "Workflow management"
+title: "Workflow"
+linkTitle: "Workflow"
weight: 100
-description: "Manage your Dapr workflows"
+description: "Orchestrate logic across various microservices"
---
@@ -1,174 +0,0 @@
---
type: docs
title: "How to: Use a workflow"
linkTitle: "How to: Use workflows"
weight: 2000
description: Integrate, manage, and expose workflows
---

Now that you've read about [the Workflow building block]({{< ref workflow-overview >}}), let's dive into how you can:

- Write the workflow into your application code
- Run the workflow using HTTP API calls

{{% alert title="Note" color="primary" %}}
If you haven't already, [try out the .NET SDK Workflow example](https://github.com/dapr/dotnet-sdk/tree/master/examples/Workflow) for a quick walk-through on how to use the workflow API.
{{% /alert %}}

When you run `dapr init`, Dapr creates a default workflow runtime, written in Go, that implements workflow instances as actors to promote placement and scalability.
## Register the workflow

To start using the workflow building block, write the workflow details directly into your application code. [In the following example](https://github.com/dapr/dotnet-sdk/blob/master/examples/Workflow/WorkflowWebApp/Program.cs), a basic ASP.NET order processing application using the .NET SDK includes:

- A NuGet package called `Dapr.Workflow` to receive the .NET SDK capabilities
- A builder with an extension method called `AddDaprWorkflow`
  - This allows you to register workflows and workflow activities (tasks that workflows can schedule)
- HTTP API calls
  - One for submitting a new order
  - One for checking the status of an existing order
```csharp
using Dapr.Workflow;
//...

// Dapr workflows are registered as part of the service configuration
builder.Services.AddDaprWorkflow(options =>
{
    // Note that it's also possible to register a lambda function as the workflow
    // or activity implementation instead of a class.
    options.RegisterWorkflow<OrderProcessingWorkflow>();

    // These are the activities that get invoked by the workflow(s).
    options.RegisterActivity<NotifyActivity>();
    options.RegisterActivity<ReserveInventoryActivity>();
    options.RegisterActivity<ProcessPaymentActivity>();
});

WebApplication app = builder.Build();

// POST starts new order workflow instance
app.MapPost("/orders", async (WorkflowEngineClient client, [FromBody] OrderPayload orderInfo) =>
{
    if (orderInfo?.Name == null)
    {
        return Results.BadRequest(new
        {
            message = "Order data was missing from the request",
            example = new OrderPayload("Paperclips", 99.95),
        });
    }

    //...
});

// GET fetches state for order workflow to report status
app.MapGet("/orders/{orderId}", async (string orderId, WorkflowEngineClient client) =>
{
    WorkflowState state = await client.GetWorkflowStateAsync(orderId, true);
    if (!state.Exists)
    {
        return Results.NotFound($"No order with ID = '{orderId}' was found.");
    }

    var httpResponsePayload = new
    {
        details = state.ReadInputAs<OrderPayload>(),
        status = state.RuntimeStatus.ToString(),
        result = state.ReadOutputAs<OrderResult>(),
    };

    //...
}).WithName("GetOrderInfoEndpoint");

app.Run();
```
## Register the workflow activities

Next, you'll define the workflow activities you'd like your workflow to perform. Each activity is a class definition that can take inputs and return outputs. Activities also participate in dependency injection; for example, an activity's constructor can receive an ASP.NET logger or bind to a Dapr client.

Continuing the ASP.NET order processing example, the `OrderProcessingWorkflow` class is derived from a base class called `Workflow` with input and output parameter types.

It also includes a `RunAsync` method that does the heavy lifting of the workflow and calls the workflow activities. The activities called in the example are:
- `NotifyActivity`: Receive notification of a new order
- `ReserveInventoryActivity`: Check for sufficient inventory to meet the new order
- `ProcessPaymentActivity`: Process payment for the order. Includes `NotifyActivity` to send notification of successful order
```csharp
class OrderProcessingWorkflow : Workflow<OrderPayload, OrderResult>
{
    public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
    {
        //...

        await context.CallActivityAsync(
            nameof(NotifyActivity),
            new Notification($"Received order {orderId} for {order.Name} at {order.TotalCost:c}"));

        //...

        InventoryResult result = await context.CallActivityAsync<InventoryResult>(
            nameof(ReserveInventoryActivity),
            new InventoryRequest(RequestId: orderId, order.Name, order.Quantity));
        //...
        await context.CallActivityAsync(
            nameof(ProcessPaymentActivity),
            new PaymentRequest(RequestId: orderId, order.TotalCost, "USD"));

        await context.CallActivityAsync(
            nameof(NotifyActivity),
            new Notification($"Order {orderId} processed successfully!"));

        // End the workflow with a success result
        return new OrderResult(Processed: true);
    }
}
```
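
The activity classes themselves aren't shown above; here's a sketch of what one can look like, assuming the `WorkflowActivity` base class from the `Dapr.Workflow` package. The `Notification` record is the same illustrative type used in the workflow, and the logger injection mirrors standard ASP.NET dependency injection.

```csharp
class NotifyActivity : WorkflowActivity<Notification, object?>
{
    readonly ILogger logger;

    // Activities support constructor dependency injection.
    public NotifyActivity(ILoggerFactory loggerFactory)
    {
        this.logger = loggerFactory.CreateLogger<NotifyActivity>();
    }

    public override Task<object?> RunAsync(WorkflowActivityContext context, Notification notification)
    {
        // The actual "work" of this activity: emit the notification.
        this.logger.LogInformation(notification.Message);
        return Task.FromResult<object?>(null);
    }
}
```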

{{% alert title="Important" color="primary" %}}
Because of how replay-based workflows execute, you'll write most logic that performs IO or interacts with other systems **inside activities**. The **workflow method**, meanwhile, is just for orchestrating those activities.

{{% /alert %}}

## Run the workflow

Now that you've set up the workflow and its activities in your application, it's time to run it alongside the Dapr sidecar. For a .NET application:

```bash
dapr run --app-id <your application app ID> dotnet run
```

Next, run your workflow using the following HTTP API methods. For more information, read the [workflow API reference]({{< ref workflow_api.md >}}).
### Start

To start your workflow, run:

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
```

### Terminate

To terminate your workflow, run:

```bash
POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
```

### Get metadata

To fetch workflow inputs and outputs, run:

```bash
GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
```

## Next steps

- Learn more about [authoring workflows for the built-in engine component]()
- Learn more about [supported workflow components]()
@@ -1,20 +1,20 @@
---
type: docs
-title: Workflow management building block overview
+title: Workflow building block overview
linkTitle: Overview
weight: 1000
-description: "Overview of the workflow management API"
+description: "Overview of the workflow API"
---

{{% alert title="Note" color="primary" %}}
The Workflow building block is currently in alpha state supporting .NET.
{{% /alert %}}

-The Dapr Workflow Management API strives to make orchestrating logic for messaging, state management, and failure handling across various microservices easier for developers. Prior to adding workflows to Dapr, you'd often need to build ad-hoc workflows behind-the-scenes in order to bridge that gap.
+The Dapr Workflow API strives to make orchestrating logic for messaging, state management, and failure handling across various microservices easier for developers. Prior to adding workflows to Dapr, you'd often need to build ad-hoc workflows behind-the-scenes in order to bridge that gap.

-The durable, resilient Dapr Workflow Management API:
+The durable, resilient Dapr Workflow API:

-- Provides a workflow management API for running workflows
+- Provides a workflow API for running workflows
- Offers a built-in workflow runtime to write Dapr workflows (of type `workflow.dapr`)
- Will integrate with future supported external workflow runtime components
@@ -49,6 +49,105 @@ You can also get information on the workflow (even if it has been terminated or

- The time that the run started
- The current running status, whether that be “Running”, “Terminated”, or “Completed”

## Workflow patterns

Dapr workflows simplify complex, stateful coordination requirements in event-driven applications. The following sections describe several application patterns that can benefit from Dapr workflows:

### Function chaining

In the function chaining pattern, multiple functions are called in succession on a single input, and the output of one function is passed as the input to the next function. With this pattern, you can create a sequence of operations that need to be performed on some data, such as filtering, transforming, and reducing.

TODO: DIAGRAM?

You can use Dapr workflows to implement the function chaining pattern concisely, as shown in the following example.
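Below is a minimal sketch of what such a chain can look like with the .NET authoring SDK; the activity names `Step1` through `Step3` and the string payloads are illustrative placeholders, not part of the Dapr API.

```csharp
using Dapr.Workflow;

class ChainingWorkflow : Workflow<string, string>
{
    public override async Task<string> RunAsync(WorkflowContext context, string input)
    {
        // Each step is an activity; the output of one becomes the input of the next.
        string result1 = await context.CallActivityAsync<string>("Step1", input);
        string result2 = await context.CallActivityAsync<string>("Step2", result1);
        return await context.CallActivityAsync<string>("Step3", result2);
    }
}
```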

### Fan out/fan in

In the fan out/fan in design pattern, you execute multiple tasks simultaneously across multiple workers and wait for them to recombine.

The fan out part of the pattern involves distributing the input data to multiple workers, each of which processes a portion of the data in parallel.

The fan in part of the pattern involves recombining the results from the workers into a single output.

TODO: DIAGRAM?

This pattern can be implemented in a variety of ways, such as using message queues, channels, or async/await. The Dapr workflows extension handles this pattern with relatively simple code:
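For instance, a sketch using the .NET SDK and `Task.WhenAll`; the `ProcessItem` activity name and the integer results are hypothetical stand-ins.

```csharp
using Dapr.Workflow;

class FanOutFanInWorkflow : Workflow<string[], int>
{
    public override async Task<int> RunAsync(WorkflowContext context, string[] workItems)
    {
        // Fan out: schedule one activity task per item; the tasks run in parallel.
        var parallelTasks = workItems
            .Select(item => context.CallActivityAsync<int>("ProcessItem", item))
            .ToList();

        // Fan in: wait for all tasks to finish, then aggregate the results.
        int[] results = await Task.WhenAll(parallelTasks);
        return results.Sum();
    }
}
```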

### Async HTTP APIs

In an asynchronous HTTP API pattern, you coordinate non-blocking requests and responses with external clients. This increases performance and scalability. One way to implement an asynchronous API is to use an event-driven architecture, where the server listens for incoming requests and triggers an event to handle each request as it comes in. Another way is to use asynchronous programming libraries or frameworks, which allow you to write non-blocking code using callbacks, promises, or async/await.

TODO: DIAGRAM?

Dapr workflows simplify, or even remove, the code you need to write to interact with long-running function executions.
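As a sketch, here is what the pattern can look like with ASP.NET minimal APIs, in the style of the earlier order-processing example; the `JobPayload` type, the `ProcessingWorkflow` name, and the `ScheduleNewWorkflowAsync` call are assumptions for illustration.

```csharp
// POST schedules the long-running work and returns 202 Accepted right away.
app.MapPost("/jobs", async (WorkflowEngineClient client, [FromBody] JobPayload job) =>
{
    string instanceId = Guid.NewGuid().ToString("N");
    await client.ScheduleNewWorkflowAsync("ProcessingWorkflow", instanceId, job);

    // The client polls the returned route for progress instead of blocking.
    return Results.AcceptedAtRoute("GetJobStatus", new { instanceId });
});

// GET reports the workflow's current status without holding the request open.
app.MapGet("/jobs/{instanceId}", async (string instanceId, WorkflowEngineClient client) =>
{
    WorkflowState state = await client.GetWorkflowStateAsync(instanceId, false);
    return state.Exists
        ? Results.Ok(new { status = state.RuntimeStatus.ToString() })
        : Results.NotFound();
}).WithName("GetJobStatus");
```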

### Monitor

The monitor pattern is a flexible, recurring process in a workflow that coordinates the actions of multiple threads by controlling access to shared resources. Typically:

1. The thread must first acquire the monitor.
1. Once the thread has acquired the monitor, it can access the shared resource.
1. The thread then releases the monitor.

This ensures that only one thread can access the shared resource at a time, preventing synchronization issues.

TODO: DIAGRAM?

In a few lines of code, you can create multiple monitors that observe arbitrary endpoints. The following code implements a basic monitor:
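A sketch of one way to express a recurring monitor with the .NET SDK; the `CheckEndpoint` activity is a hypothetical placeholder, and the durable timer and continue-as-new APIs are described in the authoring guide.

```csharp
using Dapr.Workflow;

class MonitorWorkflow : Workflow<string, object?>
{
    public override async Task<object?> RunAsync(WorkflowContext context, string endpoint)
    {
        // Delegate the actual (non-deterministic) health check to an activity.
        bool healthy = await context.CallActivityAsync<bool>("CheckEndpoint", endpoint);

        // Sleep with a durable timer; the workflow can unload while it waits.
        TimeSpan sleepTime = healthy ? TimeSpan.FromMinutes(5) : TimeSpan.FromMinutes(1);
        await context.CreateTimer(sleepTime);

        // Restart with fresh history instead of looping forever.
        context.ContinueAsNew(endpoint);
        return null;
    }
}
```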

## What is the authoring SDK?

The Dapr Workflow _authoring SDK_ is a language-specific SDK that you use to implement workflow logic. The workflow logic lives in your application and is orchestrated by the Dapr workflow engine running in the Dapr sidecar via a gRPC stream.

TODO: Diagram

The Dapr Workflow authoring SDK contains many types and functions that allow you to take full advantage of the features and capabilities offered by the Dapr workflow engine.

NOTE: The Dapr Workflow authoring SDK is only valid for use with the Dapr Workflow engine. It cannot be used with other external workflow services.

### Currently supported SDK languages

Currently, you can use the following SDK languages to author a workflow.

| Language stack | Package |
| - | - |
| .NET | [Dapr.Workflow](https://www.nuget.org/packages/Dapr.Workflow) |

### Declarative workflows support

Dapr Workflow doesn't currently provide any experience for declarative workflows. However, you can use the Dapr SDKs to build a new, portable workflow runtime that leverages the Dapr sidecar to load and execute declarative workflows as a layer on top of the "workflow-as-code" foundation. Such an approach could be used to support a variety of declarative workflows, including:

- The [AWS Step Functions workflow syntax](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html)
- The [Azure Logic Apps workflow syntax](https://learn.microsoft.com/azure/logic-apps/logic-apps-workflow-definition-language)
- The [Google Cloud Workflows syntax](https://cloud.google.com/workflows/docs/reference/syntax)
- The [Serverless workflow specification](https://serverlessworkflow.io/) (a CNCF sandbox project)

This topic is currently outside the scope of this article. However, it may be explored more in future iterations of Dapr Workflow.

## Try out the workflow API

### Quickstarts and tutorials

Want to put the Dapr Workflow API to the test? Walk through the following quickstart and tutorials to see Dapr Workflows in action:

| Quickstart/tutorial | Description |
| ------------------- | ----------- |
| [Workflow quickstart](link) | Description of the quickstart. |
| [Workflow tutorial](link) | Description of the tutorial. |

### Start using workflows directly in your app

Want to skip the quickstarts? Not a problem. You can try out the workflow building block directly in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the workflow API, starting with [how to author a workflow]({{< ref howto-author-workflow.md >}}).

## Watch the demo

Watch [this video for an overview on Dapr Workflows](https://youtu.be/s1p9MNl4VGo?t=131):

@@ -1,217 +0,0 @@
---
type: docs
title: "How to: Author a workflow"
linkTitle: "Authoring workflows"
weight: 200
description: "Learn how to develop and author workflows"
---

This article provides a high-level overview of how to author workflows that are executed by the Dapr Workflow engine. In particular, it lists the available SDKs and supported authoring patterns, and introduces the various concepts you'll need to understand when building Dapr workflows.

## Author workflows as code

Dapr workflow logic is implemented using general purpose programming languages, allowing you to:

- Use your preferred programming language (no need to learn a new DSL or YAML schema)
- Have access to the language’s standard libraries
- Build your own libraries and abstractions
- Use debuggers and examine local variables
- Write unit tests for your workflows, just like any other part of your application logic

The Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar simply drives the execution of the workflows, leaving all other details to the application layer.

## Declarative workflows support

Dapr Workflow doesn't currently provide any experience for declarative workflows. However, you can use the Dapr SDKs to build a new, portable workflow runtime that leverages the Dapr sidecar to load and execute declarative workflows as a layer on top of the "workflow-as-code" foundation. Such an approach could be used to support a variety of declarative workflows, including:

- The [AWS Step Functions workflow syntax](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html)
- The [Azure Logic Apps workflow syntax](https://learn.microsoft.com/azure/logic-apps/logic-apps-workflow-definition-language)
- The [Google Cloud Workflows syntax](https://cloud.google.com/workflows/docs/reference/syntax)
- The [Serverless workflow specification](https://serverlessworkflow.io/) (a CNCF sandbox project)

This topic is currently outside the scope of this article. However, it may be explored more in future iterations of Dapr Workflow.

## What is the authoring SDK?

The Dapr Workflow _authoring SDK_ is a language-specific SDK that you use to implement workflow logic. The workflow logic lives in your application and is orchestrated by the Dapr workflow engine running in the Dapr sidecar via a gRPC stream.

TODO: Diagram

The Dapr Workflow authoring SDK contains many types and functions that allow you to take full advantage of the features and capabilities offered by the Dapr workflow engine.

NOTE: The Dapr Workflow authoring SDK is only valid for use with the Dapr Workflow engine. It cannot be used with other external workflow services.

## Currently supported SDK languages

Currently, you can use the following SDK languages to author a workflow.

| Language stack | Package |
| - | - |
| .NET | [Dapr.Workflow](https://www.nuget.org/packages/Dapr.Workflow) |

## Workflow patterns

Dapr workflows simplify complex, stateful coordination requirements in event-driven applications. The following sections describe several application patterns that can benefit from Dapr workflows:

### Function chaining

In the function chaining pattern, multiple functions are called in succession on a single input, and the output of one function is passed as the input to the next function. With this pattern, you can create a sequence of operations that need to be performed on some data, such as filtering, transforming, and reducing.

TODO: DIAGRAM?

You can use Dapr workflows to implement the function chaining pattern concisely as shown in the following example.

TODO: CODE EXAMPLE

### Fan out/fan in

In the fan out/fan in design pattern, you execute multiple tasks simultaneously across multiple workers and wait for them to recombine.

The fan out part of the pattern involves distributing the input data to multiple workers, each of which processes a portion of the data in parallel.

The fan in part of the pattern involves recombining the results from the workers into a single output.

TODO: DIAGRAM?

This pattern can be implemented in a variety of ways, such as using message queues, channels, or async/await. The Dapr workflows extension handles this pattern with relatively simple code:

TODO: CODE EXAMPLE

### Async HTTP APIs

In an asynchronous HTTP API pattern, you coordinate non-blocking requests and responses with external clients. This increases performance and scalability. One way to implement an asynchronous API is to use an event-driven architecture, where the server listens for incoming requests and triggers an event to handle each request as it comes in. Another way is to use asynchronous programming libraries or frameworks, which allow you to write non-blocking code using callbacks, promises, or async/await.

TODO: DIAGRAM?

Dapr workflows simplify, or even remove, the code you need to write to interact with long-running function executions.

TODO: CODE EXAMPLE

### Monitor

The monitor pattern is a flexible, recurring process in a workflow that coordinates the actions of multiple threads by controlling access to shared resources. Typically:

1. The thread must first acquire the monitor.
1. Once the thread has acquired the monitor, it can access the shared resource.
1. The thread then releases the monitor.

This ensures that only one thread can access the shared resource at a time, preventing synchronization issues.

TODO: DIAGRAM?

In a few lines of code, you can create multiple monitors that observe arbitrary endpoints. The following code implements a basic monitor:

TODO: CODE EXAMPLE

## Features and concepts

The Dapr Workflow SDK exposes several core features and concepts which are common across all supported languages. This section provides a brief introduction to each of those features.

### Workflows

Dapr workflows are functions you write that define a series of steps or tasks to be executed in a particular order. The Dapr workflow engine takes care of coordinating and managing the execution of the steps, including managing failures and retries. If the app hosting your workflows is scaled out across multiple machines, the workflow engine may also load balance the execution of workflows and their tasks across multiple machines.

There are several different kinds of tasks that a workflow can schedule, including [activities]() for executing custom logic, [durable timers]() for putting the workflow to sleep for arbitrary lengths of time, [child workflows]() for breaking larger workflows into smaller pieces, and [external event waiters]() for blocking workflows until they receive external event signals. These tasks are described in more detail in their corresponding sections.

#### Workflow identity

Each workflow you define has a name, and individual executions of a workflow have a unique _instance ID_. Workflow instance IDs can be generated by your app code, which is useful when workflows correspond to business entities like documents or jobs, or they can be auto-generated UUIDs. A workflow's instance ID is useful for debugging and also for managing workflows using the [Workflow management APIs]().

Only one workflow instance with a given ID can exist at any given time. However, if a workflow instance completes or fails, its ID can be reused by a new workflow instance. Note, however, that the new workflow instance effectively replaces the old one in the configured state store.

#### Workflow replay

Dapr workflows maintain their execution state by using a technique known as [event sourcing](https://learn.microsoft.com/azure/architecture/patterns/event-sourcing). Instead of directly storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow authoring SDK, these history events are stored automatically whenever the workflow "awaits" the result of a scheduled task.

{{% alert title="Note" color="primary" %}}
For more information on how workflow state is managed, see the [workflow engine operational guide]({{<ref "operations/components/workflow-engine">}}).
{{% /alert %}}

When a workflow "awaits" a scheduled task, it may unload itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second execution of the workflow function is known as a _replay_. When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that it has already scheduled, instead of scheduling that task again, the workflow engine returns the result of the scheduled task to the workflow and continues execution until the next "await" point. This "replay" behavior continues until the workflow function completes or fails with an error.

Using this replay technique, a workflow is able to resume execution from any "await" point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state is also what makes Dapr workflows _durable_ and fault tolerant.

#### Workflow determinism and code constraints

The workflow replay behavior described previously requires that workflow function code be _deterministic_. A deterministic workflow function is one that takes the exact same actions when provided the exact same inputs.

You must follow these rules to ensure that your workflow code is deterministic:

1. **Workflow functions must not call non-deterministic APIs.** For example, APIs that generate random numbers, random UUIDs, or the current date are non-deterministic. To work around this limitation, use these APIs in activity functions or (preferred) use built-in equivalent APIs offered by the authoring SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner, as sketched after this list.

1. **Workflow functions must not interact _directly_ with external state.** External data includes any data that isn't stored in the workflow state. For example, workflows must not interact with global variables, environment variables, or the file system, or make network calls. Instead, workflows should interact with external state _indirectly_ using workflow inputs, activity tasks, and external event handling.

1. **Workflow functions must execute only on the workflow dispatch thread.** The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never schedule background threads or use APIs that schedule a callback function to run on another thread. Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.

While it's critically important to follow these determinism code constraints, you'll quickly become familiar with them and learn how to work with them effectively when writing workflow code.
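
To illustrate rule 1, here is a minimal sketch contrasting a non-deterministic API with its deterministic SDK counterpart; it assumes the `CurrentUtcDateTime` property that the .NET authoring SDK exposes on `WorkflowContext`.

```csharp
class ExpiryCheckWorkflow : Workflow<DateTime, bool>
{
    public override Task<bool> RunAsync(WorkflowContext context, DateTime expiry)
    {
        // DON'T: returns a different value on every replay, breaking determinism.
        // DateTime now = DateTime.UtcNow;

        // DO: recorded in the workflow history, so replays see the same value.
        DateTime now = context.CurrentUtcDateTime;

        return Task.FromResult(now < expiry);
    }
}
```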

#### Infinite loops and eternal workflows

As discussed in the [workflow replay]({{<ref "#workflow-replay">}}) section, workflows maintain an append-only, event-sourced history log of all of their operations. To avoid runaway resource usage, workflows should limit the number of operations they schedule. For example, a workflow should never use infinite loops in its implementation, nor should it schedule millions of tasks.

There are two techniques that can be used to write workflows that may need to schedule extreme numbers of tasks:

1. **Use the _continue-as-new_ API**: Each workflow authoring SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially ideal for implementing "eternal workflows", workflows that have no logical end state, like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small. See the sketch after this list.

1. **Use child workflows**: Each workflow authoring SDK also exposes an API for creating child workflows. A child workflow is just like any other workflow, except that it's scheduled by a parent workflow. Child workflows have their own history and also have the benefit of allowing you to distribute workflow function execution across multiple machines. If a workflow needs to schedule thousands of tasks or more, it's recommended that those tasks be distributed across child workflows so that no single workflow's history size grows too large.
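
A sketch of an eternal workflow that restarts itself with updated input via continue-as-new; the `PollForWork` activity and the cursor argument are illustrative placeholders.

```csharp
using Dapr.Workflow;

class CleanupAgentWorkflow : Workflow<int, object?>
{
    public override async Task<object?> RunAsync(WorkflowContext context, int cursor)
    {
        // Process one batch of work per "generation" of the workflow.
        int nextCursor = await context.CallActivityAsync<int>("PollForWork", cursor);

        // Wait before the next pass using a durable timer.
        await context.CreateTimer(TimeSpan.FromHours(1));

        // Truncate the history and start over with the new input,
        // rather than looping with a `while (true)` construct.
        context.ContinueAsNew(nextCursor);
        return null;
    }
}
```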

#### Updating workflow code

Because workflows are long-running and durable, updating workflow code must be done with extreme care. As discussed in the [Workflow determinism]({{<ref "#workflow-determinism-and-code-constraints">}}) section, workflow code must be deterministic so that the workflow engine can rebuild its state to exactly match its previous checkpoint. Updates to workflow code must preserve this determinism if there are any non-completed workflow instances in the system. Otherwise, updates to workflow code can result in runtime failures the next time those workflows execute.

A couple of examples of code updates that can break workflow determinism:

* **Changing workflow function signatures**: Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
* **Changing the number or order of workflow tasks**: Changing the number or order of workflow tasks causes a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.

To work around these constraints, instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates. Upstream code that creates workflows should also be updated to only create instances of the new workflows. Leaving the old code around ensures that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, the old workflow code can be safely deleted.

### Workflow activities

Workflow activities are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process. For example, you might create a workflow to process an order. The tasks may involve checking the inventory, charging the customer, and creating a shipment. Each task would be a separate activity. These activities may be executed serially, in parallel, or in some combination of both.

Unlike workflows, activities aren't restricted in the type of work you can do in them. Activities are frequently used to make network calls or run CPU intensive operations. An activity can also return data back to the workflow.

The Dapr workflow engine guarantees that each called activity is executed **at least once** as part of a workflow's execution. Because activities only guarantee at-least-once execution, it's recommended that activity logic be implemented as idempotent whenever possible.

### Child workflows

In addition to activities, workflows can schedule other workflows as _child workflows_. A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it.

Child workflows have many benefits:

* You can split large workflows into a series of smaller child workflows, making your code more maintainable.
* You can distribute workflow logic across multiple compute nodes concurrently, which is useful if your workflow logic otherwise needs to coordinate a lot of tasks.
* You can reduce memory usage and CPU overhead by keeping the parent workflow's history smaller.

The return value of a child workflow is its output. If a child workflow fails with an exception, that exception is surfaced to the parent workflow, just like it is when an activity task fails with an exception. Child workflows also support automatic retry policies.
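
A sketch of a parent workflow distributing work across child workflows; it assumes a `CallChildWorkflowAsync` method on `WorkflowContext` (the exact name may vary by SDK version), and the `WorkChunk`/`ChunkResult` types and `ChunkProcessingWorkflow` are hypothetical.

```csharp
class ParentWorkflow : Workflow<WorkChunk[], ChunkResult[]>
{
    public override async Task<ChunkResult[]> RunAsync(WorkflowContext context, WorkChunk[] chunks)
    {
        // Each chunk gets its own child workflow, with its own instance ID and history.
        var children = chunks
            .Select(chunk => context.CallChildWorkflowAsync<ChunkResult>(
                nameof(ChunkProcessingWorkflow), chunk))
            .ToList();

        // The parent's history records only one task per child, so it stays small.
        return await Task.WhenAll(children);
    }
}
```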

{{% alert title="Note" color="primary" %}}
Because child workflows are independent of their parents, terminating a parent workflow does not affect any child workflows. You must terminate each child workflow independently using its instance ID.
{{% /alert %}}

### Durable timers

Dapr workflows allow you to schedule reminder-like durable delays for any time range, including minutes, days, or even years. These _durable timers_ can be scheduled by workflows to implement simple delays or to set up ad-hoc timeouts on other async tasks. More specifically, a durable timer can be set to trigger on a particular date or after a specified duration. There are no limits to the maximum duration of durable timers, which are backed by internal actor reminders. For example, a workflow that tracks a 30-day free subscription to a service could be implemented using a durable timer that fires 30 days after the workflow is created. Workflows can be safely unloaded from memory while waiting for a durable timer to fire.
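
A sketch of the 30-day subscription example; the `SendTrialExpiredNotice` activity is a hypothetical placeholder, and `CreateTimer` follows the .NET authoring SDK.

```csharp
class TrialPeriodWorkflow : Workflow<string, object?>
{
    public override async Task<object?> RunAsync(WorkflowContext context, string customerId)
    {
        // The durable timer survives process restarts; the workflow can stay
        // unloaded from memory for the entire 30 days.
        await context.CreateTimer(TimeSpan.FromDays(30));

        await context.CallActivityAsync("SendTrialExpiredNotice", customerId);
        return null;
    }
}
```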

{{% alert title="Note" color="primary" %}}
Some APIs in the workflow authoring SDK may internally schedule durable timers to implement internal timeout behavior.
{{% /alert %}}

### External events

Sometimes workflows need to wait for events that are raised by external systems. For example, an approval workflow may require a human to explicitly approve an order request within an order processing workflow if the total cost exceeds some threshold. Another example is a trivia game orchestration workflow that pauses while waiting for all participants to submit their answers to trivia questions. These mid-execution inputs are referred to as _external events_.

External events have a _name_ and a _payload_ and are delivered to a single workflow instance. Workflows can create "_wait for external event_" tasks that subscribe to external events and _await_ those tasks to block execution until the event is received. The workflow can then read the payload of these events and make decisions about which next steps to take. External events can be processed serially or in parallel. External events can be raised by other workflows or by workflow management code.
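
A sketch of the order-approval example; the "ApprovalReceived" event name and `ApprovalResult` type are illustrative, and `WaitForExternalEventAsync` follows the .NET authoring SDK.

```csharp
class PurchaseOrderWorkflow : Workflow<OrderPayload, OrderResult>
{
    public override async Task<OrderResult> RunAsync(WorkflowContext context, OrderPayload order)
    {
        if (order.TotalCost > 5000)
        {
            // Block (durably) until an external system raises the named event.
            ApprovalResult approval =
                await context.WaitForExternalEventAsync<ApprovalResult>("ApprovalReceived");

            if (!approval.IsApproved)
            {
                return new OrderResult(Processed: false);
            }
        }

        //...
        return new OrderResult(Processed: true);
    }
}
```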

{{% alert title="Note" color="primary" %}}
The ability to raise external events to workflows is not included in the alpha version of Dapr's workflow management API.
{{% /alert %}}

Workflows can also wait for multiple external event signals of the same name, in which case they are dispatched to the corresponding workflow tasks in a first-in, first-out (FIFO) manner. If a workflow receives an external event signal but has not yet created a "wait for external event" task, the event is saved into the workflow's history and consumed immediately after the workflow requests the event.

## Next steps

- [Learn more about the Workflow API]({{< ref workflow-overview.md >}})
- [Workflow component spec]({{< ref temporal-io.md >}})
- [Workflow API reference]({{< ref workflow_api.md >}})
@@ -1,7 +0,0 @@
---
type: docs
title: "Workflow components"
linkTitle: "Workflow components"
description: "Guidance on working with the workflow components"
weight: 4700
---
@@ -1,152 +0,0 @@
---
type: docs
title: "Dapr workflow component overview"
linkTitle: "Dapr workflow component"
weight: 4400
description: "Overview of the Dapr workflow engine component"
---

# Overview

The Dapr workflow engine is a component that allows developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code that is deployed as part of your application. This article describes the architecture of the Dapr workflow engine, how it interacts with application code, and how it fits into the overall Dapr architecture.

{{% alert title="Note" color="primary" %}}
For information on how to author workflows that are executed by the Dapr workflow engine, see the [workflow application developer guide]({{<ref "workflow-overview.md" >}}).
{{% /alert %}}

The Dapr Workflow engine is internally implemented using the open source [durabletask-go](https://github.com/microsoft/durabletask-go) library, which is embedded directly into the Dapr sidecar. Dapr implements a custom durable task "backend" using internally managed actors, which handle workflow scale-out, persistence, and leader election. Subsequent sections go into more detail.

## Sidecar interactions

TODO: Describe the gRPC protocol used in the SDK/sidecar interactions. Make sure to also emphasize the responsibilities of the app vs. the responsibilities of the sidecar.

When a workflow application starts up, it uses a workflow authoring SDK to send a gRPC request to the Dapr sidecar and get back a stream of workflow work-items, following the [Server streaming RPC pattern](https://grpc.io/docs/what-is-grpc/core-concepts/#server-streaming-rpc). These work-items can be anything from "start a new X workflow" (where X is the type of a workflow) to "schedule activity Y with input Z to run on behalf of workflow X". The workflow app executes the appropriate workflow code and then sends a gRPC request back to the sidecar with the execution results.

<img src="/images/workflow-overview/workflow-engine-protocol.png" alt="Dapr Workflow Engine Protocol" />

All of these interactions happen over a single gRPC channel. All interactions are initiated by the application, which means the application doesn't need to open any inbound ports. The details of these interactions are handled internally by the language-specific Dapr Workflow authoring SDK.

### Differences between workflow and actor sidecar interactions

If you're familiar with Dapr actors, you may notice a few differences in how sidecar interactions work for workflows compared to actors.

* Actors can interact with the sidecar using either HTTP or gRPC. Workflows, however, only use gRPC. Furthermore, the workflow gRPC protocol is sufficiently complex that an SDK is effectively _required_ when implementing workflows.
* Actor operations are pushed to application code from the sidecar. This requires the application to listen on a particular _app port_. With workflows, however, operations are _pulled_ from the sidecar by the application using a streaming protocol. The application doesn't need to listen on any ports to run workflows.
* Actors explicitly register themselves with the sidecar. Workflows, however, do not register themselves with the sidecar. The embedded engine doesn't keep track of workflow types. This responsibility is instead delegated to the workflow application and its SDK.

## Workflow distributed tracing

The durabletask-go core used by the workflow engine writes distributed traces using Open Telemetry SDKs. These traces are captured automatically by the Dapr sidecar and exported to the configured Open Telemetry provider, such as Zipkin.

Each workflow instance managed by the engine is represented as one or more spans. There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers. Workflow activity code also has access to the trace context, allowing distributed trace context to flow to external services that are invoked by the workflow.

## Internal actors

There are two types of actors that are internally registered within the Dapr sidecar in support of the workflow engine: `dapr.internal.wfengine.workflow` and `dapr.internal.wfengine.activity`.

TODO: Diagram

Just like normal actors, internal actors are distributed across the cluster by the actor placement service. They also maintain their own state and make use of reminders. However, unlike actors that live in application code, these _internal_ actors are embedded into the Dapr sidecar. Application code is completely unaware that these actors exist.

The next sections go into more detail on each of these two actor types: the _workflow_ actor and the _activity_ actor.

### Workflow actors

A new instance of the `dapr.internal.wfengine.workflow` actor is activated for every workflow instance that gets created. The ID of the _workflow_ actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and, via the actor placement service, determines the node on which the workflow code executes.

Each workflow actor saves its state using the following keys in the configured state store:

* `inbox-NNNNNN`: A workflow's inbox is effectively a FIFO queue of _messages_ that drive a workflow's execution. Example messages include workflow creation messages, activity task completion messages, etc. Each message is stored in its own key in the state store with the name `inbox-NNNNNN`, where `NNNNNN` is a 6-digit number indicating the ordering of the messages. These state keys are removed once the corresponding messages are consumed by the workflow.
* `history-NNNNNN`: A workflow's history is an ordered list of events that represent a workflow's execution history. Each key in the history holds the data for a single history event. Like an append-only log, workflow history events are only added and never removed (except when a workflow performs a "continue as new" operation, which purges all history and restarts a workflow with a new input).
* `customStatus`: Contains a user-defined workflow status value. There is exactly one `customStatus` key for each workflow actor instance.
* `metadata`: Contains meta information about the workflow as a JSON blob, including details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written when loading or saving workflow state updates.

{{% alert title="Warning" color="primary" %}}
In the alpha release of the Dapr workflow engine, workflow actor state will remain in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of old workflow state.
{{% /alert %}}

The following diagram illustrates the typical lifecycle of a workflow actor.

<img src="/images/workflow-overview/workflow-actor-flowchart.png" alt="Dapr Workflow Actor Flowchart"/>

To summarize, a workflow actor is activated when it receives a new message. New messages trigger the associated workflow code (in your application) to run and return an execution result back to the workflow actor. Once the result is received, the actor schedules any tasks as necessary, updates its state in the state store, and then goes idle until it receives another message. During this idle time, the sidecar may decide to unload the workflow actor from memory.

### Activity actors

A new instance of the `dapr.internal.wfengine.activity` actor is activated for every activity task that gets scheduled by a workflow. The ID of the _activity_ actor is the ID of the workflow combined with a sequence number. For example, if a workflow has an ID of `876bf371` and schedules its third activity, that activity actor's ID will be `876bf371#2`, where `2` is the sequence number (sequence numbers start at 0).

Each activity actor stores a single key in the state store:

* `activityreq-N`: The key contains the activity invocation payload, which includes the serialized activity input data. The `N` value is a 64-bit unsigned integer that represents the _generation_ of the workflow, a concept which is outside the scope of this documentation.

{{% alert title="Warning" color="primary" %}}
In the alpha release of the Dapr workflow engine, activity actor state will remain in the state store even after the activity task has completed. Scheduling a large number of workflow activities could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of completed activity state.
{{% /alert %}}

Activity actors are short-lived. They are activated when a workflow actor schedules an activity task, and they immediately call into the workflow application to invoke the associated activity code. Once the activity code has finished running and has returned its result, the activity actor sends a message to the parent workflow actor with the execution results, triggering the workflow to move forward to its next step.

<img src="/images/workflow-overview/workflow-activity-actor-flowchart.png" alt="Workflow Activity Actor Flowchart"/>

### Reminder usage and execution guarantees

TODO: Describe how reminders are used, and what kinds of reminder pressure may be added to a system.

The Dapr workflow engine ensures workflow fault-tolerance by using actor reminders to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor creates a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder reactivates the corresponding actor and the execution is retried.

TODO: Diagrams showing the process of invoking workflow and activity actors

{{% alert title="Important" color="warning" %}}
Too many active reminders in a cluster may result in performance issues. If your application is already using actors and reminders heavily, be mindful of the additional load that Dapr workflows may add to your system.
{{% /alert %}}

### State store usage

Dapr workflows use actors internally to drive the execution of workflows. Like any actors, these internal workflow actors store their state in the configured state store. Any state store that supports actors implicitly supports Dapr workflow.

As discussed in the [workflow actors]({{<ref "workflow-engine.md#workflow-actors" >}}) section, workflows save their state incrementally by appending to a history log. The history log for a workflow is distributed across multiple state store keys so that each "checkpoint" only needs to append the newest entries.

The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state. Sequential workflows that take one action at a time therefore make smaller batch updates to the state store, whereas "fan-out" workflows that run many tasks in parallel require larger batches to be written. The size of the batch is also impacted by the size of inputs and outputs when workflows invoke activities or child-workflows.

TODO: Image illustrating a workflow appending a batch of keys to a state store.

Different state store implementations may implicitly put restrictions on the types of workflows you can author. For example, the Azure Cosmos DB state store limits item sizes to 2 MB of UTF-8 encoded JSON ([source](https://learn.microsoft.com/azure/cosmos-db/concepts-limits#per-item-limits)). The input or output payload of an activity or child-workflow is stored as a single record in the state store, so an item limit of 2 MB means that workflow and activity inputs and outputs can't exceed 2 MB of JSON-serialized data. Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.

## Workflow scalability

Because Dapr workflows are internally implemented using actors, Dapr workflows have the same scalability characteristics as actors. The placement service doesn't distinguish between workflow actors and the actors you define in your application, and it load balances workflows using the same algorithms that it uses for actors.

The expected scalability of a workflow is determined by the following factors:

* The number of machines used to host your workflow application
* The CPU and memory resources available on the machines running workflows
* The scalability of the state store configured for actors
* The scalability of the actor placement service and the reminder subsystem

The implementation details of the workflow code in the target application also play a role in the scalability of individual workflow instances. Each workflow instance executes on a single node at a time, but a workflow can schedule activities and child-workflows which run on other nodes. Workflows can also schedule these activities and child-workflows to run in parallel, allowing a single workflow to potentially distribute compute tasks across all available nodes in the cluster.

TODO: Diagram showing an example distribution of workflows, child-workflows, and activity tasks.

{{% alert title="Important" color="warning" %}}
At the time of writing, there are no global limits imposed on workflow and activity concurrency. A runaway workflow could therefore potentially consume all resources in a cluster if it attempts to schedule too many tasks in parallel. Developers should use care when authoring Dapr workflows that schedule large batches of work in parallel.

It's also worth noting that the Dapr workflow engine requires that all instances of each workflow app register the exact same set of workflows and activities. In other words, it's not possible to scale certain workflows or activities independently. All workflows and activities within an app must be scaled together.
{{% /alert %}}

Workflows don't control the specifics of how load is distributed across the cluster. For example, if a workflow schedules 10 activity tasks to run in parallel, all 10 tasks may run on as many as 10 different compute nodes or as few as a single compute node. The actual scale behavior is determined by the actor placement service, which manages the distribution of the actors that represent each of the workflow's tasks.

## Workflow latency

In order to provide guarantees around durability and resiliency, Dapr workflows frequently write to the state store and rely on reminders to drive execution. Dapr workflows therefore may not be appropriate for latency-sensitive workloads. Expected sources of high latency include:

* Latency from the state store when persisting workflow state.
* Latency from the state store when rehydrating workflows with large histories.
* Latency caused by too many active reminders in the cluster.
* Latency caused by high CPU usage in the cluster.

See the [Reminder usage and execution guarantees]({{<ref "workflow-engine.md#reminder-usage-and-execution-guarantees" >}}) section for more details on how the design of workflow actors may impact execution latency.

## Next steps

Learn more about the other workflow components:
- [Temporal.io]({{< ref temporal-io.md >}})
@@ -31,24 +31,24 @@ spec:

### POST start workflow request
```bash
-POST http://localhost:3500/v1.0-alpha1/workflows/{workflowComponent}/{workflowType}/{instanceId}/start
+POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>/start
```
### POST terminate workflow request
```bash
-POST http://localhost:3500/v1.0-alpha1/workflows/{workflowComponent}/{instanceId}/terminate
+POST http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<instanceId>/terminate
```
### GET workflow request
```bash
-GET http://localhost:3500/v1.0-alpha1/workflows/{workflowComponent}/{workflowType}/{instanceId}
+GET http://localhost:3500/v1.0-alpha1/workflows/<workflowComponentName>/<workflowName>/<instanceId>
```

### URL parameters

Parameter | Description
--------- | -----------
-`workflowType` | Identify the workflow type
+`workflowComponentName` | One of the [supported workflow components][]
+`workflowName` | Identify the workflow type
`instanceId` | Unique value created for each run of a specific workflow
-`workflowComponent` | One of the [supported workflow components][]

### Headers
@@ -1,12 +0,0 @@
---
type: docs
title: "Workflows component specs"
linkTitle: "Workflows"
weight: 7000
description: The supported workflows that interface with Dapr
no_list: true
---

{{< partial "components/description.html" >}}

{{< partial "components/workflows.html" >}}
@@ -1,55 +0,0 @@
---
type: docs
title: "Temporal.io"
linkTitle: "Temporal.io"
description: Detailed information on the Temporal.io workflow component
---

## Component format

To set up the Temporal.io workflow, create a component of type `todo`. See [this guide](todo) on how to create a workflow.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: temporal
spec:
  type: workflow.temporal
  version: v1
  metadata:
  - name: hostport
    value: <HOST>
  - name: identity
    value: "WF Identity"
```

## Spec metadata fields

| Field | Required | Details | Example |
|----------|:--------:|---------------------------------------------|------------------|
| hostport | Y | Connection string for the Temporal.io host | `localhost:6379` |
| identity | Y | Unique ID of the workflow | `"WF Identity"` |

## Setup Temporal.io

{{< tabs "Self-Hosted" "Kubernetes" >}}

{{% codetab %}}

todo

{{% /codetab %}}

{{% codetab %}}

todo

{{% /codetab %}}

{{< /tabs >}}

## Related links
- [Basic schema for a Dapr component]({{< ref component-schema >}})
- [Workflow building block]({{< ref workflow-overview >}})
@@ -1,5 +0,0 @@
- component: Temporal.io
  link: temporal-io
  state: Alpha
  version: v1
  since: "1.10"
@@ -1,28 +0,0 @@
{{- $groups := dict
  " Generic" $.Site.Data.components.workflows.generic
}}

{{ range $group, $components := $groups }}
<h3>{{ $group }}</h3>
<table width="100%">
  <tr>
    <th>Component</th>
    <th>Status</th>
    <th>Component version</th>
    <th>Since runtime version</th>
  </tr>
  {{ range sort $components "component" }}
  <tr>
    <td><a href="/reference/components-reference/supported-workflows/{{ .link }}/">{{ .component }}</a></td>
    <td>{{ .state }}</td>
    <td>{{ .version }}</td>
    <td>{{ .since }}</td>
  </tr>
  {{ end }}
</table>
{{ end }}

{{ partial "components/componenttoc.html" . }}