weight: 2000
description: "Learn more about the Dapr Workflow features and concepts"
---

Now that you've learned about the [workflow building block]({{< ref workflow-overview.md >}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflows expose several core features and concepts which are common across all supported languages.

## Workflows

Dapr Workflows are functions you write that define a series of steps or tasks to be executed in a particular order. The Dapr Workflow engine takes care of coordinating and managing the execution of the steps, including managing failures and retries. If the app hosting your workflows is scaled out across multiple machines, the workflow engine may also load balance the execution of workflows and their tasks across multiple machines.

There are several different kinds of tasks that a workflow can schedule, including:

- [Activities]({{< ref "workflow-features-concepts.md#workflow-activities" >}}) for executing custom logic
- [Durable timers]({{< ref "workflow-features-concepts.md#durable-timers" >}}) for putting the workflow to sleep for arbitrary lengths of time
- [Child workflows]({{< ref "workflow-features-concepts.md#child-workflows" >}}) for breaking larger workflows into smaller pieces
- [External event waiters]({{< ref "workflow-features-concepts.md#external-events" >}}) for blocking workflows until they receive external event signals

These tasks are described in more detail in their corresponding sections.
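
As a rough illustration (not part of the official samples), the following sketch uses the Python SDK (`dapr-ext-workflow`) to show a workflow scheduling each kind of task. The workflow and activity names, such as `order_workflow` and `process_order`, are hypothetical.

```python
from datetime import timedelta

import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()


@wfr.activity(name="process_order")
def process_order(ctx: wf.WorkflowActivityContext, order_id: str) -> str:
    # Activities run custom logic, such as network calls or CPU-intensive work.
    return f"processed {order_id}"


@wfr.workflow(name="shipping_workflow")
def shipping_workflow(ctx: wf.DaprWorkflowContext, order_id: str):
    # A child workflow is just another workflow that a parent workflow can call.
    result = yield ctx.call_activity(process_order, input=order_id)
    return result


@wfr.workflow(name="order_workflow")
def order_workflow(ctx: wf.DaprWorkflowContext, order_id: str):
    # Activity task: executes custom logic.
    result = yield ctx.call_activity(process_order, input=order_id)

    # Durable timer: puts the workflow to sleep without keeping it in memory.
    yield ctx.create_timer(ctx.current_utc_datetime + timedelta(hours=1))

    # External event waiter: blocks until an "approval" event is raised.
    approval = yield ctx.wait_for_external_event("approval")

    # Child workflow: breaks a larger workflow into smaller pieces.
    yield ctx.call_child_workflow(shipping_workflow, input=order_id)

    return {"result": result, "approved": approval}
```

The "approval" event for a given instance would be raised from application code through the workflow management client (for example, `DaprWorkflowClient.raise_workflow_event` in the Python SDK) or through the workflow API.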

### Workflow identity

Each workflow you define has a type name, and individual executions of a workflow have a unique _instance ID_. Workflow instance IDs can be generated by your app code, which is useful when workflows correspond to business entities like documents or jobs, or they can be auto-generated UUIDs. A workflow's instance ID is useful for debugging and also for managing workflows using the [Workflow APIs]({{< ref workflow_api.md >}}).

Only one workflow instance with a given ID can exist at any given time. However, if a workflow instance completes or fails, its ID can be reused by a new workflow instance. Note, however, that the new workflow instance effectively replaces the old one in the configured state store.
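
The following minimal sketch (hypothetical names; it assumes a running Dapr sidecar and a started `WorkflowRuntime`) schedules a workflow with an app-supplied instance ID that matches a business entity, then uses the same ID to look the instance up later.

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()


@wfr.workflow(name="order_workflow")
def order_workflow(ctx: wf.DaprWorkflowContext, order_id: str):
    yield ctx.wait_for_external_event("approval")
    return f"completed {order_id}"


client = wf.DaprWorkflowClient()

# Use a business identifier (an order number) as the workflow instance ID so the
# workflow instance maps one-to-one to the order it processes.
instance_id = client.schedule_new_workflow(
    workflow=order_workflow,
    input="order-4567",
    instance_id="order-4567",
)

# The same instance ID is later used to inspect or manage the workflow instance.
state = client.get_workflow_state(instance_id)
if state:
    print(state.runtime_status)
```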

### Workflow replay

Dapr Workflows maintain their execution state by using a technique known as event sourcing.

{{% alert title="Note" color="primary" %}}
For more information on how workflow state is managed, see the [workflow architecture guide]({{< ref workflow-architecture.md >}}).
{{% /alert %}}

When a workflow "awaits" a scheduled task, it may unload itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second execution of the workflow function is known as a _replay_. When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that it already scheduled, instead of scheduling that task again, the workflow engine returns the result of the scheduled task to the workflow and continues execution until the next "await" point. This "replay" behavior continues until the workflow function completes or fails with an error.

Using this replay technique, a workflow is able to resume execution from any "await" point as if it had never been unloaded from memory. Even the values of local variables from previous runs can be restored without the workflow engine knowing anything about what data they stored. This ability to restore state is what makes Dapr Workflows _durable_ and fault tolerant.
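
To make the replay behavior concrete, here is a minimal, hypothetical sketch using the Python SDK: side effects such as logging are guarded with the context's `is_replaying` flag so they only happen on the original execution, while previously completed tasks return their recorded results during replay.

```python
import logging

import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()
logger = logging.getLogger(__name__)


@wfr.activity(name="say_hello")
def say_hello(ctx: wf.WorkflowActivityContext, name: str) -> str:
    return f"Hello, {name}!"


@wfr.workflow(name="replay_aware_workflow")
def replay_aware_workflow(ctx: wf.DaprWorkflowContext, name: str):
    if not ctx.is_replaying:
        # Runs only on the original execution of this code path, not on replays.
        logger.info("Starting workflow %s", ctx.instance_id)

    # During a replay, this result is read back from the workflow's history
    # instead of the activity being executed again.
    greeting = yield ctx.call_activity(say_hello, input=name)
    return greeting
```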

### Workflow determinism and code constraints

Because workflows are long-running and durable, updating workflow code must be done carefully.

We'll mention a couple of examples of code updates that can break workflow determinism:

* **Changing workflow function signatures**: Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.
* **Changing the number or order of workflow tasks**: Changing the number or order of workflow tasks causes a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.

To work around these constraints, instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates. Upstream code that creates workflows should also be updated to only create instances of the new workflows. Leaving the old code around ensures that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, the old workflow code can be safely deleted.
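
As a rough sketch of this versioning pattern (all workflow and activity names here are hypothetical), the original definition is left untouched and the updated logic is registered under a new name, which upstream code then starts going forward:

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()


@wfr.activity(name="reserve_inventory")
def reserve_inventory(ctx: wf.WorkflowActivityContext, order: dict) -> bool:
    return True


@wfr.activity(name="charge_payment")
def charge_payment(ctx: wf.WorkflowActivityContext, order: dict) -> bool:
    return True


@wfr.activity(name="check_fraud")
def check_fraud(ctx: wf.WorkflowActivityContext, order: dict) -> bool:
    return True


@wfr.workflow(name="order_workflow")
def order_workflow(ctx: wf.DaprWorkflowContext, order: dict):
    # Original logic stays as-is so in-flight instances keep replaying correctly.
    yield ctx.call_activity(reserve_inventory, input=order)
    yield ctx.call_activity(charge_payment, input=order)


@wfr.workflow(name="order_workflow_v2")
def order_workflow_v2(ctx: wf.DaprWorkflowContext, order: dict):
    # Updated logic (an extra fraud check) lives in a new definition; code that
    # schedules workflows is switched to start "order_workflow_v2" instead.
    yield ctx.call_activity(check_fraud, input=order)
    yield ctx.call_activity(reserve_inventory, input=order)
    yield ctx.call_activity(charge_payment, input=order)
```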

## Workflow activities

Workflow activities are the basic unit of work in a workflow and are the tasks that get orchestrated in the business process.

Unlike workflows, activities aren't restricted in the type of work you can do in them. Activities are frequently used to make network calls or run CPU-intensive operations. An activity can also return data back to the workflow.

The Dapr Workflow engine guarantees that each called activity is executed **at least once** as part of a workflow's execution. Because activities only guarantee at-least-once execution, it's recommended that activity logic be implemented as idempotent whenever possible.
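
For example, a payment activity might use a business ID as an idempotency key so that a retried execution does not repeat the side effect. The following is a minimal, self-contained sketch with an in-memory dictionary standing in for an external payment system; the names are hypothetical.

```python
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

# Hypothetical stand-in for an external payment system, keyed by an idempotency key.
# A real implementation would use a durable external store instead.
_charges: dict = {}


@wfr.activity(name="charge_payment")
def charge_payment(ctx: wf.WorkflowActivityContext, order: dict) -> str:
    key = order["id"]  # use the business ID as the idempotency key
    if key in _charges:
        # A retried execution finds the existing charge instead of charging twice.
        return _charges[key]
    _charges[key] = f"charge-{key}"
    return _charges[key]
```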

## Child workflows

Child workflows have many benefits:

* You can distribute workflow logic across multiple compute nodes concurrently, which is useful if your workflow logic otherwise needs to coordinate a lot of tasks.
* You can reduce memory usage and CPU overhead by keeping the history of the parent workflow smaller.

The return value of a child workflow is its output. If a child workflow fails with an exception, then that exception is surfaced to the parent workflow, just like it is when an activity task fails with an exception. Child workflows also support automatic retry policies.
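
The following is a minimal, hypothetical sketch of a parent calling a child workflow with a retry policy and handling a failure surfaced by the child, assuming the Python SDK's `RetryPolicy` and its `retry_policy` parameter on child workflow calls:

```python
from datetime import timedelta

import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

retry_policy = wf.RetryPolicy(
    first_retry_interval=timedelta(seconds=1),
    max_number_of_attempts=3,
    backoff_coefficient=2.0,
)


@wfr.workflow(name="shipping_workflow")
def shipping_workflow(ctx: wf.DaprWorkflowContext, order_id: str):
    # Simulate some durable work in the child workflow.
    yield ctx.create_timer(ctx.current_utc_datetime + timedelta(seconds=30))
    return f"shipped {order_id}"


@wfr.workflow(name="order_workflow")
def order_workflow(ctx: wf.DaprWorkflowContext, order_id: str):
    try:
        # The child workflow's return value becomes the task's result in the parent.
        result = yield ctx.call_child_workflow(
            shipping_workflow, input=order_id, retry_policy=retry_policy
        )
    except Exception:
        # A failure in the child surfaces here, just like a failed activity task.
        result = "shipping failed"
    return result
```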

{{% alert title="Note" color="primary" %}}
Because child workflows are independent of their parents, terminating a parent workflow does not affect any child workflows. You must terminate each child workflow independently using its instance ID.
{{% /alert %}}
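
As a small illustration of the note above (the instance IDs here are hypothetical), terminating a parent and its children means issuing a terminate call for each instance ID:

```python
import dapr.ext.workflow as wf

client = wf.DaprWorkflowClient()

parent_id = "order-4567"
child_ids = ["order-4567-shipping", "order-4567-billing"]  # hypothetical child instance IDs

# Terminating the parent does not cascade to its children.
client.terminate_workflow(parent_id)

# Each child workflow is terminated separately by its own instance ID.
for child_id in child_ids:
    client.terminate_workflow(child_id)
```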