mirror of https://github.com/dapr/docs.git
move things around per nyemade
Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>
This commit is contained in:
parent
2fd1a6a85f
commit
3a70766a0c
@@ -20,12 +20,13 @@ The following are the building blocks provided by Dapr:

| Building Block | Endpoint | Description |
|----------------|----------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of HTTP or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling. |
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides key/value-based state and query APIs with pluggable state stores for persistence. |
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe` | Pub/sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications. |
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service. |
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern, which provides a single-threaded programming model and garbage-collects actors when they are not in use. |
| [**Observability**]({{< ref "observability-concept.md" >}}) | `N/A` | Dapr system components and runtime emit metrics, logs, and traces to debug, operate, and monitor Dapr system services, components, and user applications. |
| [**Secrets**]({{< ref "secrets-overview.md" >}}) | `/v1.0/secrets` | Dapr provides a secrets building block API and integrates with secret stores such as public cloud stores, local stores, and Kubernetes to store the secrets. Services can call the secrets API to retrieve secrets, for example to get a connection string to a database. |
| [**Configuration**]({{< ref "configuration-api-overview.md" >}}) | `/v1.0-alpha1/configuration` | The configuration API enables you to retrieve and subscribe to application configuration items from supported configuration stores. This enables an application to retrieve specific configuration information, for example, at start up or when configuration changes are made in the store. |
| [**Distributed lock**]({{< ref "distributed-lock-api-overview.md" >}}) | `/v1.0-alpha1/lock` | The distributed lock API enables you to take a lock on a resource so that multiple instances of an application can access the resource without conflicts and with consistency guarantees. |
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0-alpha1/workflow` | The workflow API enables you to define processes or data flows that span multiple microservices via an embedded workflow engine. With this built-in workflow engine, you can easily integrate with existing Dapr building blocks while maintaining portability. |
@@ -11,11 +11,11 @@ Dapr uses a modular design where functionality is delivered as a component. Each

You can contribute implementations and extend the capabilities of Dapr's component interfaces via:

- The [components-contrib repository](https://github.com/dapr/components-contrib)
- [Pluggable components]({{< ref "components-concept.md#pluggable-components" >}}).

A building block can use any combination of components. For example, the [actors]({{< ref "actors-overview.md" >}}) and the [state management]({{< ref "state-management-overview.md" >}}) building blocks both use [state components](https://github.com/dapr/components-contrib/tree/master/state).

As another example, the [pub/sub]({{< ref "pubsub-overview.md" >}}) building block uses [pub/sub components](https://github.com/dapr/components-contrib/tree/master/pubsub).

You can get a list of the components currently available in the hosting environment using the `dapr components` CLI command.
@@ -26,9 +26,9 @@ Each component has a specification (or spec) that it conforms to. Components are

- A `components/local` folder within your solution, or
- Globally in the `.dapr` folder created when invoking `dapr init`.

These YAML files adhere to the generic [Dapr component schema]({{< ref "component-schema.md" >}}), but each is specific to the component specification.

It is important to understand that the component spec values, particularly the spec `metadata`, can change between components of the same component type; for example, between different state stores. Some design-time spec values can also be overridden at runtime when making requests to a component's API. As a result, it is strongly recommended to review a [component's specs]({{< ref "components-reference" >}}), paying particular attention to the sample payloads for requests that set the metadata used to interact with the component.
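
As a concrete sketch of such a spec, a Redis-backed state store component might be declared like this (the `name` value `statestore` is arbitrary, and the `metadata` fields shown are specific to the `state.redis` component type; other component types define their own metadata):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```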

The diagram below shows some examples of the components for each component type.

<img src="/images/concepts-components.png" width=1200>
@@ -46,7 +46,7 @@ For example:

- Your component may be specific to your company or pose IP concerns, so it cannot be included in the Dapr component repo.
- You want to decouple your component updates from the Dapr release cycle.

For more information, read the [Pluggable components overview]({{< ref "pluggable-components-overview" >}}).

## Available component types
@@ -61,7 +61,7 @@ State store components are data stores (databases, files, memory) that store key

### Name resolution

Name resolution components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block to integrate with the hosting environment and provide service-to-service discovery. For example, the Kubernetes name resolution component integrates with the Kubernetes DNS service, self-hosted uses mDNS, and clusters of VMs can use the Consul name resolution component.

- [List of name resolution components]({{< ref supported-name-resolution >}})
- [Name resolution implementations](https://github.com/dapr/components-contrib/tree/master/nameresolution)
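
Unlike most component types, the name resolution component is selected in the Dapr Configuration resource rather than in a standalone component file. A minimal sketch, assuming a cluster of VMs using Consul (any Consul-specific settings would go in an additional `configuration` section):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  nameResolution:
    component: "consul"
```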
@@ -82,7 +82,7 @@ External resources can connect to Dapr in order to trigger a method on an applic

### Secret stores

A [secret]({{< ref "secrets-overview.md" >}}) is any piece of private information that you want to guard against unwanted access. Secret stores are used to store secrets that can be retrieved and used in applications.

- [List of supported secret stores]({{< ref supported-secret-stores >}})
- [Secret store implementations](https://github.com/dapr/components-contrib/tree/master/secretstores)
@@ -101,9 +101,16 @@ Lock components are used as a distributed lock to provide mutually exclusive acc

- [List of supported locks]({{< ref supported-locks >}})
- [Lock implementations](https://github.com/dapr/components-contrib/tree/master/lock)

### Workflows

A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a business process or data flow in a reliable way across multiple microservices. The workflow API is exposed by a [lightweight, embedded workflow engine]({{< ref "operations/components/workflow-engine/workflow-engine.md" >}}) in the Dapr sidecar, allowing you to easily integrate with existing Dapr building blocks.

- [List of supported workflows]({{< ref supported-workflows >}})
- Workflow implementations

### Middleware

Dapr allows custom [middleware]({{< ref "middleware.md" >}}) to be plugged into the HTTP request processing pipeline. Middleware can perform additional actions on an HTTP request (such as authentication, encryption, and message transformation) before the request is routed to the user code, or before the response is returned to the client. The middleware components are used with the [service invocation]({{< ref "service-invocation-overview.md" >}}) building block.

- [List of supported middleware components]({{< ref supported-middleware >}})
- [Middleware implementations](https://github.com/dapr/components-contrib/tree/master/middleware)
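
Middleware handlers are declared in the `httpPipeline` section of the Dapr Configuration resource, where each handler names a middleware component defined elsewhere. A minimal sketch (the `uppercase` middleware is a sample from components-contrib that uppercases the request body):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
    - name: uppercase
      type: middleware.http.uppercase
```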
@@ -33,24 +33,7 @@ A workflow's utility for microservices makes it a great fit for Dapr’s mission

| An efficient, more manageable single sidecar for all microservices | Separate sidecars (or services) for workflows. |
| Portable workflows keep Dapr portable. | Lessened portability. |

With a [lightweight, embedded workflow engine](#embedded-workflow-engine), you can create orchestration on top of existing Dapr building blocks in a portable and adoptable way.

<!--
Include a diagram or image, if possible.
@@ -71,59 +54,6 @@ The workflow API brings several core features executed by the Dapr sidecar:

These capabilities are enabled by the sidecar-embedded DTFx-go engine and its Dapr-specific configuration.

### Workflow as code

"Workflow as code" refers to the developer-friendly implementation of a workflow’s logic using general-purpose programming languages, allowing you to:
@@ -178,4 +108,4 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi

## Next steps

- [Learn how to set up a workflow]({{< ref howto-workflow.md >}})
- [Check out some workflow code examples]({{< ref workflow-scenarios.md >}})
- [Supported workflows]({{< ref supported-workflows.md >}})
@@ -1,179 +0,0 @@
---
type: docs
title: "Workflow scenarios"
linkTitle: "Workflow scenarios"
weight: 4000
description: Review examples of implementing the workflow API in your project
---

See how Dapr's workflow API works in three real-world scenarios.

## Example 1: Bank transaction

In this example, the workflow is implemented as a JavaScript generator function.

The `"bank1"` and `"bank2"` parameters are microservice apps that use Dapr, each of which exposes `"withdraw"` and `"deposit"` APIs.

The Dapr APIs available to the workflow come from the context parameter object and return a "task", which is effectively the same as a promise. Calling yield on the task causes the workflow to:

- Durably checkpoint its progress
- Wait until Dapr responds with the output of the service method

The value of the task is the service invocation result. If any service method call fails with an error, the error is surfaced as a raised JavaScript error that can be caught using normal try/catch syntax. This code can also be debugged using a Node.js debugger.
```js
import { DaprWorkflowClient, DaprWorkflowContext, HttpMethod } from "dapr-client";

const daprHost = process.env.DAPR_HOST || "127.0.0.1"; // Dapr sidecar host
const daprPort = process.env.DAPR_WF_PORT || "50001";  // Dapr sidecar port for workflow

const workflowClient = new DaprWorkflowClient(daprHost, daprPort);

// Funds transfer workflow, which receives a context object from Dapr and an input
workflowClient.addWorkflow('transfer-funds-workflow', function*(context: DaprWorkflowContext, op: any) {
    // Use built-in methods for generating pseudo-random data in a workflow-safe way
    const transactionId = context.createV5uuid();

    // Try to withdraw funds from the source account
    const success = yield context.invoker.invoke("bank1", "withdraw", HttpMethod.POST, {
        srcAccount: op.srcAccount,
        amount: op.amount,
        transactionId
    });

    if (!success.success) {
        return "Insufficient funds";
    }

    try {
        // Attempt to deposit into the destination account, which is part of a separate microservice app
        yield context.invoker.invoke("bank2", "deposit", HttpMethod.POST, {
            destAccount: op.destAccount,
            amount: op.amount,
            transactionId
        });
        return "success";
    } catch {
        // Compensate for failures by returning the funds to the original account
        yield context.invoker.invoke("bank1", "deposit", HttpMethod.POST, {
            destAccount: op.srcAccount,
            amount: op.amount,
            transactionId
        });
        return "failure";
    }
});

// Call start() to start processing workflow events
workflowClient.start();
```

{{% alert title="Note" color="primary" %}}
The details around how code is written will vary depending on the language. For example, a C# SDK would allow developers to use async/await instead of yield. Regardless, the core capabilities will be the same across all languages.
{{% /alert %}}
## Example 2: Phone verification

This example demonstrates building an SMS phone verification workflow. The workflow:

- Receives a user’s phone number
- Creates a challenge code
- Delivers the challenge code to the user’s SMS number
- Waits for the user to respond with the correct challenge code

The important takeaway in this example is that the end-to-end workflow can be represented as a single, easy-to-understand function. Rather than relying on actors to hold state explicitly, state (such as the challenge code) can simply be stored in local variables, drastically reducing the overall code complexity and making the solution easily unit-testable.

```js
import { DaprWorkflowClient, DaprWorkflowContext, HttpMethod } from "dapr-client";

const daprHost = process.env.DAPR_HOST || "127.0.0.1"; // Dapr sidecar host
const daprPort = process.env.DAPR_WF_PORT || "50001";  // Dapr sidecar port for workflow
const workflowClient = new DaprWorkflowClient(daprHost, daprPort);

// Phone number verification workflow, which receives a context object from Dapr and an input
workflowClient.addWorkflow('phone-verification', function*(context: DaprWorkflowContext, phoneNumber: string) {
    // Create a challenge code and send a notification to the user's phone
    const challengeCode = yield context.invoker.invoke("authService", "createSmsChallenge", HttpMethod.POST, {
        phoneNumber
    });

    // Schedule a durable timer for some future date (e.g. 5 minutes or perhaps even 24 hours from now)
    const expirationTimer = context.createTimer(challengeCode.expiration);

    // The user gets three tries to respond with the right challenge code
    let authenticated = false;

    for (let i = 0; i < 3; i++) {
        // Subscribe to the event representing the user challenge response
        const responseTask = context.pubsub.subscribeOnce("my-pubsub-component", "sms-challenge-topic");

        // Block the workflow until either the timeout expires or we get a response event
        const winner = yield context.whenAny([expirationTimer, responseTask]);

        if (winner === expirationTimer) {
            break; // Timeout expired
        }

        // We get a pub/sub event with the user's SMS challenge response
        if (responseTask.result.data.challengeNumber === challengeCode.number) {
            authenticated = true; // Challenge verified!
            expirationTimer.cancel();
            break;
        }
    }

    // The return value is available as part of the workflow status. Alternatively, we could send a notification.
    return authenticated;
});

// Call listen() to start processing workflow events
workflowClient.listen();
```
## Example 3: Declarative workflow for monitoring patient vitals

The following is an example of a very simple Serverless Workflow (SLWF) definition that:

- Listens on three different event types
- Invokes a function depending on which event was received

```json
{
    "id": "monitorPatientVitalsWorkflow",
    "version": "1.0",
    "name": "Monitor Patient Vitals Workflow",
    "states": [
        {
            "name": "Monitor Vitals",
            "type": "event",
            "onEvents": [
                {
                    "eventRefs": [
                        "High Body Temp Event",
                        "High Blood Pressure Event"
                    ],
                    "actions": [{"functionRef": "Invoke Dispatch Nurse Function"}]
                },
                {
                    "eventRefs": ["High Respiration Rate Event"],
                    "actions": [{"functionRef": "Invoke Dispatch Pulmonologist Function"}]
                }
            ],
            "end": true
        }
    ],
    "functions": "file://my/services/asyncapipatientservicedefs.json",
    "events": "file://my/events/patientcloudeventsdefs.yml"
}
```

The functions defined in this workflow map to Dapr service invocation calls. Similarly, the events map to incoming Dapr pub/sub events.

Behind the scenes, the runtime (which is built using the Dapr SDK APIs) handles the communication with the Dapr sidecar, which, in turn, manages the checkpointing of state and recovery semantics for the workflows.

## Next steps

- [Learn how to implement the workflow API in your own project]({{< ref howto-workflow.md >}})
@@ -0,0 +1,7 @@
---
type: docs
title: "Workflow components"
linkTitle: "Workflow components"
description: "Guidance on working with the workflow components"
weight: 4700
---
@ -0,0 +1,82 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Built-in workflow component overview"
|
||||
linkTitle: "Built-in workflow component"
|
||||
weight: 4400
|
||||
description: "Overview of the built-in workflow engine (DTFx-go) component"
|
||||
---
|
||||
|
||||
The workflow building block consists of:
|
||||
|
||||
- A pluggable component model for integrating various workflow engines
|
||||
- A set of APIs for managing workflows (start, schedule, pause, resume, cancel)
|
||||
|
||||
Workflows supported by your platforms can be exposed as APIs with support for both HTTP and the Dapr SDKs, including:
|
||||
|
||||
- mTLS, distributed tracing, etc.
|
||||
- Various abstractions, such as async HTTP polling
|
||||
|
||||
Behind the scenes, the `DaprWorkflowClient` SDK object handles all the interactions with the Dapr sidecar, including:
|
||||
|
||||
- Responding to invocation requests from the Dapr sidecar.
|
||||
- Sending the necessary commands to the Dapr sidecar as the workflow progresses.
|
||||
- Checkpointing the progress so that the workflow can be resumed after any infrastructure failures.
|
||||
|
||||
## DTFx-go workflow engine
|
||||
|
||||
The workflow engine is written in Go and inspired by the existing Durable Task Framework (DTFx) engine. DTFx-go exists as an open-source project with a permissive (like Apache 2.0) license, maintaing compatibility as a dependency for CNCF projects.
|
||||
|
||||
DTFx-go is not exposed to the application layer. Rather, the Dapr sidecar:
|
||||
|
||||
- Exposes DTFx-go functionality over a gRPC stream
|
||||
- Sends and receives workflow commands over gRPC to and from a connected app’s workflow logic
|
||||
- Executes commands on behalf of the workflow (service invocation, invoking bindings, etc.)
|
||||
|
||||
Meanwhile, app containers:
|
||||
|
||||
- Execute and/or host any app-specific workflow logic, or
|
||||
- Load any declarative workflow documents.
|
||||
|
||||
Other concerns such as activation, scale-out, and state persistence are handled by internally managed actors.
|
||||
|
||||
### Executing, scheduling, and resilience
|
||||
|
||||
Dapr workflow instances are implemented as actors. Actors drive workflow execution by communicating with the workflow SDK over a gRPC stream. Using actors solves the problem of placement and scalability.
|
||||
|
||||
<img src="/images/workflow-overview/workflow-execution.png" width=1000 alt="Diagram showing scalable execution of workflows using actors">
|
||||
|
||||
The execution of individual workflows is triggered using actor reminders, which are both persistent and durable. If a container or node crashes during a workflow execution, the actor reminder ensures reactivates and resumes where it left off, using state storage to provide durability.
|
||||
|
||||
To prevent a workflow from unintentional blocking, each workflow is composed of two separate actor components. In the diagram below, the Dapr sidecar has:
|
||||
|
||||
1. One actor component acting as the scheduler/coordinator (WF scheduler actor)
|
||||
1. Another actor component performing the actual work (WF worker actor)
|
||||
|
||||
<img src="/images/workflow-overview/workflow-execution-2.png" width=1000 alt="Diagram showing zoomed in view of actor components working for workwflow">
|
||||
|
||||
### Storage of state and durability
|
||||
|
||||
For workflow execution to complete reliably in the face of transient errors, it must be durable, meaning data is stored at checkpoints as the workflow progresses. To achieve this, workflow executions rely on Dapr's state storage to provide stable storage. This allows the workflow to be safely resumed from a known state in the event that:
|
||||
- The workflow is explicitly paused, or
|
||||
- A step is prematurely terminated (system failure, lack of resources, etc.).
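The checkpoint-and-resume behavior can be sketched as follows, assuming a simple key/value store in place of Dapr's state storage (the instance ID and step names are made up for illustration):

```python
# Minimal sketch of checkpoint-based durability. A dict stands in for
# Dapr's state store; the workflow records its progress after every
# step so a restart resumes from the last checkpoint.
state_store = {}

def run_workflow(instance_id, steps, fail_after=None):
    start = state_store.get(instance_id, 0)      # resume from checkpoint
    for i in range(start, len(steps)):
        if fail_after is not None and i >= fail_after:
            raise RuntimeError("simulated crash")
        steps[i]()                               # execute the step
        state_store[instance_id] = i + 1         # checkpoint progress

log = []
steps = [lambda: log.append("reserve"), lambda: log.append("charge")]
try:
    run_workflow("wf-1", steps, fail_after=1)    # crash after step 0
except RuntimeError:
    pass
run_workflow("wf-1", steps)                      # resumes at step 1
print(log)                                       # ['reserve', 'charge']
```

Note that the "reserve" step runs exactly once even though the workflow is started twice; the checkpoint is what makes the retry safe.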
|
||||
|
||||
### Automatic failure handling
|
||||
|
||||
Every time the workflow logic reaches a yield statement, control returns to the SDK to commit state changes and schedule work. If the process hosting the workflow goes down for any reason, it resumes from the last yield once the process comes back up.
|
||||
|
||||
DTFx-go, running in the Dapr sidecar, enables this by:
|
||||
|
||||
- Re-executing the workflow function from the beginning
|
||||
- Providing the context object with historical data about:
|
||||
- Which tasks have already completed
|
||||
- What their return values were
|
||||
|
||||
This allows any previously executed `context.invoker.invoke` calls to return immediately with a return value, instead of invoking the service method a second time.
|
||||
|
||||
This results in durable and stateful workflows – even the state of local variables is effectively preserved because they can be recreated via replays.
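The replay mechanism can be sketched as a context object that carries the history of completed task results. On restart the workflow function re-executes from the beginning, but any call whose result is already in the history returns immediately instead of re-invoking the service. The `ReplayContext` class and service names are illustrative, not the actual SDK types:

```python
# Hedged sketch of replay-based recovery. Completed task results are
# kept in an ordered history; on re-execution those calls return the
# recorded result instead of invoking the service again.
class ReplayContext:
    def __init__(self, history):
        self.history = history      # results of completed tasks, in order
        self.cursor = 0
        self.calls = 0              # real invocations performed this run

    def invoke(self, service, method):
        if self.cursor < len(self.history):
            result = self.history[self.cursor]   # replay: no real call
        else:
            self.calls += 1                      # first execution: do work
            result = f"{service}.{method}:ok"
            self.history.append(result)
        self.cursor += 1
        return result

def workflow(ctx):
    a = ctx.invoke("inventory", "reserve")
    b = ctx.invoke("payment", "charge")
    return [a, b]

history = []
first = ReplayContext(history)
workflow(first)                     # both calls actually run
second = ReplayContext(history)     # simulate a restart with saved history
out = workflow(second)              # replays; performs no new invocations
print(second.calls, out)
```

This is also why local variables are effectively preserved: re-running the function against the same history deterministically recomputes them.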
|
||||
|
||||
## Next steps
|
||||
|
||||
Learn more about the other workflow components:
|
||||
- [Temporal.io]
|
||||
- [Azure Logic Apps]
|
|
@ -102,7 +102,7 @@ If a payload is included in the POST request, it will be saved as the output of
|
|||
|
||||
## Raise Event
|
||||
|
||||
Workflows are especially useful when they can wait for and be driven by external events. For example, a workflow could subscribe to events from a pub/sub topic, as shown in the [phone verification sample]({{< ref "workflow-scenarios.md#example-2-phone-verification" >}}).
|
||||
Workflows are especially useful when they can wait for and be driven by external events. For example, a workflow could subscribe to events from a pub/sub topic, as shown in the [phone verification sample]().
|
||||
|
||||
### HTTP / gRPC request
|
||||
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Configuration store component specs"
|
||||
linkTitle: "Configuration stores"
|
||||
weight: 4500
|
||||
weight: 5000
|
||||
description: The supported configuration stores that interface with Dapr
|
||||
aliases:
|
||||
- "/operations/components/setup-configuration-store/supported-configuration-stores/"
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Lock component specs"
|
||||
linkTitle: "Locks"
|
||||
weight: 4500
|
||||
weight: 6000
|
||||
description: The supported locks that interface with Dapr
|
||||
no_list: true
|
||||
---
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Middleware component specs"
|
||||
linkTitle: "Middleware"
|
||||
weight: 6000
|
||||
weight: 9000
|
||||
description: List of all the supported middleware components that can be injected in Dapr's processing pipeline.
|
||||
no_list: true
|
||||
aliases:
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
type: docs
|
||||
title: "Name resolution provider component specs"
|
||||
linkTitle: "Name resolution"
|
||||
weight: 5000
|
||||
weight: 8000
|
||||
description: The supported name resolution providers that interface with Dapr service invocation
|
||||
no_list: true
|
||||
---
|
||||
|
|
|
@ -0,0 +1,12 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Workflows component specs"
|
||||
linkTitle: "Workflows"
|
||||
weight: 7000
|
||||
description: The supported workflows that interface with Dapr
|
||||
no_list: true
|
||||
---
|
||||
|
||||
{{< partial "components/description.html" >}}
|
||||
|
||||
{{< partial "components/workflows.html" >}}
|
|
@ -0,0 +1,52 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Azure Logic Apps"
|
||||
linkTitle: "Azure Logic Apps"
|
||||
description: Detailed information on the Azure Logic Apps workflow component
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up the Azure Logic Apps workflow, create a component of type `todo`. See [this guide](todo) on how to create a workflow.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: todo
|
||||
version: v1
|
||||
metadata:
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|-------|:--------:|---------|---------|
|
||||
| | | | |
|
||||
| | | | |
|
||||
| | | | |
|
||||
|
||||
## Setup Azure Logic Apps
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
todo
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
todo
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Workflow building block]({{< ref workflow-overview >}})
|
|
@ -0,0 +1,52 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Built-in workflow engine"
|
||||
linkTitle: "Built-in workflow engine"
|
||||
description: Detailed information on the built-in workflow engine
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up the built-in workflow engine, create a component of type `todo`. See [this guide](todo) on how to create a workflow.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: todo
|
||||
version: v1
|
||||
metadata:
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|-------|:--------:|---------|---------|
|
||||
| | | | |
|
||||
| | | | |
|
||||
| | | | |
|
||||
|
||||
## Setup the built-in workflow engine
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
todo
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
todo
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Workflow building block]({{< ref workflow-overview >}})
|
|
@ -0,0 +1,52 @@
|
|||
---
|
||||
type: docs
|
||||
title: "Temporal.io"
|
||||
linkTitle: "Temporal.io"
|
||||
description: Detailed information on the Temporal.io workflow component
|
||||
---
|
||||
|
||||
## Component format
|
||||
|
||||
To set up the Temporal.io workflow, create a component of type `todo`. See [this guide](todo) on how to create a workflow.
|
||||
|
||||
```yaml
|
||||
apiVersion: dapr.io/v1alpha1
|
||||
kind: Component
|
||||
metadata:
|
||||
name: <NAME>
|
||||
spec:
|
||||
type: todo
|
||||
version: v1
|
||||
metadata:
|
||||
```
|
||||
|
||||
## Spec metadata fields
|
||||
|
||||
| Field | Required | Details | Example |
|
||||
|-------|:--------:|---------|---------|
|
||||
| | | | |
|
||||
| | | | |
|
||||
| | | | |
|
||||
|
||||
## Setup Temporal.io
|
||||
|
||||
{{< tabs "Self-Hosted" "Kubernetes" >}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
todo
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{% codetab %}}
|
||||
|
||||
todo
|
||||
|
||||
{{% /codetab %}}
|
||||
|
||||
{{< /tabs >}}
|
||||
|
||||
|
||||
## Related links
|
||||
- [Basic schema for a Dapr component]({{< ref component-schema >}})
|
||||
- [Workflow building block]({{< ref workflow-overview >}})
|
|
@ -0,0 +1,5 @@
|
|||
- component: Azure Logic Apps
|
||||
link: azure-logic-apps
|
||||
state: Alpha
|
||||
version: v1
|
||||
since: "1.10"
|
|
@ -0,0 +1,10 @@
|
|||
- component: Temporal.io
|
||||
link: temporal-io
|
||||
state: Alpha
|
||||
version: v1
|
||||
since: "1.10"
|
||||
- component: Built-in (DTFx-go) engine
|
||||
link: dtfx-go
|
||||
state: Alpha
|
||||
version: v1
|
||||
since: "1.10"
|
|
@ -0,0 +1,29 @@
|
|||
{{- $groups := dict
|
||||
" Generic" $.Site.Data.components.workflows.generic
|
||||
"Microsoft Azure" $.Site.Data.components.workflows.azure
|
||||
|
||||
}}
|
||||
|
||||
{{ range $group, $components := $groups }}
|
||||
<h3>{{ $group }}</h3>
|
||||
<table width="100%">
|
||||
<tr>
|
||||
<th>Component</th>
|
||||
<th>Status</th>
|
||||
<th>Component version</th>
|
||||
<th>Since runtime version</th>
|
||||
</tr>
|
||||
{{ range sort $components "component" }}
|
||||
<tr>
|
||||
<td><a href="/reference/components-reference/supported-workflows/{{ .link }}/">{{ .component }}</a></td>
|
||||
<td>{{ .state }}</td>
|
||||
<td>{{ .version }}</td>
|
||||
<td>{{ .since }}</td>
|
||||
</tr>
|
||||
{{ end }}
|
||||
</table>
|
||||
{{ end }}
|
||||
|
||||
{{ partial "components/componenttoc.html" . }}
|