update overview, add to how-to

Signed-off-by: Hannah Hunter <hannahhunter@microsoft.com>
Hannah Hunter 2022-10-27 13:23:53 -05:00
parent 9437a9a4e5
commit 9212dda747
2 changed files with 90 additions and 18 deletions


@@ -18,25 +18,70 @@ If applicable, link to the related quickstart in a shortcode note or alert with
If you haven't already, [try out the <topic> quickstart](link) for a quick walk-through on how to use <topic>.
-->
## Start Workflow API
### HTTP / gRPC
Developers can start workflow instances by issuing an HTTP (or gRPC) API call to the Dapr sidecar:
```bash
POST http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}/start
```
Workflows are assumed to have a type that is identified by the `{workflowType}` parameter. Each workflow instance must also be created with a unique `{instanceId}` value. The payload of the request is the input of the workflow. If a workflow instance with this ID already exists, this call fails with an HTTP 409 Conflict.
To support the asynchronous HTTP polling pattern used by HTTP clients, this API returns an HTTP 202 Accepted response with a `Location` header containing a URL that can be used to get the status of the workflow (see further below). When the workflow completes, this endpoint returns an HTTP 200 response. If it fails, the endpoint can return a 4XX or 5XX HTTP error response code. Some of these details may need to be configurable, since there is no universal protocol for asynchronous API handling.
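The start call and the polling pattern can be sketched as a small client, shown below. This is a hypothetical sketch, not an official SDK: the sidecar port, workflow type, instance ID, and payload shape are illustrative assumptions, and error handling is minimal.

```python
# Hypothetical client sketch for starting a workflow instance via the
# Dapr sidecar HTTP API. Port 3500 and all names are illustrative.
import json
import urllib.request

DAPR_WORKFLOWS = "http://localhost:3500/v1.0/workflows"

def start_url(workflow_type: str, instance_id: str) -> str:
    # POST target: .../{workflowType}/{instanceId}/start
    return f"{DAPR_WORKFLOWS}/{workflow_type}/{instance_id}/start"

def start_workflow(workflow_type: str, instance_id: str, payload: dict):
    req = urllib.request.Request(
        start_url(workflow_type, instance_id),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # A 202 Accepted response carries a Location header that can be polled
    # for status; a 409 Conflict means the instance ID already exists.
    with urllib.request.urlopen(req) as resp:
        return resp.status, resp.headers.get("Location")
```

A caller would poll the returned `Location` URL until it stops receiving 202 responses.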
### Input bindings
For certain types of automation scenarios, it can be useful to trigger new instances of workflows directly from Dapr input bindings. For example, it may be useful to trigger a workflow in response to a tweet from a particular user account using the Twitter input binding. Another example is starting a new workflow in response to a Kubernetes event, like a deployment creation event.
The instance ID and input payload for the workflow depend on the configuration of the input binding. For example, a user may want to use a tweet's unique ID or the name of the Kubernetes deployment as the instance ID.
### Pub/Sub
Workflows can also be started directly from pub/sub events, similar to the proposal for Actor pub/sub. Configuration on the pub/sub topic can be used to identify an appropriate instance ID and input payload to use for initializing the workflow. In the simplest case, the source + ID of the cloud event message can be used as the workflow's instance ID.
## Terminate workflow API
### HTTP / gRPC
Workflow instances can also be terminated using an explicit API call.
```bash
POST http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}/terminate
```
Workflow termination is primarily an operation that a service operator takes if a particular business process needs to be cancelled, or if a problem with the workflow requires it to be stopped to mitigate impact to other services.
If a payload is included in the POST request, it will be saved as the output of the workflow instance.
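A minimal sketch of the terminate call follows; the helper names, port, and payload are assumptions, not part of an official SDK.

```python
# Illustrative sketch of terminating a workflow instance. The route
# follows the endpoint above; everything else is assumed.
import json
import urllib.request

def terminate_url(workflow_type: str, instance_id: str) -> str:
    return f"http://localhost:3500/v1.0/workflows/{workflow_type}/{instance_id}/terminate"

def terminate_workflow(workflow_type: str, instance_id: str, output=None) -> int:
    # If a payload is included in the POST request, it is saved as the
    # output of the workflow instance.
    data = json.dumps(output).encode("utf-8") if output is not None else None
    req = urllib.request.Request(
        terminate_url(workflow_type, instance_id), data=data, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```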
## Raise Event API
Workflows are especially useful when they can wait for and be driven by external events. For example, a workflow could subscribe to events from a pub/sub topic as shown in the Phone Verification sample. However, this capability shouldn't be limited to pub/sub events.
### HTTP / gRPC
An API should exist for publishing events directly to a workflow instance:
```bash
POST http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}/raiseEvent
```
The result of the "raise event" API is an HTTP 202 Accepted, indicating that the event was received but possibly not yet processed. A workflow can consume an external event using the `waitForExternalEvent` SDK method.
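The raise-event call can be sketched in the same style; the event name and payload shape here are illustrative assumptions.

```python
# Hypothetical sketch of delivering an external event to a running
# workflow instance. Port and names are assumptions.
import json
import urllib.request

def raise_event_url(workflow_type: str, instance_id: str) -> str:
    return f"http://localhost:3500/v1.0/workflows/{workflow_type}/{instance_id}/raiseEvent"

def raise_event(workflow_type: str, instance_id: str, event: dict) -> int:
    req = urllib.request.Request(
        raise_event_url(workflow_type, instance_id),
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Expect 202 Accepted: the event was received, possibly not yet processed.
    with urllib.request.urlopen(req) as resp:
        return resp.status
```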
## Get workflow metadata API
### HTTP / gRPC
Users can fetch the metadata of a workflow instance using an explicit API call.
```bash
GET http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}
```
The result of this call is workflow instance metadata, such as its start time, runtime status, completion time (if completed), and custom or runtime-specific status. If supported by the target runtime, workflow inputs and outputs can also be fetched using the query API.
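A status fetch can be sketched as follows; the field names in the returned metadata depend on the target runtime, and all identifiers here are assumptions.

```python
# Illustrative sketch of fetching workflow instance metadata
# (start time, runtime status, completion time, custom status).
import json
import urllib.request

def metadata_url(workflow_type: str, instance_id: str) -> str:
    return f"http://localhost:3500/v1.0/workflows/{workflow_type}/{instance_id}"

def get_workflow_metadata(workflow_type: str, instance_id: str) -> dict:
    with urllib.request.urlopen(metadata_url(workflow_type, instance_id)) as resp:
        return json.load(resp)
```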
## Purge workflow metadata API
Users can delete all state associated with a workflow using the following API:
```bash
DELETE http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}
```
When using the embedded workflow component, this will delete all state stored by the workflow's underlying actor(s).
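The purge call is a plain DELETE to the same instance route; a minimal sketch, with assumed port and identifiers:

```python
# Illustrative sketch of purging all state for a workflow instance.
import urllib.request

def purge_url(workflow_type: str, instance_id: str) -> str:
    return f"http://localhost:3500/v1.0/workflows/{workflow_type}/{instance_id}"

def purge_workflow(workflow_type: str, instance_id: str) -> int:
    req = urllib.request.Request(
        purge_url(workflow_type, instance_id), method="DELETE"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```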
## Next steps


@@ -55,13 +55,40 @@ The workflow API brings several features to your application.
<!-- todo -->
Similar to the built-in support for actors, Dapr has implemented a built-in runtime for workflows. Unlike actors, the workflow runtime component can be swapped out with an alternate implementation. If you want to work with other workflow engines (such as externally hosted workflow services like Azure Logic Apps, AWS Step Functions, or Temporal.io), you can use alternate community-contributed workflow components.
In an effort to enhance the developer experience, the Dapr sidecar contains a lightweight, portable, embedded workflow engine (DTFx-go) that leverages and integrates with existing Dapr components, including actors and state storage, in its underlying implementation. The engine's portability enables you to execute workflows that run:
- Inside DTFx-go locally
- In production with minimal overhead
The new engine will be written in Go and inspired by the existing Durable Task Framework (DTFx) engine. Well call this new version of the framework DTFx-go to distinguish it from the .NET implementation (which is not part of this proposal) and it will exist as an open-source project with a permissive, e.g., Apache 2.0, license so that it remains compatible as a dependency for CNCF projects. Note that its important to ensure this engine remains lightweight so as not to noticeably increase the size of the Dapr sidecar.
#### DTFx-go workflow engine
The workflow engine is written in Go and inspired by the existing Durable Task Framework (DTFx) engine. DTFx-go exists as an open-source project with a permissive license (such as Apache 2.0), maintaining compatibility as a dependency for CNCF projects.
DTFx-go is not exposed to the application layer. Rather, the Dapr sidecar:
- Exposes DTFx-go functionality over a gRPC stream
- Sends and receives workflow commands over gRPC to and from a connected app's workflow logic
- Executes commands on behalf of the workflow (service invocation, invoking bindings, etc.)
Meanwhile, app containers:
- Execute and/or host any app-specific workflow logic, or
- Load any declarative workflow documents.
Other concerns such as activation, scale-out, and state persistence are handled by internally managed actors.
#### Executing, scheduling, and resilience
#### Storage of state and durability
### Workflows as code
### Declarative workflows support
#### CNCF serverless workflows
#### Hosting serverless workflows
## Try out the workflow API