mirror of https://github.com/dapr/docs.git
Merge branch 'v1.16' into fix-net-command
commit af3a9f2044
@ -33,50 +33,66 @@ The workflow app executes the appropriate workflow code and then sends a gRPC re
<img src="/images/workflow-overview/workflow-engine-protocol.png" width=500 alt="Dapr Workflow Engine Protocol" />

All interactions happen over a single gRPC channel and are initiated by the application, which means the application doesn't need to open any inbound ports.
The details of these interactions are handled internally by the language-specific Dapr Workflow authoring SDK.

### Differences between workflow and application actor interactions

If you're familiar with Dapr actors, you may notice a few differences in how sidecar interactions work for workflows compared to application-defined actors.

| Actors | Workflows |
| ------ | --------- |
| Actors created by the application can interact with the sidecar using either HTTP or gRPC. | Workflows only use gRPC. Due to the workflow gRPC protocol's complexity, an SDK is _required_ when implementing workflows. |
| Actor operations are pushed to application code from the sidecar. This requires the application to listen on a particular _app port_. | For workflows, operations are _pulled_ from the sidecar by the application using a streaming protocol. The application doesn't need to listen on any ports to run workflows. |
| Actors explicitly register themselves with the sidecar. | Workflows do not register themselves with the sidecar. The embedded engine doesn't keep track of workflow types. This responsibility is instead delegated to the workflow application and its SDK. |

## Workflow distributed tracing

The [`durabletask-go`](https://github.com/dapr/durabletask-go) core used by the workflow engine writes distributed traces using Open Telemetry SDKs.
These traces are captured automatically by the Dapr sidecar and exported to the configured Open Telemetry provider, such as Zipkin.

Each workflow instance managed by the engine is represented as one or more spans.
There is a single parent span representing the full workflow execution and child spans for the various tasks, including spans for activity task execution and durable timers.

> Workflow activity code currently **does not** have access to the trace context.

## Workflow actors

When the workflow client connects to the sidecar, two types of actors are registered in support of the workflow engine:

- `dapr.internal.{namespace}.{appID}.workflow`
- `dapr.internal.{namespace}.{appID}.activity`

The `{namespace}` value is the Dapr namespace and defaults to `default` if no namespace is configured.
The `{appID}` value is the app's ID.
For example, if you have a workflow app named "wfapp", then the type of the workflow actor would be `dapr.internal.default.wfapp.workflow` and the type of the activity actor would be `dapr.internal.default.wfapp.activity`.
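If you want to confirm which internal actor types have been registered, you can query the sidecar's metadata API. The following is a minimal sketch in Go; it assumes the default Dapr HTTP port of 3500 and an app ID of `wfapp`, and the exact shape of the metadata response (where registered actor types are listed) varies by Dapr version.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Query the Dapr sidecar's metadata endpoint (default HTTP port 3500 assumed).
	resp, err := http.Get("http://localhost:3500/v1.0/metadata")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Once the app has registered a workflow, the response should list actor
	// types such as dapr.internal.default.wfapp.workflow and
	// dapr.internal.default.wfapp.activity (field names vary by Dapr version).
	fmt.Println(string(body))
}
```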
The following diagram demonstrates how workflow actors operate in a Kubernetes scenario:

<img src="/images/workflow-overview/workflow-execution.png" alt="Diagram demonstrating internally registered actors across a cluster" />

Just like user-defined actors, workflow actors are distributed across the cluster by the hashing lookup table provided by the actor placement service.
They also maintain their own state and make use of reminders.
However, unlike actors that live in application code, these workflow actors are embedded into the Dapr sidecar.
Application code is completely unaware that these actors exist.

{{% alert title="Note" color="primary" %}}
The workflow actor types are only registered after an app has registered a workflow using a Dapr Workflow SDK.
If an app never registers a workflow, then the internal workflow actors are never registered.
{{% /alert %}}

### Workflow actors

There are 2 different types of actors used with workflows: workflow actors and activity actors.
Workflow actors are responsible for managing the state and placement of all workflows running in the app.
A new instance of the workflow actor is activated for every workflow instance that gets scheduled.
The ID of the workflow actor is the ID of the workflow.
This workflow actor stores the state of the workflow as it progresses, and determines the node on which the workflow code executes via the actor lookup table.

As workflows are based on actors, all workflow and activity work is randomly distributed across all replicas of the application implementing workflows.
There is no locality or relationship between where a workflow is started and where each work item is executed.

Each workflow actor saves its state using the following keys in the configured actor state store:

| Key | Description |
| --- | ----------- |
@ -86,7 +102,9 @@ Each workflow actor saves its state using the following keys in the configured s
| `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |

{{% alert title="Warning" color="warning" %}}
Workflow actor state remains in the state store even after a workflow has completed.
Creating a large number of workflows could result in unbounded storage usage.
To address this, either purge workflows using their ID or directly delete entries in the workflow DB store.
{{% /alert %}}

The following diagram illustrates the typical lifecycle of a workflow actor.
@ -103,13 +121,13 @@ To summarize:
### Activity actors

Activity actors are responsible for managing the state and placement of all workflow activity invocations.
A new instance of the activity actor is activated for every activity task that gets scheduled by a workflow.
The ID of the activity actor is the ID of the workflow combined with a sequence number (sequence numbers start with 0), as well as the "generation", which is incremented each time the workflow is rerun using `continue as new`.
For example, if a workflow has an ID of `876bf371`, the ID of the third activity it schedules will be `876bf371::2::1`, where `2` is the sequence number and `1` is the generation.
If the activity is scheduled again after a `continue as new`, the ID will be `876bf371::2::2`.

No state is stored by activity actors; instead, all resulting data is sent back to the parent workflow actor.

The following diagram illustrates the typical lifecycle of an activity actor.
@ -118,39 +136,49 @@ The following diagram illustrates the typical lifecycle of an activity actor.
Activity actors are short-lived:

1. Activity actors are activated when a workflow actor schedules an activity task.
1. Activity actors then immediately call into the workflow application to invoke the associated activity code.
1. Once the activity code has finished running and has returned its result, the activity actor sends a message to the parent workflow actor with the execution results.
1. Once the results are sent, the workflow is triggered to move forward to its next step.
### Reminder usage and execution guarantees

Dapr Workflow ensures workflow fault-tolerance by using [actor reminders]({{% ref "../actors/actors-timers-reminders.md#actor-reminders" %}}) to recover from transient system failures.
Prior to invoking application workflow code, the workflow or activity actor will create a new reminder.
These reminders are "one-shot", meaning that they expire after they are successfully triggered.
If the application code executes without interruption, the reminder is triggered and then expires.
However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder reactivates the corresponding actor and the execution is retried indefinitely.

<img src="/images/workflow-overview/workflow-actor-reminder-flow.png" width=600 alt="Diagram showing the process of invoking workflow actors"/>

{{% alert title="Important" color="warning" %}}
Too many active reminders in a cluster may result in performance issues. If your application is already using actors and reminders heavily, be mindful of the additional load that Dapr Workflows may add to your system.
{{% /alert %}}

### State store usage

Dapr Workflows use actors internally to drive the execution of workflows.
Like any actors, these workflow actors store their state in the configured actor state store.
Any state store that supports actors implicitly supports Dapr Workflow.

As discussed in the [workflow actors]({{% ref "workflow-architecture.md#workflow-actors" %}}) section, workflows save their state incrementally by appending to a history log.
The history log for a workflow is distributed across multiple state store keys so that each "checkpoint" only needs to append the newest entries.

The size of each checkpoint is determined by the number of concurrent actions scheduled by the workflow before it goes into an idle state.
[Sequential workflows]({{% ref "workflow-overview.md#task-chaining" %}}) will therefore make smaller batch updates to the state store, while [fan-out/fan-in workflows]({{% ref "workflow-overview.md#fan-outfan-in" %}}) will require larger batches.
The size of the batch is also impacted by the size of inputs and outputs when workflows [invoke activities]({{% ref "workflow-features-concepts.md#workflow-activities" %}}) or [child workflows]({{% ref "workflow-features-concepts.md#child-workflows" %}}).

<img src="/images/workflow-overview/workflow-state-store-interactions.png" width=600 alt="Diagram of workflow actor state store interactions"/>

Different state store implementations may implicitly put restrictions on the types of workflows you can author.
For example, the Azure Cosmos DB state store limits item sizes to 2 MB of UTF-8 encoded JSON ([source](https://learn.microsoft.com/azure/cosmos-db/concepts-limits#per-item-limits)).
The input or output payload of an activity or child workflow is stored as a single record in the state store, so an item limit of 2 MB means that workflow and activity inputs and outputs can't exceed 2 MB of JSON-serialized data.

Similarly, if a state store imposes restrictions on the size of a batch transaction, that may limit the number of parallel actions that can be scheduled by a workflow.

Workflow state can be purged from a state store, including all its history.
Each Dapr SDK exposes APIs for purging all metadata related to specific workflow instances.

## Workflow scalability

Because Dapr Workflows are internally implemented using actors, Dapr Workflows have the same scalability characteristics as actors.
The placement service:

- Doesn't distinguish between workflow actors and actors you define in your application
- Will load balance workflows using the same algorithms that it uses for actors
@ -162,41 +190,49 @@ The expected scalability of a workflow is determined by the following factors:
- The scalability of the state store configured for actors
- The scalability of the actor placement service and the reminder subsystem

The implementation details of the workflow code in the target application also play a role in the scalability of individual workflow instances.
Each workflow instance executes on a single node at a time, but a workflow can schedule activities and child workflows which run on other nodes.

Workflows can also schedule these activities and child workflows to run in parallel, allowing a single workflow to potentially distribute compute tasks across all available nodes in the cluster.
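As an illustration of this fan-out, a workflow can schedule several activity tasks before awaiting any of them, letting the engine place each activity actor independently. The following is a minimal sketch using the Go authoring SDK; the function names and inputs are hypothetical, and the call/await pattern mirrors the SDK examples elsewhere in these docs.

```go
package example

import "github.com/dapr/go-sdk/workflow"

// FetchActivity is a stand-in activity used only for illustration.
func FetchActivity(ctx workflow.ActivityContext) (any, error) {
	var url string
	if err := ctx.GetInput(&url); err != nil {
		return nil, err
	}
	return "fetched " + url, nil
}

// FanOutWorkflow schedules two activities before awaiting either one, so the
// engine can place and run them in parallel, potentially on different nodes.
func FanOutWorkflow(ctx *workflow.WorkflowContext) (any, error) {
	task1 := ctx.CallActivity(FetchActivity, workflow.ActivityInput("https://example.com/1"))
	task2 := ctx.CallActivity(FetchActivity, workflow.ActivityInput("https://example.com/2"))

	var r1, r2 string
	if err := task1.Await(&r1); err != nil {
		return nil, err
	}
	if err := task2.Await(&r2); err != nil {
		return nil, err
	}
	return []string{r1, r2}, nil
}
```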
<img src="/images/workflow-overview/workflow-actor-scale-out.png" width=800 alt="Diagram of workflow and activity actors scaled out across multiple Dapr instances"/>

{{% alert title="Important" color="warning" %}}
By default, there are no global limits imposed on workflow and activity concurrency.
A runaway workflow could therefore potentially consume all resources in a cluster if it attempts to schedule too many tasks in parallel.
Use care when authoring Dapr Workflows that schedule large batches of work in parallel.
{{% /alert %}}

You can configure the maximum number of concurrent workflows and activities that can be executed at any one time with the following configuration.
These limits are imposed on a _per-sidecar_ basis, meaning that if you have 10 replicas of your workflow app, the effective limit is 10 times the configured value.
These limits do not distinguish between different workflow or activity definitions.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  workflow:
    maxConcurrentWorkflowInvocations: 100 # Default is infinite
    maxConcurrentActivityInvocations: 1000 # Default is infinite
```

{{% alert title="Important" color="warning" %}}
The Dapr Workflow engine requires that all instances of a workflow app register the exact same set of workflows and activities.
In other words, it's not possible to scale certain workflows or activities independently.
All workflows and activities within an app must be scaled together.
{{% /alert %}}

Workflows don't control the specifics of how load is distributed across the cluster.
For example, if a workflow schedules 10 activity tasks to run in parallel, all 10 tasks may run on as many as 10 different compute nodes or as few as a single compute node.
The actual scale behavior is determined by the actor placement service, which manages the distribution of the actors that represent each of the workflow's tasks.

## Workflow latency

In order to provide guarantees around durability and resiliency, Dapr Workflows frequently write to the state store and rely on reminders to drive execution.
Dapr Workflows therefore may not be appropriate for latency-sensitive workloads.
Expected sources of high latency include:

- Latency from the state store when persisting workflow state.
- Latency from the state store when rehydrating workflows with large histories.
@ -205,6 +241,18 @@ In order to provide guarantees around durability and resiliency, Dapr Workflows
See the [Reminder usage and execution guarantees section]({{% ref "workflow-architecture.md#reminder-usage-and-execution-guarantees" %}}) for more details on how the design of workflow actors may impact execution latency.

## Increasing scheduling throughput

By default, when a client schedules a workflow, the workflow engine waits for the workflow to be fully started before returning a response to the client.
Waiting for the workflow to start before returning can decrease the scheduling throughput of workflows.
When scheduling a workflow with a start time, the workflow engine does not wait for the workflow to start before returning a response to the client.
To increase scheduling throughput, consider adding a start time of "now" when scheduling a workflow.
An example of scheduling a workflow with a start time of "now" in the Go SDK is shown below:

```go
client.ScheduleNewWorkflow(ctx, "MyCoolWorkflow", workflow.WithStartTime(time.Now()))
```
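For a fuller picture, the following sketch expands the one-liner above. It assumes the Go SDK's `workflow` package and its `NewClient` constructor as used in the Dapr quickstarts, and the workflow name `MyCoolWorkflow` is hypothetical, so treat the setup details as illustrative rather than definitive.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/dapr/go-sdk/workflow"
)

func main() {
	// Create a workflow client backed by the Dapr sidecar (constructor assumed
	// from the Go SDK's workflow package, as used in the Dapr quickstarts).
	client, err := workflow.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	// Passing a start time of "now" lets the engine return as soon as the
	// workflow is scheduled instead of waiting for it to be fully started.
	id, err := client.ScheduleNewWorkflow(context.Background(), "MyCoolWorkflow",
		workflow.WithStartTime(time.Now()))
	if err != nil {
		log.Fatal(err)
	}
	log.Println("scheduled workflow instance:", id)
}
```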
## Next steps

{{< button text="Author workflows >>" page="howto-author-workflow.md" >}}
@ -6,7 +6,7 @@ weight: 2000
description: "Learn more about the Dapr Workflow features and concepts"
---

Now that you've learned about the [workflow building block]({{% ref workflow-overview.md %}}) at a high level, let's deep dive into the features and concepts included with the Dapr Workflow engine and SDKs. Dapr Workflow exposes several core features and concepts which are common across all supported languages.

{{% alert title="Note" color="primary" %}}
For more information on how workflow state is managed, see the [workflow architecture guide]({{% ref workflow-architecture.md %}}).
@ -14,7 +14,9 @@ For more information on how workflow state is managed, see the [workflow archite
## Workflows

Dapr Workflows are functions you write that define a series of tasks to be executed in a particular order.
The Dapr Workflow engine takes care of scheduling and execution of the tasks, including managing failures and retries.
If the app hosting your workflows is scaled out across multiple machines, the workflow engine load balances the execution of workflows and their tasks across them.

There are several different kinds of tasks that a workflow can schedule, including:
- [Activities]({{% ref "workflow-features-concepts.md#workflow-activities" %}}) for executing custom logic
@ -32,7 +34,7 @@ Only one workflow instance with a given ID can exist at any given time. However,
Dapr Workflows maintain their execution state by using a technique known as [event sourcing](https://learn.microsoft.com/azure/architecture/patterns/event-sourcing). Instead of storing the current state of a workflow as a snapshot, the workflow engine manages an append-only log of history events that describe the various steps that a workflow has taken. When using the workflow SDK, these history events are stored automatically whenever the workflow "awaits" the result of a scheduled task.

When a workflow "awaits" a scheduled task, it unloads itself from memory until the task completes. Once the task completes, the workflow engine schedules the workflow function to run again. This second workflow function execution is known as a _replay_.

When a workflow function is replayed, it runs again from the beginning. However, when it encounters a task that already completed, instead of scheduling that task again, the workflow engine:
@ -57,16 +59,16 @@ As discussed in the [workflow replay]({{% ref "#workflow-replay" %}}) section, w
You can use the following two techniques to write workflows that may need to schedule extreme numbers of tasks:

1. **Use the _continue-as-new_ API**:
    Each workflow SDK exposes a _continue-as-new_ API that workflows can invoke to restart themselves with a new input and history. The _continue-as-new_ API is especially useful for implementing "eternal workflows", like monitoring agents, which would otherwise be implemented using a `while (true)`-like construct. Using _continue-as-new_ is a great way to keep the workflow history size small.

    > The _continue-as-new_ API truncates the existing history, replacing it with a new history.

1. **Use child workflows**:
    Each workflow SDK exposes an API for creating child workflows. A child workflow behaves like any other workflow, except that it's scheduled by a parent workflow. Child workflows have:

    - Their own history
    - The benefit of distributing workflow function execution across multiple machines.

    If a workflow needs to schedule thousands of tasks or more, it's recommended that those tasks be distributed across child workflows so that no single workflow's history size grows too large.

### Updating workflow code
|
Learn more about [external system interaction.]({{% ref "workflow-patterns.md#external-system-interaction" %}})

## Purging

Workflow state can be purged from a state store, purging all its history and removing all metadata related to a specific workflow instance. The purge capability is used for workflows that have run to a `COMPLETED`, `FAILED`, or `TERMINATED` state.
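As a sketch of what purging can look like from application code, the following Go snippet calls the workflow HTTP API directly. It assumes the default Dapr HTTP port of 3500, the built-in `dapr` workflow component name in the route, and a hypothetical instance ID; check the workflow API reference for the exact route supported by your Dapr version.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Purge all state and history for a completed workflow instance.
	// The instance ID is illustrative and the route shape is assumed from the
	// workflow API reference; it may differ across Dapr versions.
	instanceID := "order-processing-12345"
	url := fmt.Sprintf("http://localhost:3500/v1.0/workflows/dapr/%s/purge", instanceID)

	resp, err := http.Post(url, "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("purge status:", resp.Status)
}
```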
|
## Limitations

### Workflow determinism and code restraints

To take advantage of the workflow replay technique, your workflow code needs to be deterministic. For your workflow code to be deterministic, you may need to work around some limitations.

#### Workflow functions must call deterministic APIs.

APIs that generate random numbers, random UUIDs, or the current date are _non-deterministic_. To work around this limitation, you can:

- Use these APIs in activity functions, or
- (Preferred) Use built-in equivalent APIs offered by the SDK. For example, each authoring SDK provides an API for retrieving the current time in a deterministic manner.
@ -269,9 +259,9 @@ const currentTime = ctx.CurrentUTCDateTime()
{{< /tabpane >}}

#### Workflow functions must only interact _indirectly_ with external state.

External data includes any data that isn't stored in the workflow state. Workflows must not interact with global variables, environment variables, the file system, or make network calls.

Instead, workflows should interact with external state _indirectly_ using workflow inputs, activity tasks, and through external event handling.

For example, instead of this:
@ -377,11 +367,11 @@ err := ctx.CallActivity(MakeHttpCallActivity, workflow.ActivityInput("https://ex
{{< /tabpane >}}

#### Workflow functions must execute only on the workflow dispatch thread.

The implementation of each language SDK requires that all workflow function operations operate on the same thread (goroutine, etc.) that the function was scheduled on. Workflow functions must never:

- Schedule background threads, or
- Use APIs that schedule a callback function to run on another thread.

Failure to follow this rule could result in undefined behavior. Any background processing should instead be delegated to activity tasks, which can be scheduled to run serially or concurrently.

For example, instead of this:
@ -478,19 +468,18 @@ task.Await(nil)
Make sure updates you make to the workflow code maintain its determinism. A couple of examples of code updates that can break workflow determinism:

- **Changing workflow function signatures**:
  Changing the name, input, or output of a workflow or activity function is considered a breaking change and must be avoided.

- **Changing the number or order of workflow tasks**:
  Changing the number or order of workflow tasks causes a workflow instance's history to no longer match the code and may result in runtime errors or other unexpected behavior.

To work around these constraints:

- Instead of updating existing workflow code, leave the existing workflow code as-is and create new workflow definitions that include the updates.
- Upstream code that creates workflows should only be updated to create instances of the new workflows.
- Leave the old code around to ensure that existing workflow instances can continue to run without interruption. If and when it's known that all instances of the old workflow logic have completed, then the old workflow code can be safely deleted.
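As a sketch of this versioning approach, the following Go snippet registers both the original and the updated workflow definitions side by side while new instances are scheduled only against the updated one. The worker and client constructors (`workflow.NewWorker`, `workflow.NewClient`) follow the Go SDK authoring examples, and the `ProcessOrderV1`/`ProcessOrderV2` names are hypothetical.

```go
package main

import (
	"context"
	"log"

	"github.com/dapr/go-sdk/workflow"
)

// Hypothetical original and updated workflow definitions. Leaving V1 registered
// lets in-flight instances finish replaying against the code they started with.
func ProcessOrderV1(ctx *workflow.WorkflowContext) (any, error) { return "v1 done", nil }
func ProcessOrderV2(ctx *workflow.WorkflowContext) (any, error) { return "v2 done", nil }

func main() {
	w, err := workflow.NewWorker()
	if err != nil {
		log.Fatal(err)
	}
	// Register both the old and the new definitions side by side.
	if err := w.RegisterWorkflow(ProcessOrderV1); err != nil {
		log.Fatal(err)
	}
	if err := w.RegisterWorkflow(ProcessOrderV2); err != nil {
		log.Fatal(err)
	}
	if err := w.Start(); err != nil {
		log.Fatal(err)
	}

	// Upstream code that creates workflows now only schedules the new definition.
	client, err := workflow.NewClient()
	if err != nil {
		log.Fatal(err)
	}
	id, err := client.ScheduleNewWorkflow(context.Background(), "ProcessOrderV2")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("started workflow instance:", id)
}
```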
## Next steps

{{< button text="Workflow patterns >>" page="workflow-patterns.md" >}}
@ -6,14 +6,15 @@ weight: 1000
description: "Overview of Dapr Workflow"
---

Dapr workflow makes it easy for developers to write business logic and integrations in a reliable way.
Since Dapr workflows are stateful, they support long-running and fault-tolerant applications, ideal for orchestrating microservices.
Dapr workflow works seamlessly with other Dapr building blocks, such as service invocation, pub/sub, state management, and bindings.

The durable, resilient Dapr Workflow capability:

- Offers a built-in workflow runtime for driving Dapr Workflow execution.
- Provides SDKs for authoring workflows in code, using any language.
- Provides HTTP and gRPC APIs for managing workflows (start, query, pause/resume, raise event, terminate, purge).
- Integrates with any other workflow runtime via workflow components.

<img src="/images/workflow-overview/workflow-overview.png" width=800 alt="Diagram showing basics of Dapr Workflow">
@ -28,16 +29,20 @@ Some example scenarios that Dapr Workflow can perform are:
### Workflows and activities

With Dapr Workflow, you can write activities and then orchestrate those activities in a workflow.
Workflow activities are:

- The basic unit of work in a workflow
- Used for calling other (Dapr) services, interacting with state stores, and pub/sub brokers.
- Used for calling external third-party services.

[Learn more about workflow activities.]({{% ref "workflow-features-concepts.md#workflow-activities" %}})
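To make the relationship concrete, here is a minimal sketch of a workflow orchestrating a single activity, written against the Dapr Go workflow authoring SDK. The function names (`OrderWorkflow`, `NotifyActivity`) and input are hypothetical; the call/await pattern mirrors the SDK examples elsewhere in these docs.

```go
package example

import "github.com/dapr/go-sdk/workflow"

// NotifyActivity is the basic unit of work: it could call other services,
// state stores, pub/sub brokers, or external third-party APIs.
func NotifyActivity(ctx workflow.ActivityContext) (any, error) {
	var message string
	if err := ctx.GetInput(&message); err != nil {
		return nil, err
	}
	// Hypothetical side effect, such as sending a notification.
	return "notified: " + message, nil
}

// OrderWorkflow orchestrates the activity and awaits its result.
func OrderWorkflow(ctx *workflow.WorkflowContext) (any, error) {
	var result string
	if err := ctx.CallActivity(NotifyActivity, workflow.ActivityInput("order received")).Await(&result); err != nil {
		return nil, err
	}
	return result, nil
}
```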
### Child workflows

In addition to activities, you can write workflows to schedule other workflows as child workflows.
A child workflow has its own instance ID, history, and status that is independent of the parent workflow that started it, except for the fact that terminating the parent workflow terminates all of the child workflows created by it.
Child workflows also support automatic retry policies.

[Learn more about child workflows.]({{% ref "workflow-features-concepts.md#child-workflows" %}})
@ -49,7 +54,8 @@ Same as Dapr actors, you can schedule reminder-like durable delays for any time
### Workflow HTTP calls to manage a workflow

When you create an application with workflow code and run it with Dapr, you can call specific workflows that reside in the application.
Each individual workflow can be:

- Started or terminated through a POST request
- Triggered to deliver a named event through a POST request
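For illustration, the following Go sketch drives these operations against the workflow HTTP API exposed by the sidecar. It assumes the default Dapr HTTP port of 3500, the built-in `dapr` workflow component name in the route, and a hypothetical workflow named `OrderWorkflow`; check the workflow API reference for the exact routes in your Dapr version.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	base := "http://localhost:3500/v1.0/workflows/dapr" // default port assumed

	// Start a new instance of a hypothetical workflow with an explicit instance ID.
	start := base + "/OrderWorkflow/start?instanceID=order-1"
	if _, err := http.Post(start, "application/json", strings.NewReader(`{"orderId": 1}`)); err != nil {
		panic(err)
	}

	// Deliver a named event to the running instance.
	raise := base + "/order-1/raiseEvent/payment-received"
	if _, err := http.Post(raise, "application/json", strings.NewReader(`{}`)); err != nil {
		panic(err)
	}

	// Terminate the instance.
	if _, err := http.Post(base+"/order-1/terminate", "application/json", nil); err != nil {
		panic(err)
	}
	fmt.Println("workflow managed over HTTP")
}
```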
@ -61,13 +67,15 @@ When you create an application with workflow code and run it with Dapr, you can
## Workflow patterns

Dapr Workflow simplifies complex, stateful coordination requirements in microservice architectures.
The following sections describe several application patterns that can benefit from Dapr Workflow.

Learn more about [different types of workflow patterns]({{% ref workflow-patterns.md %}}).

## Workflow SDKs

The Dapr Workflow _authoring SDKs_ are language-specific SDKs that contain types and functions to implement workflow logic.
The workflow logic lives in your application and is orchestrated by the Dapr Workflow engine running in the Dapr sidecar via a gRPC stream.

### Supported SDKs
@ -86,14 +86,14 @@ To perform a create blob operation, invoke the Azure Blob Storage binding with a
##### Save text to a randomly generated UUID blob

{{< tabpane text=true >}}

{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism).
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -106,14 +106,14 @@ To perform a create blob operation, invoke the Azure Blob Storage binding with a
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"blobName\": \"my-test-file.txt\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "blobName": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -150,13 +150,13 @@ Then you can upload it as you would normally:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"blobName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "blobName": "my-test-file.jpg" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -199,13 +199,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"blobName\": \"myblob\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "blobName": "myblob" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -247,13 +247,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -266,13 +266,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"only\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "only" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -285,13 +285,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"blobName\": \"myblob\", \"deleteSnapshots\": \"include\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "blobName": "myblob", "deleteSnapshots": "include" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -110,14 +110,14 @@ The metadata parameters are:
##### Save text to a randomly generated UUID file

{{< tabpane text=true >}}

{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism).
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -130,14 +130,14 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -155,13 +155,13 @@ Then you can upload it as you would normally:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"(YOUR_FILE_CONTENTS)\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "$(cat my-test-file.jpg)", "metadata": { "key": "my-test-file.jpg" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -202,13 +202,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -312,13 +312,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -73,14 +73,14 @@ To perform a create operation, invoke the Huawei OBS binding with a `POST` metho
##### Save text to a randomly generated UUID file

{{< tabpane text=true >}}

{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism).
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -93,14 +93,14 @@ To perform a create operation, invoke the Huawei OBS binding with a `POST` metho
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -135,14 +135,14 @@ To upload a binary file (for example, _.jpg_, _.zip_), invoke the Huawei OBS bin
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"upload\", \"data\": { \"sourceFile\": \".\\my-test-file.jpg\" }, \"metadata\": { \"key\": \"my-test-file.jpg\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "upload", "data": { "sourceFile": "./my-test-file.jpg" }, "metadata": { "key": "my-test-file.jpg" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -176,13 +176,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -220,13 +220,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -267,13 +267,13 @@ The data parameters are:
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"list\", \"data\": { \"maxResults\": 5, \"prefix\": \"dapr-\", \"marker\": \"obstest\", \"delimiter\": \"jpg\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "list", "data": { "maxResults": 5, "prefix": "dapr-", "marker": "obstest", "delimiter": "jpg" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -59,14 +59,14 @@ To perform a create file operation, invoke the Local Storage binding with a `POS
##### Save text to a randomly generated UUID file

{{< tabpane text=true >}}

{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism).
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -79,14 +79,14 @@ To perform a create file operation, invoke the Local Storage binding with a `POS
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"fileName\": \"my-test-file.txt\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "fileName": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -102,13 +102,13 @@ To upload a file, encode it as Base64. The binding should automatically detect t
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -145,13 +145,13 @@ To perform a get file operation, invoke the Local Storage binding with a `POST`
{{< tabpane text=true >}}

{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"myfile\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "fileName": "myfile" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
@ -189,13 +189,13 @@ If you only want to list the files beneath a particular directory below the `roo
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

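# Not part of the diff above -- a sketch of making the list response easier to read.
# Assumptions: a binding named "localstorage", Dapr's default HTTP port 3500, jq installed, and that the
# list operation returns JSON (pretty-printed here without relying on a specific schema).
curl -s -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
  http://localhost:3500/v1.0/bindings/localstorage | jq .
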
@ -225,13 +225,13 @@ To perform a delete file operation, invoke the Local Storage binding with a `POS
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -103,7 +103,7 @@ Read more about the importance and usage of these parameters in the [Azure OpenA
#### Examples

{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "data": { "deploymentId": "my-model", "prompt": "A dog is ", "maxTokens": 15 }, "operation": "completion" }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

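# Not part of the diff above -- a sketch that keeps the completion payload in a file to avoid shell-escaping issues.
# Assumptions: a binding named "azureopenai" and Dapr's default HTTP port 3500.
cat > completion.json <<'EOF'
{
  "data": { "deploymentId": "my-model", "prompt": "A dog is ", "maxTokens": 15 },
  "operation": "completion"
}
EOF
curl -H "Content-Type: application/json" -d @completion.json \
  http://localhost:3500/v1.0/bindings/azureopenai
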
@ -176,7 +176,7 @@ Each message is of the form:
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{
  "data": {

@ -190,14 +190,14 @@ Valid values for `presignTTL` are [Go duration strings](https://pkg.go.dev/maze.
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"presignTTL\": \"15m\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "presignTTL": "15m" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -222,14 +222,14 @@ The response body contains the following example JSON:
##### Save text to a randomly generated UUID file

{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
On Windows, use the cmd prompt (PowerShell has a different escaping mechanism).
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\" }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World" }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -242,14 +242,14 @@ The response body contains the following example JSON:
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"Hello World\", \"metadata\": { \"key\": \"my-test-file.txt\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "Hello World", "metadata": { "key": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -293,13 +293,13 @@ Then you can upload it as you would normally:
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "key": "my-test-file.jpg" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

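# Not part of the diff above -- a sketch of generating YOUR_BASE_64_CONTENT before making the call.
# Assumptions: a binding named "s3", Dapr's default HTTP port 3500, and GNU coreutils base64.
B64=$(base64 -w0 my-test-file.jpg)
curl -d "{ \"operation\": \"create\", \"data\": \"$B64\", \"metadata\": { \"key\": \"my-test-file.jpg\" } }" \
  http://localhost:3500/v1.0/bindings/s3
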
@ -313,13 +313,13 @@ To upload a file from a supplied path (relative or absolute), use the `filepath`
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"metadata\": { \"filePath\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "metadata": { "filePath": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -346,14 +346,14 @@ Valid values for `presignTTL` are [Go duration strings](https://pkg.go.dev/maze.
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"presign\", \"metadata\": { \"presignTTL\": \"15m\", \"key\": \"my-test-file.txt\" } }" \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

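# Not part of the diff above -- a sketch of consuming the presign response.
# Assumptions: a binding named "s3", Dapr's default HTTP port 3500, jq installed, and that the response
# exposes the URL in a field named "presignURL" (check the binding reference for the exact field name).
PRESIGN_URL=$(curl -s -d '{ "operation": "presign", "metadata": { "presignTTL": "15m", "key": "my-test-file.txt" } }' \
  http://localhost:3500/v1.0/bindings/s3 | jq -r '.presignURL')
curl -o my-test-file.txt "$PRESIGN_URL"
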
@ -393,13 +393,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "key": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -437,13 +437,13 @@ The metadata parameters are:
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"key\": \"my-test-file.txt\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "key": "my-test-file.txt" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -81,13 +81,13 @@ To perform a create file operation, invoke the SFTP binding with a `POST` method
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"create\", \"data\": \"YOUR_BASE_64_CONTENT\", \"metadata\": { \"fileName\": \"my-test-file.jpg\" } }" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "fileName": "my-test-file.jpg" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

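# Not part of the diff above -- a sketch of uploading into a subdirectory beneath the configured rootPath.
# Assumptions: a binding named "sftp", Dapr's default HTTP port 3500, GNU coreutils base64, and that
# fileName may carry a relative path (suggested by the directory listing example later in this diff).
curl -d "{ \"operation\": \"create\", \"data\": \"$(base64 -w0 report.pdf)\", \"metadata\": { \"fileName\": \"reports/2024/report.pdf\" } }" \
  http://localhost:3500/v1.0/bindings/sftp
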
@ -124,13 +124,13 @@ To perform a get file operation, invoke the SFTP binding with a `POST` method an
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"get\", \"metadata\": { \"fileName\": \"filename\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "get", "metadata": { "fileName": "filename" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -168,13 +168,13 @@ If you only want to list the files beneath a particular directory below the `roo
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"list\", \"metadata\": { \"fileName\": \"my/cool/directory\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "list", "metadata": { "fileName": "my/cool/directory" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -204,13 +204,13 @@ To perform a delete file operation, invoke the SFTP binding with a `POST` method
{{< tabpane text=true >}}

{{% tab %}}
{{% tab "Windows" %}}
```bash
curl -d "{ \"operation\": \"delete\", \"metadata\": { \"fileName\": \"myfile\" }}" http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```
{{% /tab %}}

{{% tab %}}
{{% tab "Linux" %}}
```bash
curl -d '{ "operation": "delete", "metadata": { "fileName": "myfile" }}' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>

@ -150,7 +150,7 @@
  features:
    input: false
    output: true
- component: Twilio
- component: Twilio SMS
  link: twilio
  state: Alpha
  version: v1

@ -158,7 +158,7 @@
  features:
    input: false
    output: true
- component: SendGrid
- component: Twilio SendGrid
  link: sendgrid
  state: Alpha
  version: v1