Merge branch 'v1.15' into upmerge_02-12

Hannah Hunter 2025-02-12 14:20:17 -05:00 committed by GitHub
commit 65ada85350
15 changed files with 128 additions and 98 deletions


@@ -22,7 +22,7 @@ Dapr provides the following building blocks:
|----------------|----------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
| [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides key/value-based state and query APIs with pluggable state stores for persistence.
| [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
| [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.


@@ -78,13 +78,6 @@ Pub/sub broker components are message brokers that can pass messages to/from ser
- [List of pub/sub brokers]({{< ref supported-pubsub >}})
- [Pub/sub broker implementations](https://github.com/dapr/components-contrib/tree/master/pubsub)
### Workflows
A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.
<!--- [List of supported workflows]()
- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->
### State stores
State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.


@@ -5,17 +5,61 @@ linkTitle: "Scheduler"
description: "Overview of the Dapr scheduler service"
---
The Dapr Scheduler service is used to schedule different types of jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}):
- Jobs created through the Jobs API
- Actor reminder jobs (used by the actor reminders)
- Actor reminder jobs created by the Workflow API (which uses actor reminders)
From Dapr v1.15, the Scheduler service is used by default to schedule actor reminders as well as actor reminders for the Workflow API.
There is no concept of a leader Scheduler instance. All Scheduler service replicas are considered peers. All receive jobs to be scheduled for execution and the jobs are allocated between the available Scheduler service replicas for load balancing of the trigger events.
The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded etcd database.
<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
## Actor Reminders
Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
When you deploy Dapr v1.15, any _existing_ actor reminders are automatically migrated from the Actor State Store to the Scheduler service as a one-time operation for each actor type. Each replica migrates only the reminders whose actor type and ID are associated with that host. This means that only when all replicas implementing an actor type are upgraded to v1.15 will all the reminders associated with that type be migrated. There will be _no_ loss of reminder triggers during the migration. However, you can prevent this migration and keep the existing actor reminders running using the Actor State Store by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
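For example, a minimal sketch of an application Configuration resource that opts an actor type out of Scheduler reminders; the configuration name is a placeholder:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myapp-config  # placeholder name for your application configuration
spec:
  features:
  - name: SchedulerReminders
    enabled: false
```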
To confirm that the migration was successful, check the Dapr sidecar logs for the following:
```sh
Running actor reminder migration from state store to scheduler
```
coupled with
```sh
Migrated X reminders from state store to scheduler successfully
```
or
```sh
Skipping migration, no missing scheduler reminders found
```
## Job Locality
### Default Job Behavior
By default, when the Scheduler service triggers jobs, they are sent back to a single replica for the same app ID that scheduled the job in a randomly load balanced manner. This provides basic load balancing across your application's replicas, which is suitable for most use cases where strict locality isn't required.
### Using Actor Reminders for Perfect Locality
For users who require perfect job locality (having jobs triggered on the exact same host that created them), actor reminders provide a solution. To enforce perfect locality for a job:
1. Create an actor type with a random UUID that is unique to the specific replica
2. Use this actor type to create an actor reminder
This approach ensures that the job will always be triggered on the same host that created it, rather than being randomly distributed among replicas.
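As a hedged sketch, such a reminder could be registered through the standard actor reminders HTTP endpoint; the actor type, actor ID, and reminder name below are hypothetical values chosen by the application:
```sh
# Register a reminder on an actor type whose name embeds a per-replica UUID
# (actor type "myjobactor-6f1e9c8a", actor ID, and reminder name are illustrative)
curl -X POST http://localhost:3500/v1.0/actors/myjobactor-6f1e9c8a/worker-1/reminders/run-task \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "30s", "period": "1m" }'
```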
## Job Triggering
### Job Failure Policy and Staging Queue
When the Scheduler service triggers a job and the trigger results in a client-side error, the job is retried by default with a 1s interval and a maximum of 3 retries.
For non-client side errors, for example, when a job cannot be sent to an available Dapr sidecar at trigger time, it is placed in a staging queue within the Scheduler service. Jobs remain in this queue until a suitable sidecar instance becomes available, at which point they are automatically sent to the appropriate Dapr sidecar instance.
## Self-hosted mode
@@ -23,10 +67,56 @@ The Scheduler service Docker container is started automatically as part of `dapr
## Kubernetes mode
The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. When running in Kubernetes mode, the Scheduler service is configured to run with exactly 3 replicas to ensure data integrity.
You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
When a Kubernetes namespace is deleted, all the jobs and actor reminders corresponding to that namespace are deleted.
## Back Up and Restore Scheduler Data
In production environments, it's recommended to perform periodic backups of the Scheduler's etcd data at an interval that aligns with your recovery point objectives.
### Port Forward for Backup Operations
To perform backup and restore operations, you'll need to access the embedded etcd instance. This requires port forwarding to expose the etcd ports (port 2379).
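On Kubernetes, for example, you can port forward to one Scheduler replica; the pod name below assumes a default installation in the `dapr-system` namespace:
```sh
# Forward local port 2379 to the etcd client port of one Scheduler replica
kubectl port-forward -n dapr-system pod/dapr-scheduler-server-0 2379:2379
```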
#### Docker Compose Example
Here's how to expose the etcd ports in a Docker Compose configuration for standalone mode:
```yaml
scheduler-1:
  image: "diagrid/dapr/scheduler:dev110-linux-arm64"
  command: ["./scheduler",
    "--etcd-data-dir", "/var/run/dapr/scheduler",
    "--replica-count", "3",
    "--id", "scheduler-1",
    "--initial-cluster", "scheduler-1=http://scheduler-1:2380,scheduler-0=http://scheduler-0:2380,scheduler-2=http://scheduler-2:2380",
    "--etcd-client-ports", "scheduler-0=2379,scheduler-1=2379,scheduler-2=2379",
    "--etcd-client-http-ports", "scheduler-0=2330,scheduler-1=2330,scheduler-2=2330",
    "--log-level=debug"
  ]
  ports:
    - 2379:2379
  volumes:
    - ./dapr_scheduler/1:/var/run/dapr/scheduler
  networks:
    - network
```
When running in HA mode, you only need to expose the ports for one scheduler instance to perform backup operations.
### Performing Backup and Restore
Once you have access to the etcd ports, you can follow the [official etcd backup and restore documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) to perform backup and restore operations. The process involves using standard etcd commands to create snapshots and restore from them.
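For instance, a minimal sketch using the standard etcd tooling, assuming the client port is reachable on `localhost:2379` and with illustrative file paths:
```sh
# Take a snapshot of the Scheduler's embedded etcd
etcdctl --endpoints=http://localhost:2379 snapshot save scheduler-backup.db

# Restore the snapshot into a fresh data directory (etcd v3.5+ ships etcdutl)
etcdutl snapshot restore scheduler-backup.db --data-dir ./restored-scheduler-data
```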
## Disabling the Scheduler service
If you are not using any features that require the Scheduler service (Jobs API, Actor Reminders, or Workflows), you can disable it by setting `global.scheduler.enabled=false`.
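For example, with a Helm-based installation (release name and namespace are illustrative):
```sh
helm upgrade dapr dapr/dapr \
  --namespace dapr-system \
  --set global.scheduler.enabled=false \
  --wait
```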
For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
## Related links
[Learn more about the Jobs API.]({{< ref jobs_api.md >}})


@@ -27,11 +27,11 @@ Creating a new actor follows a local call like `http://localhost:3500/v1.0/actor
The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently the .NET, Java, Go, and Python SDKs have actor frameworks.
## Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?
To make using Dapr more natural for different languages, it includes [language specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
## What frameworks does Dapr integrate with?
Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.
Dapr is integrated with the following frameworks:


@@ -46,7 +46,7 @@ Each of these building block APIs is independent, meaning that you can use any n
|----------------|-------------|
| [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
| [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at-least-once message delivery guarantee, message TTL, consumer groups and other advance features.
| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows.
| [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure Cosmos DB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
| [**Resource bindings**]({{< ref "bindings-overview.md" >}}) | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
| [**Actors**]({{< ref "actors-overview.md" >}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.


@@ -108,7 +108,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
## Actor reminders
{{% alert title="Note" color="primary" %}}
In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}). When upgrading to Dapr v1.15, all existing reminders are automatically migrated to the Scheduler service as a one-time operation for each actor type, with no loss of reminders.
{{% /alert %}}
Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.


@@ -8,7 +8,7 @@ description: "Overview of the jobs API building block"
Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or for a specific interval.
Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the Scheduler service to schedule actor reminders.
Jobs in Dapr consist of:
- [The jobs API building block]({{< ref jobs_api.md >}})
@@ -57,7 +57,9 @@ The jobs API provides several features to make it easy for you to schedule jobs.
### Schedule jobs across multiple replicas
When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
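As a sketch of the replacement behavior, scheduling the same job name twice simply overwrites the first definition; the job name below is illustrative, and the exact request body schema is described in the [jobs API reference]({{< ref jobs_api.md >}}):
```sh
# Both calls target the same job name, so only the second definition is kept
curl -X POST http://localhost:3500/v1.0-alpha1/jobs/daily-cleanup \
  -H "Content-Type: application/json" \
  -d '{ "schedule": "@every 24h" }'

curl -X POST http://localhost:3500/v1.0-alpha1/jobs/daily-cleanup \
  -H "Content-Type: application/json" \
  -d '{ "schedule": "@every 12h" }'
```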
## Try out the jobs API


@@ -6,7 +6,9 @@ weight: 4000
description: "The Dapr Workflow engine architecture"
---
[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. Dapr Workflows are built on top of Dapr Actors, which provide durability and scalability for workflow execution.
This article describes:
- The architecture of the Dapr Workflow engine
- How the workflow engine interacts with application code
@@ -72,7 +74,7 @@ The internal workflow actor types are only registered after an app has registere
### Workflow actors
There are 2 different types of actors used with workflows: workflow actors and activity actors. Workflow actors are responsible for managing the state and placement of all workflows running in the app. A new instance of the workflow actor is activated for every workflow instance that gets created. The ID of the workflow actor is the ID of the workflow. This internal actor stores the state of the workflow as it progresses and determines the node on which the workflow code executes via the actor placement service.
Each workflow actor saves its state using the following keys in the configured state store:
@@ -84,7 +86,7 @@ Each workflow actor saves its state using the following keys in the configured s
| `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
{{% alert title="Warning" color="warning" %}}
Workflow actor state remains in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. To address this, either purge workflows using their ID or directly delete entries in the workflow state store.
{{% /alert %}}
The following diagram illustrates the typical lifecycle of a workflow actor.
@@ -122,7 +124,7 @@ Activity actors are short-lived:
### Reminder usage and execution guarantees
The Dapr Workflow ensures workflow fault-tolerance by using [actor reminders]({{< ref "../actors/actors-timers-reminders.md#actor-reminders" >}}) to recover from transient system failures. Prior to invoking application workflow code, the workflow or activity actor will create a new reminder. If the application code executes without interruption, the reminder is deleted. However, if the node or the sidecar hosting the associated workflow or activity crashes, the reminder will reactivate the corresponding actor and the execution will be retried.
<img src="/images/workflow-overview/workflow-actor-reminder-flow.png" width=600 alt="Diagram showing the process of invoking workflow actors"/>


@@ -8,12 +8,13 @@ description: "Overview of how to get Dapr running on your Kubernetes cluster"
Dapr can be configured to run on any supported versions of Kubernetes. To achieve this, Dapr begins by deploying the following Kubernetes services, which provide first-class integration to make running applications with Dapr easy.
| Kubernetes services | Description |
|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `dapr-operator` | Manages [component]({{< ref components >}}) updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) |
| `dapr-sidecar-injector` | Injects Dapr into [annotated](#adding-dapr-to-a-kubernetes-deployment) deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. |
| `dapr-placement` | Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods |
| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}) |
| `dapr-placement` | Used for [actors]({{< ref actors >}}) only. Creates mapping tables that map actor instances to pods |
| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information read the [security overview]({{< ref "security-concept.md" >}}) |
| `dapr-scheduler` | Provides distributed job scheduling capabilities used by the Jobs API, Workflow API, and Actor Reminders |
<img src="/images/overview-kubernetes.png" width=1000>


@@ -120,13 +120,15 @@ In some scenarios, nodes may have memory and/or cpu pressure and the Dapr contro
for eviction. To prevent this, you can set a critical priority class name for the Dapr control plane pods. This ensures that
the Dapr control plane pods are not evicted unless all other pods with lower priority are evicted.
It's particularly important to protect the Dapr control plane components from eviction, especially the Scheduler service. When Schedulers are rescheduled or restarted, it can be highly disruptive to in-flight jobs, potentially causing them to fire duplicate times. To prevent such disruptions, ensure that the Dapr control plane components have a higher priority class than your application workloads.
Learn more about [Protecting Mission-Critical Pods](https://kubernetes.io/blog/2023/01/12/protect-mission-critical-pods-priorityclass/).
There are two built-in critical priority classes in Kubernetes:
- `system-cluster-critical`
- `system-node-critical` (highest priority)
It's recommended to set the `priorityClassName` to `system-cluster-critical` for the Dapr control plane pods. If you have your own custom priority classes for your applications, ensure they have a lower priority value than the one assigned to the Dapr control plane to maintain system stability and prevent disruption of core Dapr services.
For a new Dapr control plane deployment, the `system-cluster-critical` priority class mode can be set via the helm value `global.priorityClassName`.
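For example (release name and namespace are illustrative):
```sh
helm upgrade dapr dapr/dapr \
  --namespace dapr-system \
  --set global.priorityClassName=system-cluster-critical \
  --wait
```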
@@ -155,7 +157,6 @@ spec:
values: [system-cluster-critical]
```
## Deploy Dapr with Helm
[Visit the full guide on deploying Dapr with Helm]({{< ref "kubernetes-deploy.md#install-with-helm-advanced" >}}).


@@ -17,7 +17,6 @@ For CLI there is no explicit opt-in, just the version that this was first made a
| --- | --- | --- | --- | --- |
| **Pluggable components** | Allows creating self-hosted gRPC-based components written in any language that supports gRPC. The following component APIs are supported: State stores, Pub/sub, Bindings | N/A | [Pluggable components concept]({{<ref "components-concept#pluggable-components" >}})| v1.9 |
| **Multi-App Run for Kubernetes** | Configure multiple Dapr applications from a single configuration file and run from a single command on Kubernetes | `dapr run -k -f` | [Multi-App Run]({{< ref multi-app-dapr-run.md >}}) | v1.12 |
| **Workflows** | Author workflows as code to automate and orchestrate tasks within your application, like messaging, state management, and failure handling | N/A | [Workflows concept]({{< ref "components-concept#workflows" >}})| v1.10 |
| **Cryptography** | Encrypt or decrypt data without having to manage secrets keys | N/A | [Cryptography concept]({{< ref "components-concept#cryptography" >}})| v1.11 |
| **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 |
| **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 |


@@ -13,11 +13,11 @@ The jobs API is currently in alpha.
With the jobs API, you can schedule jobs and tasks in the future.
> The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
> recommended as they implement the gRPC APIs, providing higher performance and capability than the HTTP APIs. This is because HTTP requires JSON marshalling, which can be expensive, while with gRPC the data is transmitted and stored as-is, which is more performant.
## Schedule a job
Schedule a job with a name. Jobs are scheduled based on the clock of the server where the Scheduler service is running. The timestamp is not converted to UTC. You can provide the timezone with the timestamp in RFC3339 format to specify which timezone you'd like the job to adhere to. If no timezone is provided, the server's local time is used.
```
POST http://localhost:3500/v1.0-alpha1/jobs/<name>
```
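For example, a hedged sketch of scheduling a one-shot job whose `dueTime` carries an explicit timezone offset; the job name is illustrative, and the full request body schema is documented in this reference:
```sh
# The +02:00 offset makes the job fire at 09:00 in that timezone
curl -X POST http://localhost:3500/v1.0-alpha1/jobs/morning-report \
  -H "Content-Type: application/json" \
  -d '{ "dueTime": "2025-03-01T09:00:00+02:00" }'
```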


@@ -6,7 +6,7 @@ description: "Detailed documentation on the workflow API"
weight: 300
---
Dapr provides users with the ability to interact with workflows through its built-in workflow engine, which is implemented using Dapr Actors. This workflow engine is accessed using the name `dapr` in API calls as the `workflowComponentName`.
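For example, a sketch of starting a workflow, showing where the `dapr` component name appears in the route; the workflow name, instance ID, and payload are hypothetical:
```sh
curl -X POST "http://localhost:3500/v1.0/workflows/dapr/OrderProcessingWorkflow/start?instanceID=order-123" \
  -H "Content-Type: application/json" \
  -d '{ "orderId": 123 }'
```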
## Start workflow request
@@ -36,7 +36,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in Dapr code
### Response content
@@ -76,7 +76,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in Dapr code
### Response content
@@ -163,7 +163,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
`500` | Error in Dapr code
### Response content
@@ -194,7 +194,7 @@ Code | Description
---- | -----------
`202` | Accepted
`400` | Request was malformed
`500` | Error in Dapr code
### Response content
@@ -221,7 +221,7 @@ Code | Description
---- | -----------
`200` | OK
`400` | Request was malformed
`500` | Error in Dapr code
### Response content
@@ -244,30 +244,6 @@ Parameter | Description
--------- | -----------
`runtimeStatus` | The status of the workflow instance. Values include: `"RUNNING"`, `"COMPLETED"`, `"CONTINUED_AS_NEW"`, `"FAILED"`, `"CANCELED"`, `"TERMINATED"`, `"PENDING"`, `"SUSPENDED"`
## Component format
A Dapr `workflow.yaml` component file has the following structure:
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
spec:
  type: workflow.<TYPE>
  version: v1.0-alpha1
  metadata:
  - name: <NAME>
    value: <VALUE>
```
| Setting | Description |
| ------- | ----------- |
| `metadata.name` | The name of the workflow component. |
| `spec/metadata` | Additional metadata parameters specified by the workflow component |
However, Dapr comes with a built-in `dapr` workflow component that is built on Dapr Actors. No component file is required to use the built-in Dapr workflow component.
## Next Steps
- [Workflow API overview]({{< ref workflow-overview.md >}})


@@ -1,10 +0,0 @@
---
type: docs
title: "Workflow backend component specs"
linkTitle: "Workflow backend"
weight: 2000
description: The supported workflow backend that orchestrates workflows and saves workflow state
no_list: true
---
{{< partial "components/description.html" >}}


@@ -1,24 +0,0 @@
---
type: docs
title: "Actor workflow backend"
linkTitle: "Actor workflow backend"
description: Detailed information on the Actor workflow backend component
---
## Component format
The Actor workflow backend is the default backend in Dapr. If no workflow backend is explicitly defined, the Actor backend will be used automatically.
You don't need to define any components to use the Actor workflow backend. It's ready to use out-of-the-box.
However, if you wish to explicitly define the Actor workflow backend as a component, you can do so, as shown in the example below.
```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: actorbackend
spec:
  type: workflowbackend.actor
  version: v1
```