mirror of https://github.com/dapr/docs.git

Apply suggestions from code review

Co-authored-by: Mark Fussell <markfussell@gmail.com>
Co-authored-by: Josh van Leeuwen <me@joshvanl.dev>
Signed-off-by: Cassie Coyle <cassie.i.coyle@gmail.com>

parent 19e00ea4f9
commit 3602fccfb8
@@ -7,12 +7,12 @@ description: "Overview of the Dapr scheduler service"
 
 The Dapr Scheduler service is used to schedule different types of jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}).
 
 - Jobs created through the Jobs API
-- Actor reminder jobs (used by the Actor Reminders feature)
-- Actor reminder jobs created by the Workflow API (which uses Actor Reminders under the hood)
+- Actor reminder jobs (used by the actor reminders)
+- Actor reminder jobs created by the Workflow API (which uses actor reminders)
 
-As of Dapr v1.15, the Scheduler service is used by default to schedule actor reminders as well as actor reminders under the hood for the Workflow API. All of these jobs are tracked by the Scheduler service and stored in an embedded etcd database.
+From Dapr v1.15, the Scheduler service is used by default to schedule actor reminders as well as actor reminders for the Workflow API.
 
-There is no concept of a leader Scheduler instance. All Scheduler service replicas are considered peers. All receive jobs to be scheduled for execution and the jobs are divvied up between the available Scheduler service replicas for trigger load balancing.
+There is no concept of a leader Scheduler instance. All Scheduler service replicas are considered peers. All receive jobs to be scheduled for execution and the jobs are allocated between the available Scheduler service replicas for load balancing of the trigger events.
 
+The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded etcd database.
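The hunk above describes jobs created through the Jobs API and tracked by the Scheduler service. As a minimal sketch of what such a request could look like, the snippet below builds the URL and body for scheduling a job through the sidecar's alpha jobs HTTP API; the path prefix, port, and field names are assumptions based on the alpha API and should be checked against the Jobs API reference for your Dapr version.

```python
import json

def build_job_request(name: str, schedule: str, repeats: int, data: dict):
    """Build a (url, body) pair for scheduling a named job via the sidecar.

    Assumed endpoint shape: POST http://localhost:3500/v1.0-alpha1/jobs/<name>
    """
    url = f"http://localhost:3500/v1.0-alpha1/jobs/{name}"
    body = json.dumps({
        "schedule": schedule,  # e.g. a cron expression or "@every 5m"
        "repeats": repeats,    # how many times the job should trigger
        "data": data,          # payload delivered back to the app on trigger
    })
    return url, body

# Hypothetical job name and payload, purely for illustration.
url, body = build_job_request("prod-db-backup", "@every 5m", 10, {"task": "db-backup"})
```

Because every Scheduler replica is a peer, the app can send this request to its local sidecar regardless of which Scheduler instance ends up triggering the job.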
@@ -22,15 +22,15 @@ The diagram below shows how the Scheduler service is used via the jobs API when
 
 Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
 
-When you deploy Dapr v1.15, any _existing_ actor reminders are automatically migrated from the Placement service to the Scheduler service as a one time operation for each actor type. There will be _no_ loss of reminder triggers during the migration. However, you can prevent this migration and keep the existing actor reminders running using the Placement service by setting the `SchedulerReminders` flag to `false` in application configuration file for the actor type.
+When you deploy Dapr v1.15, any _existing_ actor reminders are automatically migrated from the Placement service to the Scheduler service as a one time operation for each actor type. There will be _no_ loss of reminder triggers during the migration. However, you can prevent this migration and keep the existing actor reminders running using the Placement service by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
 
-## Job Triggering
+## Job triggering
 
-### Job Ordering
+### Job ordering
 
-When the Scheduler service triggers a job there is no guarantee of job trigger ordering, meaning we do not guarantee FIFO or LIFO trigger ordering.
+When the Scheduler service triggers a job there is no guarantee of job trigger ordering, meaning there are no guarantees on FIFO or LIFO trigger ordering.
 
-### Job Failure Policy and Staging Queue
+### Job failure policy and staging queue
 
 When the Scheduler service triggers a job and it has a client side error, with the failure policy, the job is retried by default with a 1s interval and 3 maximum retries. A failure policy can be configured for a consistent retry or to drop a job.
 - Actor reminder type jobs retry forever until successful completion.
@@ -46,7 +46,7 @@ The Scheduler service Docker container is started automatically as part of `dapr
 
 The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
 
-When a Kubernetes namespace is cleaned up, all the jobs corresponding to that namespace are also cleaned up preventing unnecessary resource and memory usage in the embedded etcd.
+When a Kubernetes namespace is deleted, all the Job and Actor Reminders corresponding to that namespace are deleted.
 
 ## Disabling the Scheduler service
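The failure-policy hunk above says a failed job is retried by default with a 1s interval and 3 maximum retries, and that a policy can instead be configured for a constant retry or to drop the job. A hedged sketch of how such a job definition might be expressed follows; the `failurePolicy`/`constant`/`drop` field names are assumptions modeled on this description, not the authoritative API shape.

```python
def job_with_failure_policy(drop: bool = False) -> dict:
    """Build a hypothetical job definition carrying an explicit failure policy.

    drop=False mirrors the stated defaults (1s interval, 3 max retries);
    drop=True gives the job up after the first failed trigger.
    """
    if drop:
        policy = {"drop": {}}  # do not retry a failed trigger
    else:
        policy = {"constant": {"interval": "1s", "maxRetries": 3}}  # stated defaults
    return {"schedule": "@every 1m", "failurePolicy": policy}

retrying = job_with_failure_policy()
dropping = job_with_failure_policy(drop=True)
```

Note the exception called out in the diff: actor reminder type jobs ignore the retry cap and retry forever until successful completion.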
@@ -108,7 +108,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
 
 ## Actor reminders
 
 {{% alert title="Note" color="primary" %}}
-In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}). The actual API surface that you use to author Reminders/Timers for Actors hasn't changed and will continue to be available. All existing reminders are automatically migrated to the Scheduler service with _no_ loss of reminders as a one time operation for each actor type.
+In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}). When upgrading to Dapr v1.15 all existing reminders are automatically migrated to the Scheduler service with no loss of reminders as a one time operation for each actor type.
 {{% /alert %}}
 
 Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them or the actor is explicitly deleted or the number in invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using Dapr actor state provider.
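The reminders hunk above notes that the authoring API is unchanged even though storage moved to the Scheduler service. As a sketch, the snippet below builds a register-reminder request against the sidecar's actors HTTP API; the actor type, ID, reminder name, and payload values are examples, and the exact endpoint shape should be confirmed against the actors API reference.

```python
import json

def reminder_request(actor_type: str, actor_id: str, name: str):
    """Build a (url, body) pair for registering a persistent actor reminder.

    Assumed endpoint: POST /v1.0/actors/<type>/<id>/reminders/<name>
    """
    url = f"http://localhost:3500/v1.0/actors/{actor_type}/{actor_id}/reminders/{name}"
    body = json.dumps({
        "dueTime": "0h0m10s0ms",  # first trigger 10s after registration
        "period": "1h",           # then repeat hourly until unregistered
        "data": "reminder payload",
    })
    return url, body

# Hypothetical actor identity, purely for illustration.
url, body = reminder_request("MyActorType", "actor-1", "checkin")
```

Because the runtime persists the reminder, it keeps firing across actor deactivations and failovers until it is explicitly unregistered, the actor is deleted, or its invocation count is exhausted.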
@@ -57,7 +57,7 @@ The jobs API provides several features to make it easy for you to schedule jobs.
 
 ### Schedule jobs across multiple replicas
 
-When you create a job, it will replace any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job will be recorded and executed, even if all your apps schedule the same job on startup.
+When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
 
 The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
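The replace-by-name semantics described in the hunk above amount to an upsert keyed on the job name. This toy model (not Dapr code, just an illustration of the stated behavior) shows why three replicas scheduling the same job at startup still leave exactly one record:

```python
class JobStore:
    """Toy model of the embedded store: one record per job name."""

    def __init__(self):
        self._jobs = {}

    def create(self, name: str, definition: dict):
        # Creating a job with an existing name replaces the old record,
        # so the newest definition always wins.
        self._jobs[name] = definition

store = JobStore()
for replica in range(3):  # three app replicas schedule the same job on startup
    store.create("nightly-report", {"schedule": "@daily", "replica": replica})

assert len(store._jobs) == 1  # still a single record for "nightly-report"
```

This is why identical scheduling calls on startup are safe to repeat: the job name acts as the idempotency key.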
@@ -6,7 +6,7 @@ weight: 4000
 
 description: "The Dapr Workflow engine architecture"
 ---
 
-[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. Dapr Workflows are built on top of Dapr Actors, which serve as the sole backend implementation, providing durability and scalability for workflow execution.
+[Dapr Workflows]({{< ref "workflow-overview.md" >}}) allow developers to define workflows using ordinary code in a variety of programming languages. The workflow engine runs inside of the Dapr sidecar and orchestrates workflow code deployed as part of your application. Dapr Workflows are built on top of Dapr Actors providing durability and scalability for workflow execution.
 
 This article describes:
@@ -86,7 +86,7 @@ Each workflow actor saves its state using the following keys in the configured s
 
 | `metadata` | Contains meta information about the workflow as a JSON blob and includes details such as the length of the inbox, the length of the history, and a 64-bit integer representing the workflow generation (for cases where the instance ID gets reused). The length information is used to determine which keys need to be read or written to when loading or saving workflow state updates. |
 
 {{% alert title="Warning" color="warning" %}}
-Workflow actor state will remain in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. In a future release, data retention policies will be introduced that can automatically purge the state store of old workflow state.
+Workflow actor state remains in the state store even after a workflow has completed. Creating a large number of workflows could result in unbounded storage usage. To address this either purge workflows using their ID or directly delete entries in the workflow DB store.
 {{% /alert %}}
 
 The following diagram illustrates the typical lifecycle of a workflow actor.
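The warning rewritten in the hunk above recommends purging completed workflows by their ID to keep state-store usage bounded. As a sketch, the helper below builds the purge URL for an instance; the `v1.0-beta1` path prefix is an assumption based on the beta workflow HTTP API and should be checked against the API reference for your Dapr version.

```python
def purge_url(instance_id: str) -> str:
    """Build the URL for purging a workflow instance's state by ID.

    Assumed endpoint: POST /v1.0-beta1/workflows/dapr/<instanceId>/purge
    """
    return f"http://localhost:3500/v1.0-beta1/workflows/dapr/{instance_id}/purge"

# Hypothetical instance ID, purely for illustration.
url = purge_url("abc123")
```

Purging after completion is what prevents the unbounded storage growth the warning describes.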
@@ -6,7 +6,7 @@ description: "Detailed documentation on the workflow API"
 
 weight: 300
 ---
 
-Dapr provides users with the ability to interact with workflows through its built-in workflow engine, which is implemented exclusively using Dapr Actors as its backend. This workflow engine is accessed using the name dapr in API calls as the `workflowComponentName`.
+Dapr provides users with the ability to interact with workflows through its built-in workflow engine, which is implemented using Dapr Actors. This workflow engine is accessed using the name `dapr` in API calls as the `workflowComponentName`.
 
 ## Start workflow request
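The hunk above explains that the built-in engine is addressed by the name `dapr` as the `workflowComponentName`. A sketch of a start-workflow request using that name follows; the `v1.0-beta1` path prefix and `instanceID` query parameter are assumptions from the beta workflow HTTP API, and the workflow name and payload are examples.

```python
import json

def start_workflow_request(workflow_name: str, instance_id: str, payload: dict):
    """Build a (url, body) pair for starting a workflow on the built-in engine.

    Assumed endpoint:
    POST /v1.0-beta1/workflows/dapr/<workflowName>/start?instanceID=<id>
    """
    url = (f"http://localhost:3500/v1.0-beta1/workflows/dapr/"
           f"{workflow_name}/start?instanceID={instance_id}")
    return url, json.dumps(payload)  # payload becomes the workflow input

# Hypothetical workflow name and input, purely for illustration.
url, body = start_workflow_request("order_processing", "order-1", {"item": "widget"})
```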