mirror of https://github.com/dapr/docs.git
Merge branch 'v1.14' into eks-tutorial
commit c216f3ad98
@ -0,0 +1,149 @@
---
type: docs
title: "How-To: Schedule and handle triggered jobs"
linkTitle: "How-To: Schedule and handle triggered jobs"
weight: 2000
description: "Learn how to use the jobs API to schedule and handle triggered jobs"
---

Now that you've learned what the [jobs building block]({{< ref jobs-overview.md >}}) provides, let's look at an example of how to use the API. The code example below describes an application that schedules jobs for a database backup application and handles them at trigger time, also known as the time the job was sent back to the application because it reached its `dueTime`.

<!--
Include a diagram or image, if possible.
-->

## Start the Scheduler service

When you [run `dapr init` in either self-hosted mode or on Kubernetes]({{< ref install-dapr-selfhost.md >}}), the Dapr Scheduler service is started.
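
In self-hosted mode, you can sanity-check that the service is running by looking for the `dapr_scheduler` container (assuming the default Docker-based install):

```bash
docker ps --filter "name=dapr_scheduler"
```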

## Set up the Jobs API

In your code, set up and schedule jobs within your application.

{{< tabs "Go" >}}

{{% codetab %}}

<!--go-->

The following Go SDK code sample schedules the job named `prod-db-backup`. Job data is housed in a backup database (`"my-prod-db"`) and is scheduled with `ScheduleJobAlpha1`. This provides the `jobData`, which includes:
- The backup `Task` name
- The backup task's `Metadata`, including:
  - The database name (`DBName`)
  - The database location (`BackupLocation`)
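
These fields map to a small payload type in the example's `api` package (imported below from `github.com/dapr/go-sdk/examples/dist-scheduler/api`). The exact definitions aren't reproduced in this doc; a plausible sketch consistent with the fields used here would be:

```go
package api

// Metadata describes the database to back up and where to write the backup.
// Field names match their use in the sample; the JSON tags are assumptions.
type Metadata struct {
	DBName         string `json:"dbName"`
	BackupLocation string `json:"backupLocation"`
}

// DBBackup is the payload serialized into the job's data.
type DBBackup struct {
	Task     string   `json:"task"`
	Metadata Metadata `json:"metadata"`
}
```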

```go
package main

import (
	//...

	daprc "github.com/dapr/go-sdk/client"
	"github.com/dapr/go-sdk/examples/dist-scheduler/api"
	"github.com/dapr/go-sdk/service/common"
	daprs "github.com/dapr/go-sdk/service/grpc"
)

func main() {
	// Initialize the server
	server, err := daprs.NewService(":50070")
	// ...

	if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
		log.Fatalf("failed to register job event handler: %v", err)
	}

	log.Println("starting server")
	go func() {
		if err = server.Start(); err != nil {
			log.Fatalf("failed to start server: %v", err)
		}
	}()
	// ...

	// Set up the backup location in the job payload
	jobData, err := json.Marshal(&api.DBBackup{
		Task: "db-backup",
		Metadata: api.Metadata{
			DBName:         "my-prod-db",
			BackupLocation: "/backup-dir",
		},
	})
	// ...
}
```

The job is scheduled with a `Schedule` and the desired number of `Repeats`. These settings determine the maximum number of times the job is triggered and sent back to the app.

In this example, at trigger time, which is `@every 1s` according to the `Schedule`, this job is triggered and sent back to the application up to the maximum `Repeats` (`10`).

```go
// ...
// Set up the job
job := daprc.Job{
	Name:     "prod-db-backup",
	Schedule: "@every 1s",
	Repeats:  10,
	Data: &anypb.Any{
		Value: jobData,
	},
}
```
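
The sample elides the call that actually registers the job with the Scheduler. Assuming a client created with `daprc.NewClient()`, the scheduling call would look roughly like this sketch, using the Go SDK's alpha jobs method `ScheduleJobAlpha1`:

```go
// ...
client, err := daprc.NewClient()
if err != nil {
	log.Fatalf("failed to create Dapr client: %v", err)
}
defer client.Close()

// Register the job with the Scheduler via the alpha jobs API
if err = client.ScheduleJobAlpha1(context.Background(), &job); err != nil {
	log.Fatalf("failed to schedule job: %v", err)
}
```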

At trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for the job. For example:

```go
// ...

// At job trigger time this function is called
func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
	var jobData common.Job
	if err := json.Unmarshal(job.Data, &jobData); err != nil {
		// ...
	}

	// The job value arrives base64-encoded, so decode it before unmarshaling the payload
	decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value)
	// ...

	var jobPayload api.DBBackup
	if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil {
		// ...
	}

	// jobCount is a counter declared elsewhere in the example
	fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload)
	jobCount++
	return nil
}
```

{{% /codetab %}}

{{< /tabs >}}

## Run the Dapr sidecar

Once you've set up the jobs API in your application, run the Dapr sidecar in a terminal window with the following command. Note that `--app-port 50070` matches the port the Go service listens on (`daprs.NewService(":50070")` above).

{{< tabs "Go" >}}

{{% codetab %}}

```bash
dapr run --app-id=distributed-scheduler \
    --metrics-port=9091 \
    --dapr-grpc-port 50001 \
    --app-port 50070 \
    --app-protocol grpc \
    --log-level debug \
    go run ./main.go
```

{{% /codetab %}}

{{< /tabs >}}
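
With the app and sidecar running, you can optionally inspect the scheduled job through the [jobs API]({{< ref jobs_api.md >}}). The sketch below assumes the sidecar's HTTP port is the default `3500`; since the command above doesn't pin one with `--dapr-http-port`, use the port printed in the `dapr run` output:

```bash
curl -X GET http://localhost:3500/v1.0-alpha1/jobs/prod-db-backup -H "Content-Type: application/json"
```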

## Next steps

- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

@ -1,32 +0,0 @@
---
type: docs
title: "How-To: Schedule jobs"
linkTitle: "How-To: Schedule jobs"
weight: 2000
description: "Learn how to use the jobs API to schedule jobs"
---

Now that you've learned what the [jobs building block]({{< ref jobs-overview.md >}}) provides, let's look at an example of how to use the API. The code example below describes an application that schedules jobs for a **TBD** application.

<!--
Include a diagram or image, if possible.
-->

## Set up the Scheduler service

When you run `dapr init` in either self-hosted mode or on Kubernetes, the Dapr Scheduler service is started.

## Run the Dapr sidecar

Run the Dapr sidecar alongside your application.

```bash
dapr run --app-id=jobs --app-port 50070 --app-protocol grpc --log-level debug -- go run main.go
```

## Next steps

- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

@ -8,14 +8,19 @@ description: "Overview of the jobs API building block"

Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or on a recurring interval.

Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the Scheduler service to schedule actor reminders.

Jobs in Dapr consist of:
- [The jobs API building block]({{< ref jobs_api.md >}})
- [The Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})

[See example scenarios.]({{< ref "#scenarios" >}})

{{% alert title="Warning" color="warning" %}}
By default, job data is not resilient to [Scheduler]({{< ref scheduler.md >}}) service restarts.
A persistent volume must be provided to Scheduler to ensure job data is not lost in either [Kubernetes]({{< ref kubernetes-persisting-scheduler.md >}}) or [self-hosted]({{< ref self-hosted-persisting-scheduler.md >}}) mode.
{{% /alert %}}

<img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">

## How it works

@ -34,19 +39,19 @@ You can use jobs to:

Job scheduling can prove helpful in the following scenarios:

- **Automated Database Backups**:
  Ensure a database is backed up daily to prevent data loss. Schedule a backup script to run every night at 2 AM, which creates a backup of the database and stores it in a secure location.

- **Regular Data Processing and ETL (Extract, Transform, Load)**:
  Process and transform raw data from various sources and load it into a data warehouse. Schedule ETL jobs to run at specific times (for example: hourly, daily) to fetch new data, process it, and update the data warehouse with the latest information.

- **Email Notifications and Reports**:
  Receive daily sales reports and weekly performance summaries via email. Schedule a job that generates the required reports and sends them via email at 6 AM every day for daily reports and 8 AM every Monday for weekly summaries.

- **Maintenance Tasks and System Updates**:
  Perform regular maintenance tasks such as clearing temporary files, updating software, and checking system health. Schedule various maintenance scripts to run at off-peak hours, such as weekends or late nights, to minimize disruption to users.

- **Batch Processing for Financial Transactions**:
  Process a large number of transactions that need to be batched and settled at the end of each business day. Schedule batch processing jobs to run at 5 PM every business day, aggregating the day's transactions and performing necessary settlements and reconciliations.

Dapr's jobs API ensures the tasks represented in these scenarios are performed consistently and reliably without manual intervention, improving efficiency and reducing the risk of errors.

@ -65,10 +70,10 @@ Actors have actor reminders, but present some limitations involving scalability

## Try out the jobs API

You can try out the jobs API in your application. After [Dapr is installed]({{< ref install-dapr-cli.md >}}), you can begin using the jobs API, starting with [the How-to: Schedule jobs guide]({{< ref howto-schedule-and-handle-triggered-jobs.md >}}).

## Next steps

- [Learn how to use the jobs API]({{< ref howto-schedule-and-handle-triggered-jobs.md >}})
- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})

@ -95,28 +95,11 @@ dapr init

**Expected output:**

<img src="/images/install-dapr-selfhost/dapr-init-output.png" style="padding-bottom: 5px">

[See the troubleshooting guide if you encounter any error messages regarding Docker not being installed or running.]({{< ref "common_issues.md#dapr-cant-connect-to-docker-when-installing-the-dapr-cli" >}})

### Step 3: Verify Dapr version

```bash
dapr --version
```
@ -138,7 +121,7 @@ docker ps

**Output:**

<img src="/images/install-dapr-selfhost/docker-containers.png">

### Step 5: Verify components directory has been initialized

@ -189,5 +172,14 @@ explorer "%USERPROFILE%\.dapr"

<br>

### Slim init

To install the CLI without any default configuration files or Docker containers, use the `--slim` flag. [Learn more about the `init` command and its flags.]({{< ref dapr-init.md >}})

```bash
dapr init --slim
```

{{< button text="Next step: Use the Dapr API >>" page="getting-started/get-started-api.md" >}}

@ -0,0 +1,55 @@
---
type: docs
title: "How-to: Persist Scheduler Jobs"
linkTitle: "How-to: Persist Scheduler Jobs"
weight: 50000
description: "Configure Scheduler to persist its database to make it resilient to restarts"
---

The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded database and scheduling them for execution.
By default, the Scheduler service writes this data to an in-memory, ephemeral tmpfs volume, meaning that **this data is not persisted across restarts**. Job data is lost during these events.

To make the Scheduler data resilient to restarts, a persistent volume must be mounted to the Scheduler `StatefulSet`.
This persistent volume is backed by a real disk provided by the hosting cloud provider or Kubernetes infrastructure platform.
Disk size is determined by how many jobs are expected to be persisted at once; however, 64GB should be more than sufficient for most use cases.
Some Kubernetes providers recommend using a [CSI driver](https://kubernetes.io/docs/concepts/storage/volumes/#csi) to provision the underlying disks.
Below is a list of useful links to the relevant documentation for creating a persistent disk for the major cloud providers:
- [Google Cloud Persistent Disk](https://cloud.google.com/compute/docs/disks)
- [Amazon EBS Volumes](https://aws.amazon.com/blogs/storage/persistent-storage-for-kubernetes/)
- [Azure AKS Storage Options](https://learn.microsoft.com/azure/aks/concepts-storage)
- [Digital Ocean Block Storage](https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/)
- [VMware vSphere Storage](https://docs.vmware.com/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-A19F6480-40DC-4343-A5A9-A5D3BFC0742E.html)
- [OpenShift Persistent Storage](https://docs.openshift.com/container-platform/4.6/storage/persistent_storage/persistent-storage-aws-efs.html)
- [Alibaba Cloud Disk Storage](https://www.alibabacloud.com/help/ack/ack-managed-and-ack-dedicated/user-guide/create-a-pvc)
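
Before installing, you can check which storage classes your cluster already exposes; this uses only standard `kubectl` and assumes a working kubeconfig:

```bash
kubectl get storageclass
```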

Once the persistent volume class is available, you can install Dapr using the following command, with Scheduler configured to use the persistent volume class (replace `my-storage-class` with the name of the storage class):

{{% alert title="Note" color="primary" %}}
If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated with the new persistent volume.
{{% /alert %}}

{{< tabs "Dapr CLI" "Helm" >}}
<!-- Dapr CLI -->
{{% codetab %}}

```bash
dapr init -k --set dapr_scheduler.cluster.storageClassName=my-storage-class
```

{{% /codetab %}}

<!-- Helm -->
{{% codetab %}}

```bash
helm upgrade --install dapr dapr/dapr \
  --version={{% dapr-latest-version short="true" %}} \
  --namespace dapr-system \
  --create-namespace \
  --set dapr_scheduler.cluster.storageClassName=my-storage-class \
  --wait
```

{{% /codetab %}}
{{< /tabs >}}
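
After the install completes, you can confirm that the Scheduler pods bound their persistent volume claims. The namespace below assumes a default `dapr-system` install; resource names may vary with your configuration:

```bash
kubectl get statefulsets,pvc -n dapr-system
```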
@ -0,0 +1,27 @@
---
type: docs
title: "How-to: Persist Scheduler Jobs"
linkTitle: "How-to: Persist Scheduler Jobs"
weight: 50000
description: "Configure Scheduler to persist its database to make it resilient to restarts"
---

The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded database and scheduling them for execution.
By default, the Scheduler service writes this data to the local volume `dapr_scheduler`, meaning that **this data is persisted across restarts**.

The host file location for this local volume is typically either `/var/lib/docker/volumes/dapr_scheduler/_data` or `~/.local/share/containers/storage/volumes/dapr_scheduler/_data`, depending on your container runtime.
Note that if you are using Docker Desktop, this volume is located in the Docker Desktop VM's filesystem, which can be accessed using:

```bash
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
```

The default Scheduler volume can be replaced with a custom volume, either one that already exists or one that Dapr creates for you.

{{% alert title="Note" color="primary" %}}
By default, `dapr init` creates a local persistent volume on your drive called `dapr_scheduler`. If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler container to be recreated with the new persistent volume.
{{% /alert %}}

```bash
dapr init --scheduler-volume my-scheduler-volume
```
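
To confirm the custom volume exists and the Scheduler container is using it, standard Docker commands suffice (assuming the Docker container runtime; `my-scheduler-volume` is the example name from above):

```bash
docker volume inspect my-scheduler-volume
docker ps --filter "name=dapr_scheduler"
```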
@ -10,8 +10,16 @@ weight: 1300
The jobs API is currently in alpha.
{{% /alert %}}

{{% alert title="Warning" color="warning" %}}
By default, job data is not resilient to [Scheduler]({{< ref scheduler.md >}}) service restarts.
A persistent volume must be provided to Scheduler to ensure job data is not lost in either [Kubernetes]({{< ref kubernetes-persisting-scheduler.md >}}) or [self-hosted]({{< ref self-hosted-persisting-scheduler.md >}}) mode.
{{% /alert %}}

With the jobs API, you can schedule jobs and tasks in the future.

> The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
> recommended as they implement the gRPC APIs, providing higher performance and capability than the HTTP APIs.

## Schedule a job

Schedule a job with a name.

@ -22,22 +30,50 @@ POST http://localhost:3500/v1.0-alpha1/jobs/<name>

### URL parameters

{{% alert title="Note" color="primary" %}}
At least one of `schedule` or `dueTime` must be provided, but they can also be provided together.
{{% /alert %}}

Parameter | Description
--------- | -----------
`name` | Name of the job you're scheduling
`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data.
`schedule` | An optional schedule at which the job is to be run. Details of the format are below.
`dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, a Go duration string (calculated from creation time), or a non-repeating ISO8601 duration.
`repeats` | An optional number of times the job should be triggered. If not set, the job runs indefinitely or until expiration.
`ttl` | An optional time to live or expiration for the job. Accepts a "point in time" string in the format of RFC3339, a Go duration string (calculated from job creation time), or a non-repeating ISO8601 duration.

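For instance, a one-shot job that fires 24 hours after creation and expires at a fixed point in time could combine these fields; the values here are illustrative only:

```json
{
    "job": {
        "data": {
            "@type": "type.googleapis.com/google.protobuf.StringValue",
            "value": "\"someData\""
        },
        "dueTime": "24h",
        "ttl": "2030-10-02T15:00:00Z"
    }
}
```
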
#### schedule

`schedule` accepts both systemd timer-style cron expressions and human-readable '@' prefixed period strings, as defined below.

Systemd timer-style cron accepts 6 fields:

seconds | minutes | hours | day of month | month | day of week
------- | ------- | ----- | ------------ | ------------ | -----------
0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat

For example:
- `"0 30 * * * *"` - every hour on the half hour
- `"0 15 3 * * *"` - every day at 03:15

Period string expressions:

Entry | Description | Equivalent To
----- | ----------- | -------------
`@every <duration>` | Run every `<duration>` (e.g. `@every 1h30m`) | N/A
`@yearly` (or `@annually`) | Run once a year, midnight, Jan. 1st | `0 0 0 1 1 *`
`@monthly` | Run once a month, midnight, first of month | `0 0 0 1 * *`
`@weekly` | Run once a week, midnight on Sunday | `0 0 0 * * 0`
`@daily` (or `@midnight`) | Run once a day, midnight | `0 0 0 * * *`
`@hourly` | Run once an hour, beginning of hour | `0 0 * * * *`

### Request body

```json
{
    "job": {
        "data": {
            "@type": "type.googleapis.com/google.protobuf.StringValue",
            "value": "\"someData\""
        },
        "dueTime": "30s"
    }
}
```

@ -46,24 +82,26 @@ Parameter | Description

Code | Description
---- | -----------
`204` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in Dapr code or Scheduler control plane service

### Response content

The following example curl command creates a job, naming the job `jobforjabba` and specifying the `schedule`, `repeats`, and the `data`.

```bash
$ curl -X POST \
    http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \
    -H "Content-Type: application/json" \
    -d '{
        "job": {
            "data": {
                "@type": "type.googleapis.com/google.protobuf.StringValue",
                "value": "Running spice"
            },
            "schedule": "@every 1m",
            "repeats": 5
        }
    }'
```

@ -87,33 +125,35 @@ Parameter | Description

Code | Description
---- | -----------
`200` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, job doesn't exist, or error in Dapr code or Scheduler control plane service

### Response content

After running the following example curl command, the returned response is JSON containing the `name` of the job, its `schedule`, the number of `repeats`, and the `data`.

```bash
$ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Type: application/json"
```

```json
{
    "name": "jobforjabba",
    "schedule": "@every 1m",
    "repeats": 5,
    "data": {
        "@type": "type.googleapis.com/google.protobuf.StringValue",
        "value": "Running spice"
    }
}
```

## Delete a job

Delete a named job.

```
DELETE http://localhost:3500/v1.0-alpha1/jobs/<name>
```

### URL parameters

@ -126,7 +166,7 @@ Parameter | Description

Code | Description
---- | -----------
`204` | Accepted
`400` | Request was malformed
`500` | Request formatted correctly, error in Dapr code or Scheduler control plane service

@ -135,7 +175,7 @@ Code | Description

In the following example curl command, the job named `jobforjabba` is deleted:

```bash
$ curl -X DELETE http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Type: application/json"
```

@ -45,6 +45,7 @@ dapr init [flags]

| N/A | DAPR_HELM_REPO_PASSWORD | A password for a private Helm chart | The password required to access the private Dapr Helm chart. If it can be accessed publicly, this env variable does not need to be set. | |
| `--container-runtime` | | `docker` | Used to pass in a different container runtime other than Docker. Supported container runtimes are: `docker`, `podman` |
| `--dev` | | | Creates Redis and Zipkin deployments when run in Kubernetes. |
| `--scheduler-volume` | | | Self-hosted only. Optionally, you can specify a volume for the Scheduler service data directory. By default, without this flag, Scheduler data is not persisted and not resilient to restarts. |

### Examples

@ -55,7 +56,9 @@ dapr init [flags]

**Install**

Install Dapr by pulling container images for Placement, Scheduler, Redis, and Zipkin. By default, these images are pulled from Docker Hub.

> By default, a `dapr_scheduler` local volume is created for the Scheduler service to be used as the database directory. The host file location for this volume is likely `/var/lib/docker/volumes/dapr_scheduler/_data` or `~/.local/share/containers/storage/volumes/dapr_scheduler/_data`, depending on your container runtime.

```bash
dapr init
```

@ -24,10 +24,10 @@ dapr uninstall [flags]

| Name | Environment Variable | Default | Description |
| -------------------- | -------------------- | ------------- | ----------- |
| `--all` | | `false` | Remove Redis, Zipkin containers in addition to the Scheduler service and the actor Placement service containers. Remove the default Dapr dir located at `$HOME/.dapr` or `%USERPROFILE%\.dapr\`. |
| `--help`, `-h` | | | Print this help message |
| `--kubernetes`, `-k` | | `false` | Uninstall Dapr from a Kubernetes cluster |
| `--namespace`, `-n` | | `dapr-system` | The Kubernetes namespace from which Dapr is uninstalled |
| `--container-runtime` | | `docker` | Used to pass in a different container runtime other than Docker. Supported container runtimes are: `docker`, `podman` |

### Examples

Binary file not shown.
After Width: | Height: | Size: 41 KiB |
Binary file not shown.
Before Width: | Height: | Size: 21 KiB After Width: | Height: | Size: 23 KiB |