[Jobs API] Describe Triggered Job Handling Assumptions (#4376)

* add specific logic for what assumptions are made for triggered jobs for http, grpc, sdks

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* rm space

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* add a note about this applying to all programming languages to avoid confusion

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>

* Update howto-schedule-and-handle-triggered-jobs.md

Signed-off-by: Yaron Schneider <schneider.yaron@live.com>

---------

Signed-off-by: Cassandra Coyle <cassie@diagrid.io>
Signed-off-by: Yaron Schneider <schneider.yaron@live.com>
Co-authored-by: Yaron Schneider <schneider.yaron@live.com>
This commit is contained in:
Cassie Coyle 2024-10-10 16:59:09 -05:00 committed by GitHub
parent ae6d065260
commit e3068a9a5d
1 changed file with 70 additions and 1 deletion


@@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul
At trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job. For example:
#### HTTP
When you create a job using Dapr's Jobs API, Dapr automatically assumes there is an endpoint available at
`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
triggered. For example:
*Note: The following example is in Go but applies to any programming language.*
```go
func main() {
    ...
    // Catch-all handler invoked for any triggered job under the /job/ prefix.
    http.HandleFunc("/job/", handleJob)
    // Handler for one specific job; replace <job-name> with the scheduled job's name.
    http.HandleFunc("/job/<job-name>", specificJob)
    ...
}

func specificJob(w http.ResponseWriter, r *http.Request) {
    // Handle the specific triggered job
}

func handleJob(w http.ResponseWriter, r *http.Request) {
    // Handle the triggered jobs
}
```
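Inside these handlers, the triggered job's payload is available on the incoming request. Below is a minimal sketch of one way to fill in the catch-all `handleJob` handler, assuming the payload is delivered as the request body and that a 2xx response acknowledges the trigger; the path parsing and logging shown here are illustrative, not required by the API:
```go
import (
    "io"
    "log"
    "net/http"
    "strings"
)

// handleJob is the catch-all handler registered for the /job/ prefix above.
func handleJob(w http.ResponseWriter, r *http.Request) {
    // The job name is the path segment following the /job/ prefix.
    jobName := strings.TrimPrefix(r.URL.Path, "/job/")

    // Read the payload that was provided when the job was scheduled
    // (assumed here to arrive as the request body).
    payload, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    defer r.Body.Close()

    log.Printf("job %q triggered with payload: %s", jobName, payload)

    // Run the business logic for this job, then respond with a success status.
    w.WriteHeader(http.StatusOK)
}
```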
#### gRPC
When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
callback function:
*Note: The following example is in Go but applies to any programming language with gRPC support.*
```go
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
...
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
    // Handle the triggered job
}
```
This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
you register the callback server, which will invoke this function when a job is triggered:
```go
...
js := &JobService{}
rtv1.RegisterAppCallbackAlphaServer(server, js)
```
In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
through this gRPC method.
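Putting the two fragments together, a minimal standalone server could look like the sketch below. The listen address and the embedded `UnimplementedAppCallbackAlphaServer` are assumptions for illustration; check the generated proto code and the app port your Dapr sidecar is configured with for the exact names and values in your setup:
```go
package main

import (
    "context"
    "log"
    "net"

    rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
    "google.golang.org/grpc"
)

// JobService receives triggered jobs from the Dapr sidecar.
type JobService struct {
    // Embedding the unimplemented server satisfies the rest of the
    // AppCallbackAlpha interface (assumed to be generated by the proto tooling).
    rtv1.UnimplementedAppCallbackAlphaServer
}

func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
    // Handle the triggered job here.
    return &rtv1.JobEventResponse{}, nil
}

func main() {
    // The port must match the app port the Dapr sidecar is configured with.
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    server := grpc.NewServer()
    rtv1.RegisterAppCallbackAlphaServer(server, &JobService{})

    if err := server.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
```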
#### SDKs
For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the
event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this:
```go
...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
    log.Fatalf("failed to register job event handler: %v", err)
}
```
Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with
the triggered job data. Here's an example of handling the triggered job:
```go
// ...
@@ -144,4 +213,4 @@ dapr run --app-id=distributed-scheduler \
## Next steps
- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})