Update jobs-run-to-completion.md (#7843)
* Clarified the roles of backoffLimit and activeDeadlineSeconds
* Updated formatting
* Clarified activeDeadlineSeconds based on PR comments
parent f2bc498fd0
commit 60fe4c8102
@@ -209,17 +209,19 @@ When a Job completes, no more Pods are created, but the Pods are not deleted either.
they don't show up with `kubectl get pods`, but they will show up with `kubectl get pods -a`. Keeping them around
allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output.
The job object also remains after it is completed so that you can view its status. It is up to the user to delete
old jobs after noting their status. Delete the job with `kubectl` (e.g. `kubectl delete jobs/pi` or `kubectl delete -f ./job.yaml`). When you delete the job using `kubectl`, all the pods it created are deleted too.

If a Job's pods are failing repeatedly, the Job will keep creating new pods forever, by default.
Retrying forever can be a useful pattern. If an external dependency of the Job's
pods is missing (for example, an input file on a networked storage volume is not present), then the
Job will keep retrying Pods, and when you later resolve the external dependency (for example, creating
the missing file), the Job will then complete without any further action.

By default, a Job will run uninterrupted unless a Pod fails, at which point the Job defers to the
`.spec.backoffLimit` described above. Another way to terminate a Job is by setting an active deadline.
Do this by setting the `.spec.activeDeadlineSeconds` field of the Job to a number of seconds.

The `activeDeadlineSeconds` applies to the duration of the job, no matter how many Pods are created.
Once a Job reaches `activeDeadlineSeconds`, the Job and all of its Pods are terminated.
The result is that the job has a status with `reason: DeadlineExceeded`.

Note that a Job's `.spec.activeDeadlineSeconds` takes precedence over its `.spec.backoffLimit`. Therefore, a Job that is retrying one or more failed Pods will not deploy additional Pods once it reaches the time limit specified by `activeDeadlineSeconds`, even if the `backoffLimit` is not yet reached.

Example:
```yaml
apiVersion: batch/v1
@@ -238,8 +240,8 @@ spec:
restartPolicy: Never
```
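Once the deadline passes, the Job reports a `Failed` condition with `reason: DeadlineExceeded`, which you can inspect with `kubectl get job <job-name> -o yaml` or `kubectl describe job <job-name>`. As an abridged, illustrative sketch (field values here are representative, not exact controller output), the resulting status looks roughly like:

```yaml
# Abridged, illustrative Job status after exceeding .spec.activeDeadlineSeconds;
# the exact message text may differ between Kubernetes versions.
status:
  conditions:
  - type: Failed
    status: "True"
    reason: DeadlineExceeded
    message: Job was active longer than specified deadline
```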

Note that both the Job Spec and the [Pod Template Spec](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#detailed-behavior) within the Job have an `activeDeadlineSeconds` field. Ensure that you set this field at the proper level.
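For instance, here is a minimal sketch (the Job name, container, and command are hypothetical, chosen only for illustration) showing the deadline set at the Job level, directly under the Job's `.spec`, rather than inside the Pod template's `spec`:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: deadline-demo              # hypothetical name, for illustration only
spec:
  activeDeadlineSeconds: 120       # Job-level deadline: applies to the Job as a whole
  backoffLimit: 5
  template:
    spec:
      # PodSpec also defines its own activeDeadlineSeconds field, but setting it
      # there would only limit an individual Pod, not the Job.
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 600"]
      restartPolicy: Never
```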
## Job Patterns