From f9a2afa4aaedf2007561fecd3b814eb91c6d9d72 Mon Sep 17 00:00:00 2001
From: Patrick Liu
Date: Thu, 26 Oct 2017 00:28:54 -0500
Subject: [PATCH] Fix `backoffLimit` field misplacement (#6042)

It should be placed in JobSpec according to:
https://github.com/kubernetes/kubernetes/blob/master/api/swagger-spec/batch_v1.json#L1488-L1514
---
 docs/concepts/workloads/controllers/jobs-run-to-completion.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/concepts/workloads/controllers/jobs-run-to-completion.md b/docs/concepts/workloads/controllers/jobs-run-to-completion.md
index c04f0febfe..6621b3b17d 100644
--- a/docs/concepts/workloads/controllers/jobs-run-to-completion.md
+++ b/docs/concepts/workloads/controllers/jobs-run-to-completion.md
@@ -199,7 +199,7 @@ multiple pods running at once. Therefore, your pods must also be tolerant of co
 ### Pod Backoff failure policy
 There are situations where you want to fail a Job after some amount of retries
 due to a logical error in configuration etc.
-To do so set `.spec.template.spec.backoffLimit` to specify the number of retries before considering a Job as failed.
+To do so set `.spec.backoffLimit` to specify the number of retries before considering a Job as failed.
 The back-off limit is set by default to 6. Failed Pods associated with the Job are recreated by the Job controller with an exponential back-off delay (10s, 20s, 40s ...) capped at six minutes, The back-off limit is reset if no new failed Pods appear before the Job's next status check.
 
 ## Job Termination and Cleanup
@@ -226,6 +226,7 @@ kind: Job
 metadata:
   name: pi-with-timeout
 spec:
+  backoffLimit: 5
   activeDeadlineSeconds: 100
   template:
     metadata:
@@ -236,7 +237,6 @@ spec:
       containers:
       - name: pi
         image: perl
         command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
       restartPolicy: Never
-  backoffLimit: 5
 ```

Note that both the Job Spec and the Pod Template Spec within the Job have a field with the same name.
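For context, the hunks above move `backoffLimit` from the Pod template's spec up to the JobSpec level, where it is a sibling of `activeDeadlineSeconds`. A minimal sketch of the manifest after the patch is applied (the `pi-with-timeout` name and field values are taken from the patched example; `apiVersion: batch/v1` is the standard Job API group, not shown in the hunks):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout
spec:
  backoffLimit: 5            # JobSpec field: retries before the Job is considered failed
  activeDeadlineSeconds: 100 # JobSpec field: deadline for the Job as a whole
  template:                  # PodTemplateSpec; backoffLimit does not belong below this line
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```

This placement matters because the Pod template's spec is a PodSpec, which has no `backoffLimit` field; misplacing it there would be rejected (or silently ignored, depending on validation) rather than limiting Job retries.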