From 16b00693fcb49686e6261a6439bf0598666fc25d Mon Sep 17 00:00:00 2001
From: Deirdre Storck
Date: Fri, 29 Jun 2018 18:37:20 -0700
Subject: [PATCH] added link to spec.minReadySeconds and maxUnavailable for
 clarity (#9194)

* added link to minReadySeconds

* link for max unavailable

* used relative path
---
 .../tasks/manage-daemon/update-daemon-set.md | 47 +++++++++----------
 1 file changed, 22 insertions(+), 25 deletions(-)

diff --git a/content/en/docs/tasks/manage-daemon/update-daemon-set.md b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
index 9650d05ed0..564f8a8fbb 100644
--- a/content/en/docs/tasks/manage-daemon/update-daemon-set.md
+++ b/content/en/docs/tasks/manage-daemon/update-daemon-set.md
@@ -7,7 +7,7 @@ content_template: templates/task
 
 {{% capture overview %}}
 
-This page shows how to perform a rolling update on a DaemonSet. 
+This page shows how to perform a rolling update on a DaemonSet.
 
 {{% /capture %}}
 
@@ -32,32 +32,32 @@ DaemonSet has two update strategy types:
   before.
 * RollingUpdate: With `RollingUpdate` update strategy, after you update a
   DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods
-  will be created automatically, in a controlled fashion. 
+  will be created automatically, in a controlled fashion.
 
 ## Performing a Rolling Update
 
 To enable the rolling update feature of a DaemonSet, you must set its
-`.spec.updateStrategy.type` to `RollingUpdate`. 
+`.spec.updateStrategy.type` to `RollingUpdate`.
 
-You may want to set `.spec.updateStrategy.rollingUpdate.maxUnavailable` (default
-to 1) and `.spec.minReadySeconds` (default to 0) as well.
+You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
+to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.
 
 ### Step 1: Checking DaemonSet `RollingUpdate` update strategy
 
-First, check the update strategy of your DaemonSet, and make sure it's set to 
+First, check the update strategy of your DaemonSet, and make sure it's set to
 `RollingUpdate`:
 
 ```shell
 kubectl get ds/<daemonset-name> -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
-``` 
+```
 
 If you haven't created the DaemonSet in the system, check your DaemonSet
 manifest with the following command instead:
 
 ```shell
 kubectl create -f ds.yaml --dry-run -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
-``` 
+```
 
 The output from both commands should be:
@@ -80,13 +80,13 @@ kubectl create -f ds.yaml
 ```
 
 Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
-update the DaemonSet with `kubectl apply`. 
+update the DaemonSet with `kubectl apply`.
 
 ```shell
 kubectl apply -f ds.yaml
 ```
 
-### Step 3: Updating a DaemonSet template 
+### Step 3: Updating a DaemonSet template
 
 Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling
 update. This can be done with several different `kubectl` commands.
@@ -115,7 +115,7 @@ kubectl edit ds/<daemonset-name>
 kubectl patch ds/<daemonset-name> -p=<strategic-merge-patch>
 ```
 
-##### Updating only the container image 
+##### Updating only the container image
 
 If you just need to update the container image in the DaemonSet template, i.e.
 `.spec.template.spec.containers[*].image`, use `kubectl set image`:
@@ -128,8 +128,8 @@ kubectl set image ds/<daemonset-name> <container-name>=<container-new-image>
 
 Finally, watch the rollout status of the latest DaemonSet rolling update:
 
-```shell 
-kubectl rollout status ds/<daemonset-name> 
+```shell
+kubectl rollout status ds/<daemonset-name>
 ```
 
 When the rollout is complete, the output is similar to this:
@@ -138,7 +138,7 @@ When the rollout is complete, the output is similar to this:
 daemonset "<daemonset-name>" successfully rolled out
 ```
 
-## Troubleshooting 
+## Troubleshooting
 
 ### DaemonSet rolling update is stuck
 
@@ -148,20 +148,20 @@ causes:
 
 #### Some nodes run out of resources
 
 The rollout is stuck because new DaemonSet pods can't be scheduled on at least one
-node. This is possible when the node is 
+node. This is possible when the node is
 [running out of resources](/docs/tasks/administer-cluster/out-of-resource/).
 When this happens, find the nodes that don't have the DaemonSet pods scheduled on
 by comparing the output of `kubectl get nodes` and the output of:
 
 ```shell
-kubectl get pods -l <daemonset-label-key>=<daemonset-label-value> -o wide 
+kubectl get pods -l <daemonset-label-key>=<daemonset-label-value> -o wide
 ```
 
 Once you've found those nodes, delete some non-DaemonSet pods from the node to
-make room for new DaemonSet pods. Note that this will cause service disruption 
+make room for new DaemonSet pods. Note that this will cause service disruption
 if the deleted pods are not controlled by any controllers, or if the pods aren't
-replicated. This doesn't respect 
+replicated. This doesn't respect
 [PodDisruptionBudget](/docs/tasks/configure-pod-container/configure-pod-disruption-budget/)
 either.
 
@@ -169,16 +169,16 @@ either.
 
 If the recent DaemonSet template update is broken, for example, the container
 is crash looping, or the container image doesn't exist (often due to a typo),
-DaemonSet rollout won't progress. 
+DaemonSet rollout won't progress.
 
 To fix this, just update the DaemonSet template again. New rollout won't be
-blocked by previous unhealthy rollouts. 
+blocked by previous unhealthy rollouts.
 
 #### Clock skew
 
-If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between 
+If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between
 master and nodes will make DaemonSet unable to detect the right rollout
-progress. 
+progress.
 
 {{% /capture %}}
@@ -191,6 +191,3 @@ progress.
 * See [Concepts: Creating a DaemonSet to adopt existing DaemonSet pods](/docs/concepts/workloads/controllers/daemonset/)
 
 {{% /capture %}}
-
-
-
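Editor's note (not part of the patch): the two fields this change links to live at different levels of the DaemonSet spec, which the linked doc text only states in path notation. A minimal sketch of where they sit in a manifest, assuming a hypothetical DaemonSet named `example-ds` (all names and the image below are illustrative, not from the patch):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-ds              # hypothetical name, for illustration only
spec:
  minReadySeconds: 0            # .spec.minReadySeconds (default 0): seconds a new pod
                                # must be Ready before it is counted as available
  updateStrategy:
    type: RollingUpdate         # enables the rolling update behavior this page covers
    rollingUpdate:
      maxUnavailable: 1         # .spec.updateStrategy.rollingUpdate.maxUnavailable (default 1)
  selector:
    matchLabels:
      name: example-ds
  template:
    metadata:
      labels:
        name: example-ds
    spec:
      containers:
      - name: example           # hypothetical container name
        image: example:1.0      # hypothetical image; updating it triggers a rolling update
```

With this strategy set, applying a template change (for example a new `image`) replaces pods node by node, keeping no more than `maxUnavailable` DaemonSet pods unavailable at any time.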