diff --git a/content/en/docs/tasks/run-application/configure-pdb.md b/content/en/docs/tasks/run-application/configure-pdb.md
index 99ab27ab5d..ed3eebe627 100644
--- a/content/en/docs/tasks/run-application/configure-pdb.md
+++ b/content/en/docs/tasks/run-application/configure-pdb.md
@@ -50,7 +50,9 @@ specified by one of the built-in Kubernetes controllers:

In this case, make a note of the controller's `.spec.selector`; the same selector goes into the PDBs `.spec.selector`.

-From version 1.15 PDBs support custom controllers where the [scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource) is enabled.
+From version 1.15, PDBs support custom controllers where the
+[scale subresource](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource)
+is enabled.

You can also use PDBs with pods which are not controlled by one of the above
controllers, or arbitrary groups of pods, but there are some restrictions,
@@ -74,7 +76,8 @@ due to a voluntary disruption.
- Multiple-instance Stateful application such as Consul, ZooKeeper, or etcd:
  - Concern: Do not reduce number of instances below quorum, otherwise writes fail.
  - Possible Solution 1: set maxUnavailable to 1 (works with varying scale of application).
-  - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5). (Allows more disruptions at once).
+  - Possible Solution 2: set minAvailable to quorum-size (e.g. 3 when scale is 5).
+    (Allows more disruptions at once).
- Restartable Batch Job:
  - Concern: Job needs to complete in case of voluntary disruption.
  - Possible solution: Do not create a PDB. The Job controller will create a replacement pod.
@@ -83,17 +86,20 @@ due to a voluntary disruption.
Values for `minAvailable` or `maxUnavailable` can be expressed as integers or as a percentage.

-- When you specify an integer, it represents a number of Pods. For instance, if you set `minAvailable` to 10, then 10
-  Pods must always be available, even during a disruption.
-- When you specify a percentage by setting the value to a string representation of a percentage (eg. `"50%"`), it represents a percentage of
-  total Pods. For instance, if you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available during a
-  disruption.
+- When you specify an integer, it represents a number of Pods. For instance, if you set
+  `minAvailable` to 10, then 10 Pods must always be available, even during a disruption.
+- When you specify a percentage by setting the value to a string representation of a
+  percentage (e.g. `"50%"`), it represents a percentage of total Pods. For instance, if
+  you set `minAvailable` to `"50%"`, then at least 50% of the Pods remain available
+  during a disruption.

-When you specify the value as a percentage, it may not map to an exact number of Pods. For example, if you have 7 Pods and
-you set `minAvailable` to `"50%"`, it's not immediately obvious whether that means 3 Pods or 4 Pods must be available.
-Kubernetes rounds up to the nearest integer, so in this case, 4 Pods must be available. When you specify the value
-`maxUnavailable` as a percentage, Kubernetes rounds up the number of Pods that may be disrupted. Thereby a disruption
-can exceed your defined `maxUnavailable` percentage. You can examine the
+When you specify the value as a percentage, it may not map to an exact number of Pods.
+For example, if you have 7 Pods and you set `minAvailable` to `"50%"`, it's not
+immediately obvious whether that means 3 Pods or 4 Pods must be available. Kubernetes
+rounds up to the nearest integer, so in this case, 4 Pods must be available. When you
+specify the value `maxUnavailable` as a percentage, Kubernetes rounds up the number of
+Pods that may be disrupted. As a result, a disruption can exceed your defined
+`maxUnavailable` percentage. You can examine the
[code](https://github.com/kubernetes/kubernetes/blob/23be9587a0f8677eb8091464098881df939c44a9/pkg/controller/disruption/disruption.go#L539)
that controls this behavior.
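For illustration, a minimal PodDisruptionBudget that expresses its budget as a percentage might look like the sketch below; the `zk-pdb` name and the `app: zookeeper` selector are placeholders in the style of the page's own examples:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  # With 7 matching Pods, "50%" rounds up: at most 4 Pods may be disrupted voluntarily.
  maxUnavailable: "50%"
  selector:
    matchLabels:
      app: zookeeper
```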
@@ -151,8 +157,8 @@ voluntary evictions, not all causes of unavailability.
If you set `maxUnavailable` to 0% or 0, or you set `minAvailable` to 100% or the number of replicas,
you are requiring zero voluntary evictions. When you set zero voluntary evictions for a workload
object such as ReplicaSet, then you cannot successfully drain a Node running one of those Pods.
-If you try to drain a Node where an unevictable Pod is running, the drain never completes. This is permitted as per the
-semantics of `PodDisruptionBudget`.
+If you try to drain a Node where an unevictable Pod is running, the drain never completes.
+This is permitted as per the semantics of `PodDisruptionBudget`.

You can find examples of pod disruption budgets defined below. They match pods with the label
`app: zookeeper`.
@@ -229,7 +235,8 @@ status:

### Healthiness of a Pod

-The current implementation considers healthy pods, as pods that have `.status.conditions` item with `type="Ready"` and `status="True"`.
+The current implementation considers pods to be healthy if they have a `.status.conditions`
+item with `type="Ready"` and `status="True"`.
These pods are tracked via `.status.currentHealthy` field in the PDB status.

## Unhealthy Pod Eviction Policy

@@ -251,22 +258,26 @@ to the `IfHealthyBudget` policy.
Policies:

`IfHealthyBudget`
-: Running pods (`.status.phase="Running"`), but not yet healthy can be evicted only if the guarded application is not
-disrupted (`.status.currentHealthy` is at least equal to `.status.desiredHealthy`).
+: Running pods (`.status.phase="Running"`) that are not yet healthy can be evicted only
+  if the guarded application is not disrupted (`.status.currentHealthy` is at least
+  equal to `.status.desiredHealthy`).

-: This policy ensures that running pods of an already disrupted application have the best chance to become healthy.
-This has negative implications for draining nodes, which can be blocked by misbehaving applications that are guarded by a PDB.
-More specifically applications with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration),
-or pods that are just failing to report the `Ready` condition.
+: This policy ensures that running pods of an already disrupted application have
+  the best chance to become healthy. This has negative implications for draining
+  nodes, which can be blocked by misbehaving applications that are guarded by a PDB.
+  More specifically, applications with pods in `CrashLoopBackOff` state
+  (due to a bug or misconfiguration), or pods that are just failing to report the
+  `Ready` condition.

`AlwaysAllow`
-: Running pods (`.status.phase="Running"`), but not yet healthy are considered disrupted and can be evicted
-regardless of whether the criteria in a PDB is met.
+: Running pods (`.status.phase="Running"`) that are not yet healthy are considered
+  disrupted and can be evicted regardless of whether the criteria in a PDB are met.

-: This means prospective running pods of a disrupted application might not get a chance to become healthy.
-By using this policy, cluster managers can easily evict misbehaving applications that are guarded by a PDB.
-More specifically applications with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration),
-or pods that are just failing to report the `Ready` condition.
+: This means prospective running pods of a disrupted application might not get a
+  chance to become healthy. By using this policy, cluster managers can easily evict
+  misbehaving applications that are guarded by a PDB. More specifically, applications
+  with pods in `CrashLoopBackOff` state (due to a bug or misconfiguration), or pods
+  that are just failing to report the `Ready` condition.

{{< note >}}
Pods in `Pending`, `Succeeded` or `Failed` phase are always considered for eviction.
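The policy is selected through the PDB's optional `unhealthyPodEvictionPolicy` field. A minimal sketch, with placeholder name, selector, and `minAvailable` value:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
  # Optional field; when omitted, eviction of unhealthy pods follows IfHealthyBudget.
  unhealthyPodEvictionPolicy: AlwaysAllow
```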
diff --git a/content/en/docs/tasks/run-application/delete-stateful-set.md b/content/en/docs/tasks/run-application/delete-stateful-set.md
index 41e6ddd970..e8cd5c3398 100644
--- a/content/en/docs/tasks/run-application/delete-stateful-set.md
+++ b/content/en/docs/tasks/run-application/delete-stateful-set.md
@@ -22,7 +22,8 @@ This task shows you how to delete a {{< glossary_tooltip term_id="StatefulSet" >}}.

## Deleting a StatefulSet

-You can delete a StatefulSet in the same way you delete other resources in Kubernetes: use the `kubectl delete` command, and specify the StatefulSet either by file or by name.
+You can delete a StatefulSet in the same way you delete other resources in Kubernetes:
+use the `kubectl delete` command, and specify the StatefulSet either by file or by name.

```shell
kubectl delete -f <file.yaml>
```

@@ -38,14 +39,17 @@ You may need to delete the associated headless service separately after the StatefulSet itself is deleted.
kubectl delete service <service-name>
```

-When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0. All Pods that are part of this workload are also deleted. If you want to delete only the StatefulSet and not the Pods, use `--cascade=orphan`.
-For example:
+When deleting a StatefulSet through `kubectl`, the StatefulSet scales down to 0.
+All Pods that are part of this workload are also deleted. If you want to delete
+only the StatefulSet and not the Pods, use `--cascade=orphan`. For example:

```shell
kubectl delete -f <file.yaml> --cascade=orphan
```

-By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet are left behind even after the StatefulSet object itself is deleted. If the pods have a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows:
+By passing `--cascade=orphan` to `kubectl delete`, the Pods managed by the StatefulSet
+are left behind even after the StatefulSet object itself is deleted. If the pods have
+a label `app.kubernetes.io/name=MyApp`, you can then delete them as follows:

```shell
kubectl delete pods -l app.kubernetes.io/name=MyApp
@@ -53,7 +57,12 @@ kubectl delete pods -l app.kubernetes.io/name=MyApp
```

### Persistent Volumes

-Deleting the Pods in a StatefulSet will not delete the associated volumes. This is to ensure that you have the chance to copy data off the volume before deleting it. Deleting the PVC after the pods have terminated might trigger deletion of the backing Persistent Volumes depending on the storage class and reclaim policy. You should never assume ability to access a volume after claim deletion.
+Deleting the Pods in a StatefulSet will not delete the associated volumes.
+This is to ensure that you have the chance to copy data off the volume before
+deleting it. Deleting the PVC after the pods have terminated might trigger
+deletion of the backing Persistent Volumes depending on the storage class
+and reclaim policy. You should never assume the ability to access a volume
+after claim deletion.

{{< note >}}
Use caution when deleting a PVC, as it may lead to data loss.
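Before deleting any claim, it can help to see which PVCs and backing PersistentVolumes would be affected. A possible check, assuming the Pods carry the `app.kubernetes.io/name=MyApp` label used in these examples:

```shell
# List the PersistentVolumeClaims that belong to the application
kubectl get pvc -l app.kubernetes.io/name=MyApp

# Inspect the reclaim policy of the bound PersistentVolumes before deleting a claim
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
```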
@@ -61,7 +70,8 @@ Use caution when deleting a PVC, as it may lead to data loss.

### Complete deletion of a StatefulSet

-To delete everything in a StatefulSet, including the associated pods, you can run a series of commands similar to the following:
+To delete everything in a StatefulSet, including the associated pods,
+you can run a series of commands similar to the following:

```shell
grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')
@@ -71,11 +81,17 @@ kubectl delete pvc -l app.kubernetes.io/name=MyApp
```

-In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`; substitute your own label as appropriate.
+In the example above, the Pods have the label `app.kubernetes.io/name=MyApp`;
+substitute your own label as appropriate.

### Force deletion of StatefulSet pods

-If you find that some pods in your StatefulSet are stuck in the 'Terminating' or 'Unknown' states for an extended period of time, you may need to manually intervene to forcefully delete the pods from the apiserver. This is a potentially dangerous task. Refer to [Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/) for details.
+If you find that some pods in your StatefulSet are stuck in the 'Terminating'
+or 'Unknown' states for an extended period of time, you may need to manually
+intervene to forcefully delete the pods from the apiserver.
+This is a potentially dangerous task. Refer to
+[Force Delete StatefulSet Pods](/docs/tasks/run-application/force-delete-stateful-set-pod/)
+for details.

## {{% heading "whatsnext" %}}

diff --git a/content/en/docs/tasks/run-application/scale-stateful-set.md b/content/en/docs/tasks/run-application/scale-stateful-set.md
index 51ae43ccfd..025eb47a44 100644
--- a/content/en/docs/tasks/run-application/scale-stateful-set.md
+++ b/content/en/docs/tasks/run-application/scale-stateful-set.md
@@ -14,14 +14,17 @@ weight: 50

<!-- overview -->

-This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
+This task shows how to scale a StatefulSet. Scaling a StatefulSet refers to
+increasing or decreasing the number of replicas.

## {{% heading "prerequisites" %}}

- StatefulSets are only available in Kubernetes version 1.5 or later. To check your version of Kubernetes, run `kubectl version`.

-- Not all stateful applications scale nicely. If you are unsure about whether to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/) or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.
+- Not all stateful applications scale nicely. If you are unsure about whether
+  to scale your StatefulSets, see [StatefulSet concepts](/docs/concepts/workloads/controllers/statefulset/)
+  or [StatefulSet tutorial](/docs/tutorials/stateful-application/basic-stateful-set/) for further information.

- You should perform scaling only when you are confident that your stateful application cluster is completely healthy.

<!-- steps -->
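One way to confirm that the workload is healthy before changing the replica count is sketched below, using the same placeholder style as the commands on these pages:

```shell
# The READY column should report as many ready replicas as desired replicas
kubectl get statefulsets <stateful-set-name>

# For StatefulSets using a RollingUpdate strategy, wait until any rollout has settled
kubectl rollout status statefulset/<stateful-set-name>
```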
@@ -46,7 +49,9 @@ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>

### Make in-place updates on your StatefulSets

-Alternatively, you can do [in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources) on your StatefulSets.
+Alternatively, you can do
+[in-place updates](/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources)
+on your StatefulSets.

If your StatefulSet was initially created with `kubectl apply`, update `.spec.replicas`
of the StatefulSet manifests, and then do a `kubectl apply`:

@@ -71,10 +76,12 @@ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'

-If spec.replicas > 1, Kubernetes cannot determine the reason for an unhealthy Pod. It might be the result of a permanent fault or of a transient fault. A transient fault can be caused by a restart required by upgrading or maintenance.
+If spec.replicas > 1, Kubernetes cannot determine the reason for an unhealthy Pod.
+It might be the result of a permanent fault or of a transient fault. A transient
+fault can be caused by a restart required by upgrading or maintenance.

If the Pod is unhealthy due to a permanent fault, scaling without correcting the
fault may lead to a state where the StatefulSet membership
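When a scale-down is blocked by an unhealthy Pod, a first step might be to locate and inspect that Pod to judge whether the fault is transient or permanent; the label selector and Pod name here are placeholders:

```shell
# Find Pods of the application that are not Ready
kubectl get pods -l <your-app-label>

# Look at recent events and container state to judge whether the fault is transient
kubectl describe pod <pod-name>
```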