clarify behavior of topo constraints

chenxuc 2021-06-18 23:06:10 +08:00
parent 304bf1bc53
commit c6bf5d13c8
1 changed file with 7 additions and 5 deletions


@@ -82,12 +82,11 @@ spec:
You can define one or multiple `topologySpreadConstraint` to instruct the kube-scheduler how to place each incoming Pod in relation to the existing Pods across your cluster. The fields are:
- **maxSkew** describes the degree to which Pods may be unevenly distributed.
-  It's the maximum permitted difference between the number of matching Pods in
-  any two topology domains of a given topology type. It must be greater than
-  zero. Its semantics differs according to the value of `whenUnsatisfiable`:
+  It must be greater than zero. Its semantics differ according to the value of `whenUnsatisfiable`:
- when `whenUnsatisfiable` equals to "DoNotSchedule", `maxSkew` is the maximum
permitted difference between the number of matching pods in the target
-    topology and the global minimum.
+    topology and the global minimum
+    (the minimum number of pods that match the label selector in a topology domain. For example, if you have 3 zones with 0, 2 and 3 matching pods respectively, the global minimum is 0).
- when `whenUnsatisfiable` equals to "ScheduleAnyway", scheduler gives higher
precedence to topologies that would help reduce the skew.
- **topologyKey** is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.
@@ -96,6 +95,8 @@ You can define one or multiple `topologySpreadConstraint` to instruct the kube-s
- `ScheduleAnyway` tells the scheduler to still schedule it while prioritizing nodes that minimize the skew.
- **labelSelector** is used to find matching Pods. Pods that match this label selector are counted to determine the number of Pods in their corresponding topology domain. See [Label Selectors](/docs/concepts/overview/working-with-objects/labels/#label-selectors) for more details.
+When a Pod defines more than one `topologySpreadConstraint`, those constraints are ANDed: The kube-scheduler looks for a node for the incoming Pod that satisfies all the constraints.
+You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
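For illustration, the sketch below combines two such constraints in one Pod. The Pod name, the `app: demo` label, and the pause image are placeholder assumptions, while `topology.kubernetes.io/zone` and `kubernetes.io/hostname` are the well-known node labels typically used as topology keys.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo                     # matched by the labelSelector below
spec:
  topologySpreadConstraints:
  # Hard requirement: keep the zone-to-zone skew of matching Pods within 1.
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: demo
  # Soft preference: also try to spread matching Pods across individual nodes.
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

Because the constraints are ANDed, a node must satisfy the hard zone constraint first; among the nodes that do, the scheduler prefers those that also reduce the per-node skew.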
### Example: One TopologySpreadConstraint
@@ -387,7 +388,8 @@ for more details.
## Known Limitations
-- Scaling down a Deployment may result in imbalanced Pods distribution.
+- There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in an imbalanced Pod distribution.
+  You can use [Descheduler](https://github.com/kubernetes-sigs/descheduler) to rebalance the Pod distribution (see the sketch after this list).
- Pods that match the selector but run on tainted nodes are still respected (that is, counted when computing the skew). See [Issue 80921](https://github.com/kubernetes/kubernetes/issues/80921).
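As a rough, version-dependent sketch only: the `descheduler/v1alpha1` policy schema, the `RemovePodsViolatingTopologySpreadConstraint` strategy name, and the `includeSoftConstraints` parameter below are assumptions to verify against the Descheduler release you deploy. A policy that asks the Descheduler to evict Pods violating hard spread constraints could look like this:

```yaml
# Assumed DeschedulerPolicy layout; confirm the schema for your Descheduler version.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      includeSoftConstraints: false   # only act on DoNotSchedule (hard) constraints
```

Evicted Pods are recreated by their owning controller and go through scheduling again, which gives the topology spread constraints another chance to be satisfied.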
## {{% heading "whatsnext" %}}