update the documents of podTopologySpread

Signed-off-by: kerthcet <kerthcet@gmail.com>
kerthcet 2022-08-15 10:37:52 +08:00
parent acdef19888
commit d0e8a08ab1
2 changed files with 17 additions and 12 deletions

View File

@@ -48,7 +48,8 @@ Pod topology spread constraints offer you a declarative way to configure that.
## `topologySpreadConstraints` field
The Pod API includes a field, `spec.topologySpreadConstraints`. Here is an example:
The Pod API includes a field, `spec.topologySpreadConstraints`. The usage of this field looks like
the following:
```yaml
---
@@ -67,7 +68,8 @@ spec:
### other Pod fields go here
```
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints`.
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints` or by referring to the
[scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
### Spread constraint definition
@@ -82,9 +84,9 @@ your cluster. Those fields are:
- if you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the
maximum permitted difference between the number of matching pods in the target
topology and the _global minimum_
(the minimum number of pods that match the label selector in a topology domain).
For example, if you have 3 zones with 2, 4 and 5 matching pods respectively,
then the global minimum is 2 and `maxSkew` is compared relative to that number.
(the minimum number of matching pods in an eligible domain, or zero if the number of eligible domains is less than `minDomains`).
For example, if you have 3 zones with 2, 2 and 1 matching pods respectively, and `maxSkew` is set to 1,
then the global minimum is 1 and the skew in each zone is compared against that number (see the sketch after this field list).
- if you select `whenUnsatisfiable: ScheduleAnyway`, the scheduler gives higher
precedence to topologies that would help reduce the skew.
@@ -108,10 +110,13 @@ your cluster. Those fields are:
`minDomains`, this value has no effect on scheduling.
- If you do not specify `minDomains`, the constraint behaves as if `minDomains` is 1.
- **topologyKey** is the key of [node labels](#node-labels). If two Nodes are labelled
with this key and have identical values for that label, the scheduler treats both
Nodes as being in the same topology. The scheduler tries to place a balanced number
of Pods into each topology domain.
- **topologyKey** is the key of [node labels](#node-labels). Nodes that have a label with this key
and identical values are considered to be in the same topology.
We consider each <key, value> pair as a "bucket", and try to put a balanced number
of Pods into each bucket.
We define a domain as a particular instance of a topology.
Also, we define an eligible domain as a domain whose nodes meet the requirements of
`nodeAffinityPolicy` and `nodeTaintsPolicy`.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
- `DoNotSchedule` (default) tells the scheduler not to schedule it.
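Putting the fields above together, here is a minimal sketch of a Pod manifest that spreads matching Pods across zones. It is an illustration only, not part of this change: the Pod name, the `app: example` label, and the pause image are assumed values, while `topology.kubernetes.io/zone` is the standard well-known zone label.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name, for illustration only
  labels:
    app: example           # assumed label; the labelSelector below counts Pods with this label
spec:
  topologySpreadConstraints:
    - maxSkew: 1                                # a zone may exceed the global minimum by at most 1 matching Pod
      topologyKey: topology.kubernetes.io/zone  # each distinct zone value is one "bucket" (domain)
      whenUnsatisfiable: DoNotSchedule          # reject placements that would violate maxSkew
      labelSelector:
        matchLabels:
          app: example
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.8
```
Applied to the 3-zone example above (2, 2 and 1 matching pods, so a global minimum of 1), a new matching Pod could only land in the zone that currently has 1 Pod; placing it in either zone with 2 would raise that zone's skew to 2, exceeding `maxSkew`.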

View File

@@ -995,8 +995,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
NUMA topology.
- `MemoryQoS`: Enable memory protection and usage throttle on pod / container using
cgroup v2 memory controller.
- `MinDomainsInPodTopologySpread`: Enable `minDomains` in Pod
[topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
- `MinDomainsInPodTopologySpread`: Enable `minDomains` in
[Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
Service instance.
- `MountContainers`: Enable using utility containers on host as the volume mounter.
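As a usage note for the `MinDomainsInPodTopologySpread` entry above, here is a hedged sketch of a spread constraint that sets `minDomains`; the values are assumptions for illustration, and the field is only honored while the feature gate is enabled. `minDomains` can only be specified together with `whenUnsatisfiable: DoNotSchedule`.
```yaml
# Pod spec fragment, for illustration only.
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      minDomains: 3                      # assumed value: with fewer than 3 eligible zones, the global minimum is treated as 0
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule   # minDomains requires DoNotSchedule
      labelSelector:
        matchLabels:
          app: example                   # assumed label
```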