update the documents of podTopologySpread
Signed-off-by: kerthcet <kerthcet@gmail.com>
parent acdef19888
commit d0e8a08ab1
@@ -48,7 +48,8 @@ Pod topology spread constraints offer you a declarative way to configure that.
## `topologySpreadConstraints` field
The Pod API includes a field, `spec.topologySpreadConstraints`. The usage of this field looks like
the following:
```yaml
---
@@ -67,7 +68,8 @@ spec:
  ### other Pod fields go here
```
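The body of the manifest is elided between the hunks above. As a hedged sketch only, a complete Pod using this field might look like the following; the pod name, the `app: my-app` label, the zone key, and the image are illustrative assumptions, not content from the elided hunk:

```yaml
# Illustrative sketch: names, labels, and image are assumptions,
# not taken from the elided part of the diff.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: my-app
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app
  containers:
    - name: app
      image: registry.k8s.io/pause:3.8
```

Applied with `kubectl apply -f`, a Pod like this would be placed so that, across zones, the counts of pods matching `app: my-app` differ by at most 1.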
You can read more about this field by running `kubectl explain Pod.spec.topologySpreadConstraints` or
by referring to the [scheduling](/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) section of the API reference for Pod.
### Spread constraint definition

@@ -82,9 +84,9 @@ your cluster. Those fields are:
- if you select `whenUnsatisfiable: DoNotSchedule`, then `maxSkew` defines the
  maximum permitted difference between the number of matching pods in the target
  topology and the _global minimum_
  (the minimum number of matching pods in an eligible domain, or zero if the number
  of eligible domains is less than `minDomains`).
  For example, if you have 3 zones with 2, 2 and 1 matching pods respectively,
  and `maxSkew` is set to 1, then the global minimum is 1.
- if you select `whenUnsatisfiable: ScheduleAnyway`, the scheduler gives higher
  precedence to topologies that would help reduce the skew.
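The 3-zone example above can be written down as a constraint. A minimal sketch, assuming the standard zone label and an illustrative `app: my-app` selector:

```yaml
topologySpreadConstraints:
  - maxSkew: 1                  # permitted difference from the global minimum
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app             # illustrative; only pods matching this selector are counted
```

With zone counts of 2, 2 and 1 the global minimum is 1, so a new matching pod can only land in the zone with 1 pod: a zone growing from 2 to 3 would have skew 3 - 1 = 2, which exceeds `maxSkew: 1`.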
@@ -108,10 +110,13 @@ your cluster. Those fields are:
  `minDomains`, this value has no effect on scheduling.
- If you do not specify `minDomains`, the constraint behaves as if `minDomains` is 1.
- **topologyKey** is the key of [node labels](#node-labels). Nodes that have a label with this key
  and identical values are considered to be in the same topology.
  We consider each <key, value> as a "bucket", and try to put a balanced number
  of pods into each bucket.
  We define a domain as a particular instance of a topology.
  Also, we define an eligible domain as a domain whose nodes meet the requirements of
  `nodeAffinityPolicy` and `nodeTaintsPolicy`.
- **whenUnsatisfiable** indicates how to deal with a Pod if it doesn't satisfy the spread constraint:
  - `DoNotSchedule` (default) tells the scheduler not to schedule it.
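Put together, the fields described in this hunk might appear in one constraint like the sketch below. All values are illustrative, and the `Honor` settings assume the `nodeAffinityPolicy`/`nodeTaintsPolicy` fields behave as the eligible-domain definition above describes:

```yaml
topologySpreadConstraints:
  - maxSkew: 1
    minDomains: 3                       # only honored together with DoNotSchedule
    topologyKey: topology.kubernetes.io/zone  # each <key, value> pair is one "bucket" (domain)
    whenUnsatisfiable: DoNotSchedule
    nodeAffinityPolicy: Honor           # count only nodes matching the Pod's node affinity as eligible
    nodeTaintsPolicy: Honor             # count only nodes whose taints the Pod tolerates as eligible
    labelSelector:
      matchLabels:
        app: my-app                     # illustrative selector
```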
@@ -995,8 +995,8 @@ Each feature gate is designed for enabling/disabling a specific feature:
  NUMA topology.
- `MemoryQoS`: Enable memory protection and usage throttle on pod / container using
  cgroup v2 memory controller.
- `MinDomainsInPodTopologySpread`: Enable `minDomains` in
  [Pod topology spread constraints](/docs/concepts/scheduling-eviction/topology-spread-constraints/).
- `MixedProtocolLBService`: Enable using different protocols in the same `LoadBalancer` type
  Service instance.
- `MountContainers`: Enable using utility containers on host as the volume mounter.
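Because `MinDomainsInPodTopologySpread` is a feature gate, it must be enabled before `minDomains` takes effect. A sketch of the flag; exact component invocation varies by install method, and the gate typically needs enabling on the API server as well so the new field is accepted:

```shell
# Illustrative only: enable the gate via the standard --feature-gates flag.
kube-scheduler --feature-gates=MinDomainsInPodTopologySpread=true
```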