From fb937306e498d6fd238e388058aa332c59d7f4cd Mon Sep 17 00:00:00 2001
From: xin gu <418294249@qq.com>
Date: Sun, 24 Mar 2024 20:49:40 +0800
Subject: [PATCH] sync kube-scheduler taint-and-toleration topology-spread-constraints

---
 .../docs/concepts/scheduling-eviction/kube-scheduler.md   | 2 +-
 .../concepts/scheduling-eviction/taint-and-toleration.md  | 4 ++--
 .../scheduling-eviction/topology-spread-constraints.md    | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md b/content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md
index b419e20f5d..54f3307ecf 100644
--- a/content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md
+++ b/content/zh-cn/docs/concepts/scheduling-eviction/kube-scheduler.md
@@ -122,7 +122,7 @@ kube-scheduler 给一个 Pod 做调度选择时包含两个步骤:

diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md b/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
--- a/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
+++ b/content/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration.md
 上述例子中 `effect` 使用的值为 `NoSchedule`，你也可以使用另外一个值 `PreferNoSchedule`。
@@ -389,7 +389,7 @@ are true. The following taints are built in:
 * `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
 * `node.kubernetes.io/unschedulable`: Node is unschedulable.
 * `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
-  with "external" cloud provider, this taint is set on a node to mark it
+  with an "external" cloud provider, this taint is set on a node to mark it
   as unusable. After a controller from the cloud-controller-manager initializes
   this node, the kubelet removes this taint.
 -->
diff --git a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md
index ef6cf4e31b..3a3db98d46 100644
--- a/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md
+++ b/content/zh-cn/docs/concepts/scheduling-eviction/topology-spread-constraints.md
@@ -496,7 +496,7 @@ can use a manifest similar to:

@@ -981,7 +981,7 @@ section of the enhancement proposal about Pod topology spread constraints.
   because, in this case, those topology domains won't be considered until there
   is at least one node in them.

-  You can work around this by using an cluster autoscaling tool that is aware of
+  You can work around this by using a cluster autoscaling tool that is aware of
   Pod topology spread constraints and is also aware of the overall set of
   topology domains.
 -->
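For context on the fields these pages discuss, here is a minimal, illustrative Pod manifest. It is not taken from the patched pages; the Pod name, labels, and image are placeholders. It shows a toleration for one of the built-in taints listed in the taint-and-toleration hunk and a Pod topology spread constraint of the kind the topology-spread-constraints page describes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name, for illustration only
  labels:
    app: example
spec:
  # Tolerate one of the built-in taints mentioned above.
  tolerations:
  - key: "node.kubernetes.io/network-unavailable"
    operator: "Exists"
    effect: "NoSchedule"
  # Spread Pods carrying the `app: example` label evenly across zones.
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: example
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

`PreferNoSchedule`, the other `effect` value the patch mentions, marks a taint as a soft preference: the scheduler tries to avoid placing non-tolerating Pods on the node but does not guarantee it.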