Merge pull request #28176 from chenrui333/zh/resync-scheduling-eviction-files

zh: resync scheduling files
Kubernetes Prow Robot 2021-06-05 06:26:38 -07:00 committed by GitHub
commit 04ea603058
7 changed files with 168 additions and 94 deletions


@@ -1,5 +1,53 @@
---
title: Scheduling and Eviction
title: Scheduling, Preemption and Eviction
weight: 90
description: In Kubernetes, scheduling refers to making sure that Pods are matched to suitable Nodes so that the kubelet can run them. Eviction is the process of proactively failing one or more Pods on resource-starved Nodes.
content_type: concept
description: >
  In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes
  so that the kubelet can run them. Preemption is the process of terminating
  lower-priority Pods so that higher-priority Pods can be scheduled. Eviction is
  the process of proactively failing one or more Pods on resource-starved Nodes.
---
<!--
---
title: "Scheduling, Preemption and Eviction"
weight: 90
content_type: concept
description: >
In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes
so that the kubelet can run them. Preemption is the process of terminating
Pods with lower Priority so that Pods with higher Priority can schedule on
Nodes. Eviction is the process of proactively terminating one or more Pods on
resource-starved Nodes.
no_list: true
---
-->
<!--
In Kubernetes, scheduling refers to making sure that {{<glossary_tooltip text="Pods" term_id="pod">}}
are matched to {{<glossary_tooltip text="Nodes" term_id="node">}} so that the
{{<glossary_tooltip text="kubelet" term_id="kubelet">}} can run them. Preemption
is the process of terminating Pods with lower {{<glossary_tooltip text="Priority" term_id="pod-priority">}}
so that Pods with higher Priority can schedule on Nodes. Eviction is the process
of terminating one or more Pods on Nodes.
-->
<!-- ## Scheduling -->
## Scheduling
* [Kubernetes Scheduler](/zh/docs/concepts/scheduling-eviction/kube-scheduler/)
* [Assigning Pods to Nodes](/zh/docs/concepts/scheduling-eviction/assign-pod-node/)
* [Pod Overhead](/zh/docs/concepts/scheduling-eviction/pod-overhead/)
* [Taints and Tolerations](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)
* [Scheduling Framework](/zh/docs/concepts/scheduling-eviction/scheduling-framework)
* [Scheduler Performance Tuning](/zh/docs/concepts/scheduling-eviction/scheduler-perf-tuning/)
* [Resource Bin Packing for Extended Resources](/zh/docs/concepts/scheduling-eviction/resource-bin-packing/)
<!-- ## Pod Disruption -->
## Pod Disruption
* [Pod Priority and Preemption](/zh/docs/concepts/scheduling-eviction/pod-priority-preemption/)
* [Node-pressure Eviction](/zh/docs/concepts/scheduling-eviction/node-pressure-eviction/)
* [API-initiated Eviction](/zh/docs/concepts/scheduling-eviction/api-eviction/)


@@ -136,8 +136,18 @@ with a standard set of labels. See [Well-Known Labels, Annotations and Taints](/
-->
## Interlude: built-in node labels {#built-in-node-labels}
In addition to the labels you [attach](#attach-labels-to-node), nodes come pre-populated with a standard set of labels.
See [Well-Known Labels, Annotations and Taints](/zh/docs/reference/labels-annotations-taints/).
In addition to the labels you [attach](#step-one-attach-label-to-the-node), nodes come pre-populated with a standard set of labels.
See these [well-known labels, annotations and taints](/zh/docs/reference/labels-annotations-taints/); a sketch that selects on one of them follows the list below.
* [`kubernetes.io/hostname`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-hostname)
* [`failure-domain.beta.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone)
* [`failure-domain.beta.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesioregion)
* [`topology.kubernetes.io/zone`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone)
* [`topology.kubernetes.io/region`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesioregion)
* [`beta.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#beta-kubernetes-io-instance-type)
* [`node.kubernetes.io/instance-type`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#nodekubernetesioinstance-type)
* [`kubernetes.io/os`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-os)
* [`kubernetes.io/arch`](/zh/docs/reference/kubernetes-api/labels-annotations-taints/#kubernetes-io-arch)
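For instance, a minimal sketch of a Pod that selects nodes by one of these built-in labels through `nodeSelector` (the Pod and container names here are illustrative, not from this page) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-linux          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/os: linux     # built-in label populated by the kubelet
  containers:
  - name: nginx
    image: nginx
```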
{{< note >}}
<!--
@@ -828,4 +838,3 @@ resource allocation decisions.
Once a Pod is assigned to a Node, the kubelet runs the Pod and allocates node-local resources.
The [Topology Manager](/zh/docs/tasks/administer-cluster/topology-manager/)
can take part in node-level resource allocation decisions.


@@ -1,9 +1,21 @@
---
title: Pod Overhead
content_type: concept
weight: 20
weight: 30
---
<!--
---
reviewers:
- dchen1107
- egernst
- tallclair
title: Pod Overhead
content_type: concept
weight: 30
---
-->
<!-- overview -->
{{< feature-state for_k8s_version="v1.18" state="beta" >}}
@@ -226,8 +238,9 @@ cgroups directly on the node.
First, on the particular node, determine the Pod identifier:
-->
Check the Pod's memory cgroups on the node where the workload is running. In the following example, the CRI-compatible container runtime command line tool [`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md) is used on that node.
This is an advanced example to show PodOverhead behavior; users are not expected to need to check cgroups directly on the node.
Check the Pod's memory cgroups on the node where the workload is running. In the following example,
the CRI-compatible container runtime command line tool
[`crictl`](https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md) is used on that node.
First, on the particular node, determine the Pod identifier:
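For context, the memory value inspected here comes from the overhead declared on the Pod's RuntimeClass, added on top of the container limits. A minimal sketch of such a RuntimeClass, assuming the `node.k8s.io/v1` API (the name, handler, and amounts are illustrative), might be:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc            # illustrative name
handler: kata-fc           # illustrative handler configured in the container runtime
overhead:
  podFixed:
    memory: "120Mi"        # added on top of the containers' memory when sizing the Pod cgroup
    cpu: "250m"
```

The pod-level memory cgroup value checked below should reflect the workload's container memory limits plus this `podFixed` memory.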
@@ -298,8 +311,11 @@ running with a defined Overhead. This functionality is not available in the 1.9
kube-state-metrics, but is expected in a following release. Users will need to build kube-state-metrics
from source in the meantime.
-->
The `kube_pod_overhead` metric in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) can help identify when PodOverhead is being used and to help observe the stability of workloads running with a defined overhead.
This functionality is not available in the 1.9 release of kube-state-metrics, but is expected in a following release. Until then, users will need to build kube-state-metrics from source.
The `kube_pod_overhead` metric in [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) can help
identify when PodOverhead is being used and to help observe the stability of workloads
running with a defined overhead.
This functionality is not available in the 1.9 release of kube-state-metrics, but is expected in a following release.
Until then, users will need to build kube-state-metrics from source.
## {{% heading "whatsnext" %}}
@@ -310,4 +326,3 @@ from source in the meantime.
* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
* [PodOverhead 设计](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)


@@ -1,16 +1,18 @@
---
title: Resource Bin Packing for Extended Resources
content_type: concept
weight: 30
weight: 80
---
<!--
---
reviewers:
- bsalamat
- k82cn
- ahg-g
title: Resource Bin Packing for Extended Resources
content_type: concept
weight: 30
weight: 80
---
-->
<!-- overview -->
@@ -249,4 +251,3 @@ CPU = resourceScoringFunction((2+6),8)
NodeScore = ((5 * 5) + (7 * 1) + (10 * 3)) / (5 + 1 + 3)
          = 62 / 9
          = 7
```
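The weights 5, 1, and 3 in the score above are the per-resource weights configured for the `RequestedToCapacityRatio` scoring function. A minimal sketch of that configuration, assuming the `kubescheduler.config.k8s.io/v1beta1` `RequestedToCapacityRatioArgs` schema (the resource names and shape points are illustrative), might look like:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- pluginConfig:
  - name: RequestedToCapacityRatio
    args:
      shape:                 # bin packing: score rises as utilization rises
      - utilization: 0
        score: 0
      - utilization: 100
        score: 10
      resources:             # weights used to combine the per-resource scores
      - name: intel.com/foo
        weight: 5
      - name: memory
        weight: 1
      - name: cpu
        weight: 3
```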


@@ -1,14 +1,16 @@
---
title: Scheduler Performance Tuning
content_type: concept
weight: 80
weight: 100
---
<!--
---
reviewers:
- bsalamat
title: Scheduler Performance Tuning
content_type: concept
weight: 80
weight: 100
---
-->
<!-- overview -->
@@ -81,11 +83,11 @@ kube-scheduler performs as if you had set the value to 100.
To change the value, edit the
[kube-scheduler configuration file](/docs/reference/config-api/kube-scheduler-config.v1beta1/)
and then restart the scheduler.
In many cases, the configuration file can be found at `/etc/kubernetes/config/kube-scheduler.yaml`.
In many cases, the configuration file can be found at `/etc/kubernetes/config/kube-scheduler.yaml`
-->
To change this value, edit the [kube-scheduler configuration file](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
and then restart the scheduler.
In many cases, the configuration file can be found at `/etc/kubernetes/config/kube-scheduler.yaml`.
To change this value, edit the [kube-scheduler configuration file](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
and then restart the scheduler.
In most cases, the configuration file is `/etc/kubernetes/config/kube-scheduler.yaml`
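A minimal sketch of such a configuration file, assuming the `kubescheduler.config.k8s.io/v1beta1` schema (the kubeconfig path and the value 50 are illustrative), might look like:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf   # illustrative path
percentageOfNodesToScore: 50                   # score only 50% of the feasible nodes
```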
<!--
After you have made this change, you can run
@@ -195,7 +197,7 @@ minimum value of 50 nodes.
In addition, a minimum value of 50 nodes is hardcoded into the program.
<!--
In clusters with less than 50 feasible nodes, the scheduler still
{{< note >}} In clusters with less than 50 feasible nodes, the scheduler still
checks all the nodes because there are not enough feasible nodes to stop
the scheduler's search early.
@@ -294,8 +296,8 @@ After going over all the Nodes, it goes back to Node 1.
-->
After going over all the Nodes, it goes back to Node 1 and starts from the beginning.
## {{% heading "whatsnext" %}}
* Check the [kube-scheduler configuration reference (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)
<!-- * Check the [kube-scheduler configuration reference (v1beta1)](/docs/reference/config-api/kube-scheduler-config.v1beta1/) -->
* See the [kube-scheduler configuration reference (v1beta1)](/zh/docs/reference/config-api/kube-scheduler-config.v1beta1/)


@@ -1,15 +1,17 @@
---
title: Scheduling Framework
content_type: concept
weight: 70
weight: 90
---
<!--
---
reviewers:
- ahg-g
title: Scheduling Framework
content_type: concept
weight: 70
weight: 90
---
-->
<!-- overview -->
@@ -17,17 +19,15 @@ weight: 70
{{< feature-state for_k8s_version="1.15" state="alpha" >}}
<!--
The scheduling framework is a plugable architecture for the Kubernetes Scheduler.
It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled
into the scheduler. The APIs allow most scheduling features to be implemented as
plugins, while keeping the
scheduling "core" simple and maintainable. Refer to the [design proposal of the
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
It adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler. The APIs allow most scheduling features to be implemented as plugins, while keeping the
scheduling "core" lightweight and maintainable. Refer to the [design proposal of the
scheduling framework][kep] for more technical information on the design of the
framework.
-->
The scheduling framework is a pluggable architecture for the Kubernetes scheduler.
The scheduling framework adds a new set of "plugin" APIs to the existing scheduler.
Plugins are compiled into the scheduler program.
The scheduling framework is a pluggable architecture for the Kubernetes scheduler;
it adds a new set of "plugin" APIs to the existing scheduler. Plugins are compiled into the scheduler.
These APIs allow most scheduling features to be implemented as plugins, while keeping the scheduling "core" simple and maintainable.
Refer to the [design proposal of the scheduling framework](https://github.com/kubernetes/enhancements/blob/master/keps/sig-scheduling/624-scheduling-framework/README.md)
for more technical information on the design of the framework.
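For example, a minimal sketch of enabling and disabling plugins at one extension point through the scheduler configuration, assuming the `kubescheduler.config.k8s.io/v1beta1` schema (the custom plugin name is illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    score:
      disabled:
      - name: NodeResourcesLeastAllocated   # turn off a default score plugin
      enabled:
      - name: MyCustomScorePlugin           # illustrative plugin compiled into the scheduler
        weight: 2
```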
@@ -333,9 +333,9 @@ is approved, it is sent to the [PreBind](#pre-bind) phase.
-->
{{< note >}}
While any plugin can access the list of "waiting" Pods and approve them
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)),
we expect only the permit plugins to approve binding of reserved Pods that are in the "waiting" state.
Once a Pod is approved, it goes to the [PreBind](#pre-bind) phase.
(see [`FrameworkHandle`](https://git.k8s.io/enhancements/keps/sig-scheduling/624-scheduling-framework#frameworkhandle)),
we expect only the permit plugins to approve binding of reserved Pods that are in the "waiting" state.
Once a Pod is approved, it is sent to the [PreBind](#pre-bind) phase.
{{< /note >}}
<!--
@@ -472,4 +472,3 @@ Learn more at [multiple profiles](/docs/reference/scheduling/config/#multiple-pr
If you are using Kubernetes v1.18 or later, you can configure a set of plugins as a scheduler
profile and then define different profiles to fit various kinds of workload.
Learn more about [multiple profiles](/zh/docs/reference/scheduling/config/#multiple-profiles).
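A minimal sketch of a configuration with two profiles, assuming the `kubescheduler.config.k8s.io/v1beta1` schema (the second profile name is illustrative), might be:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
- schedulerName: no-scoring-scheduler   # illustrative profile that skips scoring
  plugins:
    preScore:
      disabled:
      - name: '*'
    score:
      disabled:
      - name: '*'
```

Pods select a profile by setting `.spec.schedulerName` to that profile's `schedulerName`.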


@@ -357,9 +357,9 @@ true. The following taints are built in:
the NodeCondition `Ready` being "`False`".
* `node.kubernetes.io/unreachable`: Node is unreachable from the node
controller. This corresponds to the NodeCondition `Ready` being "`Unknown`".
* `node.kubernetes.io/out-of-disk`: Node becomes out of disk.
* `node.kubernetes.io/memory-pressure`: Node has memory pressure.
* `node.kubernetes.io/disk-pressure`: Node has disk pressure.
* `node.kubernetes.io/pid-pressure`: Node has PID pressure.
* `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
* `node.kubernetes.io/unschedulable`: Node is unschedulable.
* `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started
@@ -371,9 +371,9 @@ true. The following taints are built in:
* `node.kubernetes.io/not-ready`: Node is not ready. This corresponds to the NodeCondition `Ready` being "`False`".
* `node.kubernetes.io/unreachable`: Node is unreachable from the node controller. This corresponds to the NodeCondition `Ready` being "`Unknown`".
* `node.kubernetes.io/out-of-disk`: Node becomes out of disk.
* `node.kubernetes.io/memory-pressure`: Node has memory pressure.
* `node.kubernetes.io/disk-pressure`: Node has disk pressure.
* `node.kubernetes.io/pid-pressure`: Node has PID pressure.
* `node.kubernetes.io/network-unavailable`: Node's network is unavailable.
* `node.kubernetes.io/unschedulable`: Node is unschedulable.
* `node.cloudprovider.kubernetes.io/uninitialized`: When the kubelet is started with an "external" cloud provider,
@@ -486,7 +486,7 @@ breaking.
* `node.kubernetes.io/memory-pressure`
* `node.kubernetes.io/disk-pressure`
* `node.kubernetes.io/out-of-disk` (*only for critical pods*)
* `node.kubernetes.io/pid-pressure` (1.14 or later)
* `node.kubernetes.io/unschedulable` (1.10 or later)
* `node.kubernetes.io/network-unavailable` (*host network only*)
-->
@@ -498,7 +498,7 @@ The DaemonSet controller automatically adds the following `NoSchedule` tolerations to all daemons
* `node.kubernetes.io/memory-pressure`
* `node.kubernetes.io/disk-pressure`
* `node.kubernetes.io/out-of-disk` (*only for critical pods*)
* `node.kubernetes.io/pid-pressure` (1.14 or later)
* `node.kubernetes.io/unschedulable` (1.10 or later)
* `node.kubernetes.io/network-unavailable` (*host network only*)
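For reference, each automatically added entry has the shape of an ordinary toleration in the Pod spec; a minimal sketch for one of them (shown here for `node.kubernetes.io/memory-pressure`) looks like:

```yaml
# Fragment of a Pod (or DaemonSet Pod template) spec; the DaemonSet
# controller adds equivalent entries automatically.
tolerations:
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
```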