Update reserve-compute-resources.md
Update the zh translation to match the master branch; the original appears to be based on release-1.16.
parent f2ff9c02ac · commit 1bcc87a3f4
@@ -5,6 +5,7 @@ reviewers:
 - dashpole
 title: 为系统守护进程预留计算资源
 content_type: task
+min-kubernetes-server-version: 1.8
 ---
 <!--
 ---

@@ -14,6 +15,7 @@ reviewers:
 - dashpole
 title: Reserve Compute Resources for System Daemons
 content_type: task
+min-kubernetes-server-version: 1.8
 ---
 -->

@@ -35,14 +37,16 @@ Kubernetes 的节点可以按照 `Capacity` 调度。默认情况下 pod 能够
 `kubelet` 公开了一个名为 `Node Allocatable` 的特性,有助于为系统守护进程预留计算资源。Kubernetes 推荐集群管理员按照每个节点上的工作负载密度配置 `Node Allocatable`。
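
As a small, hedged illustration of where `Node Allocatable` shows up on a live node (the node name is a placeholder and the command is only a sketch):

```shell
# <node-name> is a placeholder. The Capacity block is the node's raw resources;
# the Allocatable block is what remains for pods after the reservations
# described on this page are subtracted.
kubectl describe node <node-name> | grep -A 6 -E "^(Capacity|Allocatable):"
```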

 ## {{% heading "prerequisites" %}}

 {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}

 <!--
 Your Kubernetes server must be at or later than version 1.17 to use
 the kubelet command line option `--reserved-cpus` to set an
 [explicitly reserved CPU list](#explicitly-reserved-cpu-list).
 -->
 您的 kubernetes 服务器版本必须不低于 1.17 版本,才能使用 kubelet 命令行选项 `--reserved-cpus` 来设置[显式保留的 CPU 列表](#explicitly-reserved-cpu-list)。

 <!-- steps -->

@@ -152,13 +156,7 @@ be configured to use the `systemd` cgroup driver.
 `kube-reserved` is meant to capture resource reservation for kubernetes system
 daemons like the `kubelet`, `container runtime`, `node problem detector`, etc.
 It is not meant to reserve resources for system daemons that are run as pods.
-`kube-reserved` is typically a function of `pod density` on the nodes. [This
-performance dashboard](http://node-perf-dash.k8s.io/#/builds) exposes `cpu` and
-`memory` usage profiles of `kubelet` and `docker engine` at multiple levels of
-pod density. [This blog
-post](https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node-dashboard)
-explains how the dashboard can be interpreted to come up with a suitable
-`kube-reserved` reservation.
+`kube-reserved` is typically a function of `pod density` on the nodes.

 In addition to `cpu`, `memory`, and `ephemeral-storage`, `pid` may be
 specified to reserve the specified number of process IDs for

@@ -185,8 +183,6 @@ exist. Kubelet will fail if an invalid cgroup is specified.

 `kube-reserved` 是为了给诸如 `kubelet`、`container runtime`、`node problem detector` 等 kubernetes 系统守护进程预留资源。
 这并不代表要给以 pod 形式运行的系统守护进程保留资源。`kube-reserved` 通常是节点上 `pod 密度` 的函数。
-[这个性能仪表盘](http://node-perf-dash.k8s.io/#/builds) 从 pod 密度的多个层面展示了 `kubelet` 和 `docker engine` 的 `cpu` 和 `内存` 使用情况。
-[这个博文](https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node-dashboard)解释了如何仪表板以提出合适的 `kube-reserved` 预留。

 除了 `cpu`、`memory` 和 `ephemeral-storage` 之外,还可以指定 `pid`,为 kubernetes 系统守护进程预留指定数量的进程 ID。
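
A minimal sketch of how this reservation translates into kubelet flags; the values and the cgroup name are hypothetical placeholders, not recommendations, and should be sized from the pod density you actually observe:

```shell
# Hypothetical values; size them from observed pod density on your nodes.
# --kube-reserved-cgroup is only needed when kube-reserved enforcement is
# enabled, and the named cgroup must already exist.
kubelet --kube-reserved=cpu=100m,memory=500Mi,ephemeral-storage=1Gi,pid=1000 \
  --kube-reserved-cgroup=/kubelet.slice
```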

@@ -243,7 +239,7 @@ exist. Kubelet will fail if an invalid cgroup is specified.
 <!--
 ### Explicitly Reserved CPU List
 -->
-### 明确保留的 CPU 列表
+### 显式保留的 CPU 列表
 {{< feature-state for_k8s_version="v1.17" state="stable" >}}

 - **Kubelet Flag**: `--reserved-cpus=0-3`
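
A hedged example of the flag above; the CPU IDs are illustrative and should match cores you have actually set aside for system and Kubernetes daemons:

```shell
# Hypothetical CPU IDs: reserve CPUs 0-3 for OS system daemons, interrupts/timers,
# and Kubernetes daemons.
kubelet --reserved-cpus=0-3
```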

@@ -283,7 +279,7 @@ For example: in Centos, you can do this using the tuned toolset.
 Memory pressure at the node level leads to System OOMs which affects the entire
 node and all pods running on it. Nodes can go offline temporarily until memory
 has been reclaimed. To avoid (or reduce the probability of) system OOMs kubelet
-provides [`Out of Resource`](./out-of-resource.md) management. Evictions are
+provides [`Out of Resource`](/docs/tasks/administer-cluster/out-of-resource/) management. Evictions are
 supported for `memory` and `ephemeral-storage` only. By reserving some memory via
 `--eviction-hard` flag, the `kubelet` attempts to `evict` pods whenever memory
 availability on the node drops below the reserved value. Hypothetically, if

@@ -296,7 +292,7 @@ available for pods.
 - **Kubelet Flag**: `--eviction-hard=[memory.available<500Mi]`

 节点级别的内存压力将导致系统内存不足,这将影响到整个节点及其上运行的所有 pod。节点可以暂时离线直到内存已经回收为止。
-为了防止(或减少可能性)系统内存不足,kubelet 提供了[资源不足](./out-of-resource.md)管理。驱逐操作只支持 `memory` 和 `ephemeral-storage`。
+为了防止系统内存不足(或降低其发生概率),kubelet 提供了[资源不足](/docs/tasks/administer-cluster/out-of-resource/)管理。驱逐操作只支持 `memory` 和 `ephemeral-storage`。
 通过 `--eviction-hard` 标志预留一些内存后,当节点上的可用内存降至保留值以下时,`kubelet` 将尝试`驱逐` pod。
 假设,如果节点上不存在系统守护进程,pod 将不能使用超过 `capacity-eviction-hard` 的资源。因此,为驱逐而预留的资源对 pod 是不可用的。
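
As a hedged illustration of this reservation-by-eviction (the threshold is the example value from the flag shown above; note the quoting a shell requires around `<`):

```shell
# Evict pods once free memory on the node drops below 500Mi; memory reserved
# this way is effectively unavailable to pods.
kubelet --eviction-hard='memory.available<500Mi'
```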

@@ -310,7 +306,7 @@ The scheduler treats `Allocatable` as the available `capacity` for pods.
 `kubelet` enforce `Allocatable` across pods by default. Enforcement is performed
 by evicting pods whenever the overall usage across all pods exceeds
 `Allocatable`. More details on eviction policy can be found
-[here](./out-of-resource.md#eviction-policy). This enforcement is controlled by
+[here](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy). This enforcement is controlled by
 specifying `pods` value to the kubelet flag `--enforce-node-allocatable`.

@@ -326,7 +322,7 @@ respectively.

 调度器将 `Allocatable` 视为 pod 的可用 `capacity`。

-`kubelet` 默认在 pod 中执行 `Allocatable`。无论何时,如果所有 pod 的总用量超过了 `Allocatable`,驱逐 pod 的措施将被执行。有关驱逐策略的更多细节可以在[这里](./out-of-resource.md#eviction-policy)找到。请通过设置 kubelet `--enforce-node-allocatable` 标志值为 `pods` 控制这个措施。
+`kubelet` 默认对 pod 强制执行 `Allocatable` 限制。一旦所有 pod 的总用量超过了 `Allocatable`,就会触发驱逐 pod 的操作。有关驱逐策略的更多细节可以在[这里](/docs/tasks/administer-cluster/out-of-resource/#eviction-policy)找到。可以通过将 kubelet 的 `--enforce-node-allocatable` 标志设置为 `pods` 来控制这一行为。

 可选地,通过在同一标志中同时指定 `kube-reserved` 和 `system-reserved` 值,可以让 `kubelet` 强制执行 `kube-reserved` 和 `system-reserved`。请注意,要强制执行 `kube-reserved` 或者 `system-reserved`,需要分别指定 `--kube-reserved-cgroup` 或者 `--system-reserved-cgroup`。
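
A minimal sketch combining these flags; all values and both cgroup names are placeholders, and the cgroups must already exist before the kubelet starts:

```shell
# Hypothetical example: enforce Allocatable on pods, plus the kube-reserved and
# system-reserved reservations on their own (pre-created) cgroups.
kubelet --enforce-node-allocatable=pods,kube-reserved,system-reserved \
  --kube-reserved=cpu=500m,memory=1Gi \
  --kube-reserved-cgroup=/kubelet.slice \
  --system-reserved=cpu=500m,memory=1Gi \
  --system-reserved-cgroup=/system.slice
```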

@@ -411,55 +407,3 @@ usage is higher than `31.5Gi` or `storage` is greater than `90Gi`
 当 pod 的内存使用总量超过 `28.5Gi` 或者磁盘使用总量超过 `88Gi` 时,Kubelet 将会驱逐它们。如果节点上的所有进程都尽可能多的使用 CPU,则 pod 加起来不能使用超过 `14.5 CPUs` 的资源。

 当没有执行 `kube-reserved` 和/或 `system-reserved` 且系统守护进程使用量超过其预留时,如果节点内存用量高于 `31.5Gi` 或`存储`大于 `90Gi`,kubelet 将会驱逐 pod。
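
To make the arithmetic behind these numbers explicit, here is a hedged reconstruction; the node capacity and flag values below are inferred for illustration and may differ from the example configuration on the full page:

```shell
# Assumed node capacity: 16 CPU, 32Gi memory, 100Gi ephemeral storage.
kubelet --kube-reserved=cpu=1,memory=2Gi,ephemeral-storage=1Gi \
  --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi \
  --eviction-hard='memory.available<500Mi,nodefs.available<10%'
# Allocatable CPU     = 16    - 1   - 0.5         = 14.5
# Allocatable memory  = 32Gi  - 2Gi - 1Gi - 0.5Gi = 28.5Gi
# Allocatable storage = 100Gi - 1Gi - 1Gi - 10Gi  = 88Gi
# Without kube/system-reserved enforcement, eviction still triggers once node
# memory usage reaches 32Gi - 0.5Gi = 31.5Gi, or storage reaches 100Gi - 10Gi = 90Gi.
```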

-<!--
-## Feature Availability
-
-As of Kubernetes version 1.2, it has been possible to **optionally** specify
-`kube-reserved` and `system-reserved` reservations. The scheduler switched to
-using `Allocatable` instead of `Capacity` when available in the same release.
-
-As of Kubernetes version 1.6, `eviction-thresholds` are being considered by
-computing `Allocatable`. To revert to the old behavior set
-`--experimental-allocatable-ignore-eviction` kubelet flag to `true`.
-
-As of Kubernetes version 1.6, `kubelet` enforces `Allocatable` on pods using
-control groups. To revert to the old behavior unset `--enforce-node-allocatable`
-kubelet flag. Note that unless `--kube-reserved`, or `--system-reserved` or
-`--eviction-hard` flags have non-default values, `Allocatable` enforcement does
-not affect existing deployments.
-
-As of Kubernetes version 1.6, `kubelet` launches pods in their own cgroup
-sandbox in a dedicated part of the cgroup hierarchy it manages. Operators are
-required to drain their nodes prior to upgrade of the `kubelet` from prior
-versions in order to ensure pods and their associated containers are launched in
-the proper part of the cgroup hierarchy.
-
-As of Kubernetes version 1.7, `kubelet` supports specifying `storage` as a resource
-for `kube-reserved` and `system-reserved`.
-
-As of Kubernetes version 1.8, the `storage` key name was changed to `ephemeral-storage`
-for the alpha release.
--->
-## 可用特性
-
-截至 Kubernetes 1.2 版本,已经可以**可选**的指定 `kube-reserved` 和 `system-reserved` 预留。当在相同的发布中都可用时,调度器将转为使用 `Allocatable` 替代 `Capacity`。
-
-截至 Kubernetes 1.6 版本,`eviction-thresholds` 是通过计算 `Allocatable` 进行考虑。要使用旧版本的行为,请设置 `--experimental-allocatable-ignore-eviction` kubelet 标志为 `true`。
-
-截至 Kubernetes 1.6 版本,`kubelet` 使用控制组在 pod 上执行 `Allocatable`。要使用旧版本行为,请取消设置 `--enforce-node-allocatable` kubelet 标志。请注意,除非 `--kube-reserved` 或者 `--system-reserved` 或者 `--eviction-hard` 标志没有默认参数,否则 `Allocatable` 的实施不会影响已经存在的 deployment。
-
-截至 Kubernetes 1.6 版本,`kubelet` 在 pod 自己的 cgroup 沙箱中启动它们,这个 cgroup 沙箱在 `kubelet` 管理的 cgroup 层次结构中的一个独占部分中。在从前一个版本升级 kubelet 之前,要求操作员 drain 节点,以保证 pod 及其关联的容器在 cgroup 层次结构中合适的部分中启动。
-
-截至 Kubernetes 1.7 版本,`kubelet` 支持指定 `storage` 为 `kube-reserved` 和 `system-reserved` 的资源。
-
-截至 Kubernetes 1.8 版本,对于 alpha 版本,`storage` 键值名称已更改为 `ephemeral-storage`。
-
-<!--
-As of Kubernetes version 1.17, you can optionally specify
-explicit cpuset by `reserved-cpus` as CPUs reserved for OS system
-daemons/interrupts/timers and Kubernetes daemons.
--->
-从 Kubernetes 1.17 版本开始,可以选择将 `reserved-cpus` 显式 cpuset 指定为操作系统守护程序、中断、计时器和 Kubernetes 守护程序保留的 CPU。