Merge pull request #30211 from superleo/branch1

[zh]Concept files to sync for 1.22 #29325 task 12
This commit is contained in:
Kubernetes Prow Robot 2021-11-07 16:40:52 -08:00 committed by GitHub
commit 565f0259f0
6 changed files with 264 additions and 607 deletions

View File

@ -204,6 +204,11 @@ To mark a Node unschedulable, run:
```shell
kubectl cordon $NODENAME
```
<!--
See [Safely Drain a Node](/docs/tasks/administer-cluster/safely-drain-node/)
for more details.
-->
更多细节参考[安全腾空节点](/zh/docs/tasks/administer-cluster/safely-drain-node/)。
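As a quick, hedged illustration of the cordon-and-drain workflow referenced above (`$NODENAME` is a placeholder; the extra flags are common choices, not requirements):

```shell
# Evict the pods from the node gracefully before maintenance.
# --ignore-daemonsets is usually needed because DaemonSet pods would
# otherwise block the drain; --delete-emptydir-data discards emptyDir data.
kubectl drain $NODENAME --ignore-daemonsets --delete-emptydir-data

# After maintenance, make the node schedulable again.
kubectl uncordon $NODENAME
```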
<!--
Pods that are part of a {{< glossary_tooltip term_id="daemonset" >}} tolerate
@ -279,9 +284,9 @@ The `conditions` field describes the status of all `Running` nodes. Examples of
{{< table caption = "Node conditions, and a description of when each condition applies." >}}
| Node Condition | Description |
|----------------|-------------|
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
| `DiskPressure` | `True` if there is insufficient free space on the node for adding new pods, otherwise `False` |
| `MemoryPressure` | `True` if pressure exists on the node memory - that is, if the node memory is low; otherwise `False` |
| `Ready` | `True` if the node is healthy and ready to accept pods, `False` if the node is not healthy and is not accepting pods, and `Unknown` if the node controller has not heard from the node in the last `node-monitor-grace-period` (default is 40 seconds) |
| `DiskPressure` | `True` if pressure exists on the disk size—that is, if the disk capacity is low; otherwise `False` |
| `MemoryPressure` | `True` if pressure exists on the node memory—that is, if the node memory is low; otherwise `False` |
| `PIDPressure` | `True` if pressure exists on the processes - that is, if there are too many processes on the node; otherwise `False` |
| `NetworkUnavailable` | `True` if the network for the node is not correctly configured, otherwise `False` |
{{< /table >}}
@ -290,7 +295,7 @@ The `conditions` field describes the status of all `Running` nodes. Examples of
| 节点状况 | 描述 |
|----------------|-------------|
| `Ready` | 如节点是健康的并已经准备好接收 Pod 则为 `True`;`False` 表示节点不健康而且不能接收 Pod;`Unknown` 表示节点控制器在最近 `node-monitor-grace-period` 期间(默认 40 秒)没有收到节点的消息 |
| `DiskPressure` | `True` 表示节点的空闲空间不足以用于添加新 Pod, 否则为 `False` |
| `DiskPressure` | `True` 表示节点存在磁盘空间压力,即磁盘可用量低,否则为 `False` |
| `MemoryPressure` | `True` 表示节点存在内存压力,即节点内存可用量低,否则为 `False` |
| `PIDPressure` | `True` 表示节点存在进程压力,即节点上进程过多;否则为 `False` |
| `NetworkUnavailable` | `True` 表示节点网络配置不正确;否则为 `False` |
@ -308,9 +313,11 @@ Condition被保护起来的节点在其规约中被标记为不可调度Un
{{< /note >}}
<!--
The node condition is represented as a JSON object. For example, the following response describes a healthy node.
In the Kubernetes API, a node's condition is represented as part of the `.status`
of the Node resource. For example, the following JSON structure describes a healthy node:
-->
节点条件使用 JSON 对象表示。例如,下面的响应描述了一个健康的节点。
在 Kubernetes API 中,节点的状况表示为节点资源的 `.status` 的一部分。
例如,以下 JSON 结构描述了一个健康节点:
```json
"conditions": [
@ -326,11 +333,23 @@ The node condition is represented as a JSON object. For example, the following r
```
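To inspect these conditions on a live cluster, one option (a sketch; replace `$NODENAME` with a real node name) is:

```shell
# Print the conditions array from the Node's .status
kubectl get node $NODENAME -o jsonpath='{.status.conditions}'

# Or read the same information in the human-readable summary
kubectl describe node $NODENAME
```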
<!--
If the Status of the Ready condition remains `Unknown` or `False` for longer than the `pod-eviction-timeout`, an argument is passed to the {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}), all the Pods on the node are scheduled for deletion by the Node Controller. The default eviction timeout duration is **five minutes**. In some cases when the node is unreachable, the apiserver is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
If the `status` of the Ready condition remains `Unknown` or `False` for longer
than the `pod-eviction-timeout` (an argument passed to the
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager"
>}}), then the [node controller](#node-controller) triggers
{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
for all Pods assigned to that node. The default eviction timeout duration is
**five minutes**.
In some cases when the node is unreachable, the API server is unable to communicate
with the kubelet on the node. The decision to delete the pods cannot be communicated to
the kubelet until communication with the API server is re-established. In the meantime,
the pods that are scheduled for deletion may continue to run on the partitioned node.
-->
如果 Ready 条件处于 `Unknown` 或者 `False` 状态的时间超过了 `pod-eviction-timeout` 值,
如果 Ready 条件的 `status` 处于 `Unknown` 或者 `False` 状态的时间超过了 `pod-eviction-timeout` 值
(一个传递给 {{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}} 的参数),
节点上的所有 Pod 都会被节点控制器计划删除。默认的逐出超时时长为 **5 分钟**
[节点控制器](#node-controller) 会对节点上的所有 Pod 触发
{{< glossary_tooltip text="API-发起的驱逐" term_id="api-eviction" >}}。
默认的逐出超时时长为 **5 分钟**
某些情况下当节点不可达时API 服务器不能和其上的 kubelet 通信。
删除 Pod 的决定不能传达给 kubelet直到它重新建立和 API 服务器的连接为止。
与此同时,被计划删除的 Pod 可能会继续在游离的节点上运行。
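For reference, the timeout discussed above is a kube-controller-manager flag; a hedged sketch of how you might check it on a kubeadm-style control plane (the manifest path is an assumption about your setup):

```shell
# Look for the eviction timeout in the controller manager's static Pod manifest.
# If the flag is absent, the default of 5m0s applies.
grep 'pod-eviction-timeout' /etc/kubernetes/manifests/kube-controller-manager.yaml
```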
@ -389,17 +408,74 @@ to [reserve compute resources](/docs/tasks/administer-cluster/reserve-compute-re
<!--
### Info
Describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name.
This information is gathered by Kubelet from the node.
Describes general information about the node, such as kernel version, Kubernetes
version (kubelet and kube-proxy version), container runtime details, and which
operating system the node uses.
The kubelet gathers this information from the node and publishes it into
the Kubernetes API.
-->
### 信息 {#info}
关于节点的一般性信息例如内核版本、Kubernetes 版本(`kubelet` 和 `kube-proxy` 版本)、
Docker 版本(如果使用了)和操作系统名称。这些信息由 `kubelet` 从节点上搜集而来。
描述节点的一般信息,如内核版本、Kubernetes 版本(`kubelet` 和 `kube-proxy` 版本)、
容器运行时详细信息,以及节点使用的操作系统。
`kubelet` 从节点收集这些信息并将其发布到 Kubernetes API。
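One way to view the information that the kubelet publishes (a minimal sketch; `$NODENAME` is a placeholder):

```shell
# Kernel version, OS image, container runtime and kubelet/kube-proxy versions
kubectl get node $NODENAME -o jsonpath='{.status.nodeInfo}'
```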
<!--
### Node Controller
## Heartbeats
Heartbeats, sent by Kubernetes nodes, help your cluster determine the
availability of each node, and to take action when failures are detected.
For nodes there are two forms of heartbeats:
* updates to the `.status` of a Node
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) objects
within the `kube-node-lease`
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
Each Node has an associated Lease object.
Compared to updates to `.status` of a Node, a Lease is a lightweight resource.
Using Leases for heartbeats reduces the performance impact of these updates
for large clusters.
The kubelet is responsible for creating and updating the `.status` of Nodes,
and for updating their related Leases.
- The kubelet updates the node's `.status` either when there is change in status
or if there has been no update for a configured interval. The default interval
for `.status` updates to Nodes is 5 minutes, which is much longer than the 40
second default timeout for unreachable nodes.
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from
updates to the Node's `.status`. If the Lease update fails, the kubelet retries,
using exponential backoff that starts at 200 milliseconds and is capped at 7 seconds.
-->
## 心跳 {#heartbeats}
Kubernetes 节点发送的心跳帮助你的集群确定每个节点的可用性,并在检测到故障时采取行动。
对于节点,有两种形式的心跳:
* 更新节点的 `.status`
* [Lease](/docs/reference/kubernetes-api/cluster-resources/lease-v1/) 对象
`kube-node-lease` {{<glossary_tooltip term_id="namespace" text="命名空间">}}中。
每个节点都有一个关联的 Lease 对象。
与 Node 的 `.status` 更新相比,`Lease` 是一种轻量级资源。
使用 `Lease` 来发送心跳可以减少大型集群中这些更新对性能的影响。
kubelet 负责创建和更新节点的 `.status`,以及更新它们对应的 `Lease`
- 当状态发生变化时,或者在配置的时间间隔内没有更新事件时,kubelet 会更新 `.status`。
`.status` 更新的默认间隔为 5 分钟(比不可达节点的 40 秒默认超时时间长很多)。
- `kubelet` 会每 10 秒(默认更新间隔时间)创建并更新其 `Lease` 对象。
`Lease` 的更新独立于 Node 的 `.status` 更新而发生。
如果 `Lease` 的更新操作失败,`kubelet` 会采用指数回退机制,从 200 毫秒开始
重试,最长重试间隔为 7 秒钟。
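To observe the Lease-based heartbeats described above (a sketch; each Lease is named after its node):

```shell
# List the per-node Lease objects
kubectl get leases -n kube-node-lease

# renewTime should advance roughly every 10 seconds while the kubelet is healthy
kubectl get lease $NODENAME -n kube-node-lease -o yaml
```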
<!--
## Node Controller
The node {{< glossary_tooltip text="controller" term_id="controller" >}} is a
Kubernetes control plane component that manages various aspects of nodes.
@ -407,7 +483,7 @@ Kubernetes control plane component that manages various aspects of nodes.
The node controller has multiple roles in a node's life. The first is assigning a
CIDR block to the node when it is registered (if CIDR assignment is turned on).
-->
### 节点控制器 {#node-controller}
## 节点控制器 {#node-controller}
节点{{< glossary_tooltip text="控制器" term_id="controller" >}}是
Kubernetes 控制面组件,管理节点的方方面面。
@ -428,74 +504,35 @@ controller deletes the node from its list of nodes.
<!--
The third is monitoring the nodes' health. The node controller is
responsible for updating the NodeReady condition of NodeStatus to
ConditionUnknown when a node becomes unreachable (i.e. the node controller stops
receiving heartbeats for some reason, e.g. due to the node being down), and then later evicting
all the pods from the node (using graceful termination) if the node continues
to be unreachable. (The default timeouts are 40s to start reporting
ConditionUnknown and 5m after that to start evicting pods.)
responsible for:
- In the case that a node becomes unreachable, updating the NodeReady condition
within the Node's `.status`. In this case the node controller sets the
NodeReady condition to `ConditionUnknown`.
- If a node remains unreachable: triggering
[API-initiated eviction](/docs/concepts/scheduling-eviction/api-eviction/)
for all of the Pods on the unreachable node. By default, the node controller
waits 5 minutes between marking the node as `ConditionUnknown` and submitting
the first eviction request.
The node controller checks the state of each node every `--node-monitor-period` seconds.
-->
第三个是监控节点的健康情况。节点控制器负责在节点不可达
(即,节点控制器因为某些原因没有收到心跳,例如节点宕机)时,
将节点状态的 `NodeReady` 状况更新为 "`Unknown`"。
如果节点接下来持续处于不可达状态,节点控制器将逐出节点上的所有 Pod使用体面终止
默认情况下 40 秒后开始报告 "`Unknown`",在那之后 5 分钟开始逐出 Pod。
第三个是监控节点的健康状况。节点控制器负责:
- 在节点不可达的情况下,更新 Node 的 `.status` 中的 `NodeReady` 状况。
在这种情况下,节点控制器将 `NodeReady` 状况更新为 `ConditionUnknown`。
- 如果节点仍然无法访问:对于不可达节点上的所有 Pod,触发
[API-发起的驱逐](/zh/docs/concepts/scheduling-eviction/api-eviction/)。
默认情况下,节点控制器在将节点标记为 `ConditionUnknown` 后等待 5 分钟,才提交第一个驱逐请求。
节点控制器每隔 `--node-monitor-period` 秒检查每个节点的状态。
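The timing mentioned here comes from kube-controller-manager flags; the sketch below lists the documented defaults and shows one way to observe the effect on an unreachable node (the taint name is the standard `node.kubernetes.io/unreachable`):

```shell
# Node controller timing (kube-controller-manager flags, defaults shown as a reference):
#   --node-monitor-period=5s          how often the node controller checks node state
#   --node-monitor-grace-period=40s   grace period before a node is reported as Unknown
#
# An unreachable node is also tainted by the node controller; you can see this with:
kubectl describe node $NODENAME | grep -i -A 3 taints
```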
<!--
#### Heartbeats
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
There are two forms of heartbeats: updates of `NodeStatus` and the
[Lease object](/docs/reference/generated/kubernetes-api/{{< latest-version >}}/#lease-v1-coordination-k8s-io).
Each Node has an associated Lease object in the `kube-node-lease`
{{< glossary_tooltip term_id="namespace" text="namespace">}}.
Lease is a lightweight resource, which improves the performance
of the node heartbeats as the cluster scales.
-->
#### 心跳机制 {#heartbeats}
Kubernetes 节点发送的心跳(Heartbeats)有助于确定节点的可用性。
心跳有两种形式:`NodeStatus` 和 [`Lease` 对象](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#lease-v1-coordination-k8s-io)。
每个节点在 `kube-node-lease`{{< glossary_tooltip term_id="namespace" text="名字空间">}}
中都有一个与之关联的 `Lease` 对象。
`Lease` 是一种轻量级的资源,可在集群规模扩大时提高节点心跳机制的性能。
<!--
The kubelet is responsible for creating and updating the `NodeStatus` and
a Lease object.
-->
`kubelet` 负责创建和更新 `NodeStatus``Lease` 对象。
<!--
- The kubelet updates the `NodeStatus` either when there is change in status,
or if there has been no update for a configured interval. The default interval
for `NodeStatus` updates is 5 minutes (much longer than the 40 second default
timeout for unreachable nodes).
- The kubelet creates and then updates its Lease object every 10 seconds
(the default update interval). Lease updates occur independently from the
`NodeStatus` updates. If the Lease update fails, the kubelet retries with
exponential backoff starting at 200 milliseconds and capped at 7 seconds.
-->
- 当状态发生变化时或者在配置的时间间隔内没有更新事件时kubelet 会更新 `NodeStatus`
`NodeStatus` 更新的默认间隔为 5 分钟(比不可达节点的 40 秒默认超时时间长很多)。
- `kubelet` 会每 10 秒(默认更新间隔时间)创建并更新其 `Lease` 对象。
`Lease` 更新独立于 `NodeStatus` 更新而发生。
如果 `Lease` 的更新操作失败,`kubelet` 会采用指数回退机制,从 200 毫秒开始
重试,最长重试间隔为 7 秒钟。
<!--
#### Reliability
### Rate limits on eviction
In most cases, the node controller limits the eviction rate to
`-node-eviction-rate` (default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
-->
#### 可靠性 {#reliability}
### 逐出速率限制 {#rate-limits-on-eviction}
大部分情况下,节点控制器把逐出速率限制在每秒 `--node-eviction-rate` 个(默认为 0.1)。
这表示它每 10 秒钟内至多从一个节点驱逐 Pod。
@ -503,25 +540,28 @@ from more than 1 node per 10 seconds.
<!--
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
the same time. If the fraction of unhealthy nodes is at least
`-unhealthy-zone-threshold` (default 0.55) then the eviction rate is reduced:
if the cluster is small (i.e. has less than or equal to
`-large-cluster-size-threshold` nodes - default 50) then evictions are
stopped, otherwise the eviction rate is reduced to
`-secondary-node-eviction-rate` (default 0.01) per second.
are unhealthy (NodeReady condition is `ConditionUnknown` or `ConditionFalse`) at
the same time:
- If the fraction of unhealthy nodes is at least `--unhealthy-zone-threshold`
(default 0.55), then the eviction rate is reduced.
- If the cluster is small (i.e. has less than or equal to
`--large-cluster-size-threshold` nodes - default 50), then evictions are stopped.
- Otherwise, the eviction rate is reduced to `--secondary-node-eviction-rate`
(default 0.01) per second.
The reason these policies are implemented per availability zone is because one availability zone
might become partitioned from the master while the others remain connected. If
your cluster does not span multiple cloud provider availability zones, then
there is only one availability zone (the whole cluster).
The reason these policies are implemented per availability zone is because one
availability zone might become partitioned from the master while the others remain
connected. If your cluster does not span multiple cloud provider availability zones,
then the eviction mechanism does not take per-zone unavailability into account.
-->
当一个可用区域(Availability Zone)中的节点变为不健康时,节点的驱逐行为将发生改变。
节点控制器会同时检查可用区域中不健康NodeReady 状况为 Unknown 或 False
的节点的百分比。如果不健康节点的比例超过 `--unhealthy-zone-threshold` (默认为 0.55
驱逐速率将会降低:如果集群较小(意即小于等于 `--large-cluster-size-threshold`
个节点 - 默认为 50驱逐操作将会停止否则驱逐速率将降为每秒
`--secondary-node-eviction-rate` 个(默认为 0.01)。
节点控制器会同时检查可用区域中不健康(NodeReady 状况为 `ConditionUnknown` 或 `ConditionFalse`)
的节点的百分比:
- 如果不健康节点的比例超过 `--unhealthy-zone-threshold` (默认为 0.55
驱逐速率将会降低。
- 如果集群较小(意即小于等于 `--large-cluster-size-threshold`
个节点 - 默认为 50驱逐操作将会停止。
- 否则驱逐速率将降为每秒 `--secondary-node-eviction-rate` 个(默认为 0.01)。
在单个可用区域实施这些策略的原因是,当一个可用区域从控制面脱离时,
其它可用区域可能仍然保持连接。
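For reference, the eviction-rate behaviour above is controlled by kube-controller-manager flags; a hedged sketch (defaults shown; the manifest path assumes a kubeadm-style setup):

```shell
# Flags governing the eviction rate (defaults shown):
#   --node-eviction-rate=0.1              normal rate, in nodes per second
#   --secondary-node-eviction-rate=0.01   reduced rate for a largely-unhealthy zone
#   --unhealthy-zone-threshold=0.55       fraction of unhealthy nodes that marks a zone unhealthy
#   --large-cluster-size-threshold=50     at or below this size, evictions stop instead
#
# One way to check what is actually configured:
grep -E 'eviction-rate|zone-threshold|cluster-size-threshold' /etc/kubernetes/manifests/kube-controller-manager.yaml
```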
@ -532,17 +572,19 @@ A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy then node controller evicts at
the normal rate `-node-eviction-rate`. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such
case, the node controller assumes that there's some problem with master
connectivity and stops all evictions until some connectivity is restored.
completely unhealthy (none of the nodes in the cluster are healthy). In such a
case, the node controller assumes that there is some problem with connectivity
between the control plane and the nodes, and doesn't perform any evictions.
(If there has been an outage and some nodes reappear, the node controller does
evict pods from the remaining nodes that are unhealthy or unreachable).
-->
跨多个可用区域部署你的节点的一个关键原因是当某个可用区域整体出现故障时,
工作负载可以转移到健康的可用区域。
因此,如果一个可用区域中的所有节点都不健康时,节点控制器会以正常的速率
`--node-eviction-rate` 进行驱逐操作。
在所有的可用区域都不健康(也即集群中没有健康节点)的极端情况下,
节点控制器将假设控制面节点的连接出了某些问题,
它将停止所有驱逐动作直到一些连接恢复
节点控制器将假设控制面与节点之间的连接出了某些问题,它将停止所有驱逐动作(如果故障后部分节点重新连接,
节点控制器会从剩下的不健康或者不可达节点中驱逐 Pod)。
<!--
The Node Controller is also responsible for evicting pods running on nodes with
@ -558,7 +600,7 @@ that the scheduler won't place Pods onto unhealthy nodes.
这意味着调度器不会将 Pod 调度到不健康的节点上。
<!--
### Node capacity
## Resource capacity tracking {#node-capacity}
Node objects track information about the Node's resource capacity (for example: the amount
of memory available, and the number of CPUs).
@ -566,7 +608,7 @@ Nodes that [self register](#self-registration-of-nodes) report their capacity du
registration. If you [manually](#manual-node-administration) add a Node, then
you need to set the node's capacity information when you add it.
-->
### 节点容量 {#node-capacity}
## 资源容量跟踪 {#node-capacity}
Node 对象会跟踪节点上资源的容量(例如可用内存和 CPU 数量)。
通过[自注册](#self-registration-of-nodes)机制生成的 Node 对象会在注册期间报告自身容量。
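To see what a node reports, a minimal sketch (capacity is the total; allocatable is what remains for Pods after reservations):

```shell
# Total resources reported at registration
kubectl get node $NODENAME -o jsonpath='{.status.capacity}'

# Resources actually available for scheduling Pods
kubectl get node $NODENAME -o jsonpath='{.status.allocatable}'
```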
@ -701,6 +743,117 @@ reserved for terminating [critical pods](/docs/tasks/administer-cluster/guarante
而保留最后 10 秒用于终止
[关键 Pod](/zh/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical)。
<!--
When pods were evicted during the graceful node shutdown, they are marked as failed.
Running `kubectl get pods` shows the status of the evicted pods as `Shutdown`.
And `kubectl describe pod` indicates that the pod was evicted because of node shutdown:
```
Status: Failed
Reason: Shutdown
Message: Node is shutting, evicting pods
```
Failed pod objects will be preserved until explicitly deleted or [cleaned up by the GC](/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection).
This is a change of behavior compared to abrupt node termination.
-->
{{< note >}}
当 Pod 在节点体面关闭期间被驱逐时,它们会被标记为 `Failed`。
运行 `kubectl get pods` 时,被驱逐的 Pod 的状态显示为 `Shutdown`。
而 `kubectl describe pod` 则表明该 Pod 因节点关闭而被驱逐:
```
Status: Failed
Reason: Shutdown
Message: Node is shutting, evicting pods
```
`Failed` 的 pod 对象将被保留,直到被明确删除或
[由 GC 清理](/zh/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection)。
与突然的节点终止相比,这是一种行为变化。
{{< /note >}}
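One way to find and clean up Pods left in this state (a sketch; the field selector simply filters on the `Failed` phase, and `$NAMESPACE` is a placeholder):

```shell
# List Failed pods (for example, ones shown as Shutdown) in all namespaces
kubectl get pods --all-namespaces --field-selector status.phase=Failed

# Delete them explicitly in a given namespace, if you do not want to wait for GC
kubectl delete pods --field-selector status.phase=Failed -n $NAMESPACE
```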
<!--
## Swap memory management {#swap-memory}
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
Prior to Kubernetes 1.22, nodes did not support the use of swap memory, and a
kubelet would by default fail to start if swap was detected on a node. From 1.22
onwards, swap memory support can be enabled on a per-node basis.
To enable swap on a node, the `NodeSwap` feature gate must be enabled on
the kubelet, and the `--fail-swap-on` command line flag or `failSwapOn`
[configuration setting](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
must be set to false.
A user can also optionally configure `memorySwap.swapBehavior` in order to
specify how a node will use swap memory. For example,
-->
## 交换内存管理 {#swap-memory}
{{< feature-state state="alpha" for_k8s_version="v1.22" >}}
在 Kubernetes 1.22 之前,节点不支持使用交换内存;默认情况下,
如果在节点上检测到交换内存配置,kubelet 将无法启动。从 1.22 开始,
可以基于每个节点启用交换内存支持。
要在节点上启用交换内存,必须启用 kubelet 的 `NodeSwap` 特性门控,
同时将 `--fail-swap-on` 命令行参数或者 `failSwapOn`
[配置](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
设置为 false。
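As a hedged sketch (the flags are the documented ones for this alpha feature; in practice they usually live in the kubelet config file or a systemd drop-in rather than on the command line), enabling swap support might look like:

```shell
# Start the kubelet with the NodeSwap feature gate and swap allowed
# (all other required kubelet flags omitted for brevity).
kubelet \
  --feature-gates=NodeSwap=true \
  --fail-swap-on=false
```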
用户还可以选择配置 `memorySwap.swapBehavior` 以指定节点使用交换内存的方式。例如:
```yaml
memorySwap:
swapBehavior: LimitedSwap
```
<!--
The available configuration options for `swapBehavior` are:
- `LimitedSwap`: Kubernetes workloads are limited in how much swap they can
use. Workloads on the node not managed by Kubernetes can still swap.
- `UnlimitedSwap`: Kubernetes workloads can use as much swap memory as they
request, up to the system limit.
If configuration for `memorySwap` is not specified and the feature gate is
enabled, by default the kubelet will apply the same behaviour as the
`LimitedSwap` setting.
The behaviour of the `LimitedSwap` setting depends on whether the node is running
with v1 or v2 of control groups (also known as "cgroups"):
- **cgroupsv1:** Kubernetes workloads can use any combination of memory and
swap, up to the pod's memory limit, if set.
- **cgroupsv2:** Kubernetes workloads cannot use swap memory.
For more information, and to assist with testing and provide feedback, please
see [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) and its
[design proposal](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md).
-->
已有的 `swapBehavior` 的配置选项有:
- `LimitedSwap`:Kubernetes 工作负载能够使用的交换内存量受到限制。
节点上不受 Kubernetes 管理的工作负载仍然可以使用交换内存。
- `UnlimitedSwap`:Kubernetes 工作负载可以按需使用交换内存,最多可达系统上限。
如果启用了特性门控但是未指定 `memorySwap` 的配置,默认情况下 kubelet 将使用
`LimitedSwap` 设置。
`LimitedSwap` 设置的行为还取决于节点运行的是 v1 还是 v2 的控制组(也就是 `cgroups`
- **cgroupsv1:** Kubernetes 工作负载可以使用内存和交换空间的任意组合,
上限为 Pod 的内存限制(如果设置了的话)。
- **cgroupsv2:** Kubernetes 工作负载不能使用交换内存。
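To check which cgroup version a node runs, and therefore which of the behaviours above applies, one common check is:

```shell
# Prints "cgroup2fs" on cgroup v2 hosts; cgroup v1 hosts report "tmpfs"
stat -fc %T /sys/fs/cgroup/
```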
如需更多信息以及协助测试和提供反馈,请
参见 [KEP-2400](https://github.com/kubernetes/enhancements/issues/2400) 及其
[设计方案](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md)。
## {{% heading "whatsnext" %}}
<!--

View File

@ -1,179 +0,0 @@
---
title: 容器镜像的垃圾收集
content_type: concept
weight: 70
---
<!--
title: Garbage collection for container images
content_type: concept
weight: 70
-->
<!-- overview -->
<!--
Garbage collection is a helpful function of kubelet that will clean up unused [images](/docs/concepts/containers/#container-images) and unused [containers](/docs/concepts/containers/). Kubelet will perform garbage collection for containers every minute and garbage collection for images every five minutes.
External garbage collection tools are not recommended as these tools can potentially break the behavior of kubelet by removing containers expected to exist.
-->
垃圾回收是 kubelet 的一个有用功能,它将清理未使用的[镜像](/zh/docs/concepts/containers/#container-images)和[容器](/zh/docs/concepts/containers/)。
Kubelet 将每分钟对容器执行一次垃圾回收,每五分钟对镜像执行一次垃圾回收。
不建议使用外部垃圾收集工具,因为这些工具可能会删除原本期望存在的容器进而破坏 kubelet 的行为。
<!-- body -->
<!--
## Image Collection
Kubernetes manages lifecycle of all images through imageManager, with the cooperation
of cadvisor.
The policy for garbage collecting images takes two factors into consideration:
`HighThresholdPercent` and `LowThresholdPercent`. Disk usage above the high threshold
will trigger garbage collection. The garbage collection will delete least recently used
images until the low threshold has been met.
-->
## 镜像回收 {#image-collection}
Kubernetes 借助于 cadvisor 通过 imageManager 来管理所有镜像的生命周期。
镜像垃圾回收策略只考虑两个因素:`HighThresholdPercent` 和 `LowThresholdPercent`
磁盘使用率超过上限阈值(HighThresholdPercent)将触发垃圾回收。
垃圾回收将删除最近最少使用的镜像,直到磁盘使用率满足下限阈值(LowThresholdPercent)。
<!--
## Container Collection
The policy for garbage collecting containers considers three user-defined variables. `MinAge` is the minimum age at which a container can be garbage collected. `MaxPerPodContainer` is the maximum number of dead containers every single
pod (UID, container name) pair is allowed to have. `MaxContainers` is the maximum number of total dead containers. These variables can be individually disabled by setting `MinAge` to zero and setting `MaxPerPodContainer` and `MaxContainers` respectively to less than zero.
-->
## 容器回收 {#container-collection}
容器垃圾回收策略考虑三个用户定义变量。
`MinAge` 是容器可以被执行垃圾回收的最小生命周期。
`MaxPerPodContainer` 是每个 pod 内允许存在的死亡容器的最大数量。
`MaxContainers` 是全部死亡容器的最大数量。
可以分别独立地通过将 `MinAge` 设置为 0以及将 `MaxPerPodContainer``MaxContainers`
设置为小于 0 来禁用这些变量。
<!--
Kubelet will act on containers that are unidentified, deleted, or outside of the boundaries set by the previously mentioned flags. The oldest containers will generally be removed first. `MaxPerPodContainer` and `MaxContainer` may potentially conflict with each other in situations where retaining the maximum number of containers per pod (`MaxPerPodContainer`) would go outside the allowable range of global dead containers (`MaxContainers`). `MaxPerPodContainer` would be adjusted in this situation: A worst case scenario would be to downgrade `MaxPerPodContainer` to 1 and evict the oldest containers. Additionally, containers owned by pods that have been deleted are removed once they are older than `MinAge`.
-->
`kubelet` 将处理无法辨识的、已删除的以及超出前面提到的参数所设置范围的容器。
最老的容器通常会先被移除。
`MaxPerPodContainer``MaxContainer` 在某些场景下可能会存在冲突,
例如在保证每个 pod 内死亡容器的最大数量(`MaxPerPodContainer`)的条件下可能会超过
允许存在的全部死亡容器的最大数量(`MaxContainer`)。
`MaxPerPodContainer` 在这种情况下会被进行调整:
最坏的情况是将 `MaxPerPodContainer` 降级为 1并驱逐最老的容器。
此外pod 内已经被删除的容器一旦年龄超过 `MinAge` 就会被清理。
<!--
Containers that are not managed by kubelet are not subject to container garbage collection.
-->
不被 kubelet 管理的容器不受容器垃圾回收的约束。
<!--
## User Configuration
Users can adjust the following thresholds to tune image garbage collection with the following kubelet flags :
-->
## 用户配置 {#user-configuration}
用户可以使用以下 kubelet 参数调整相关阈值来优化镜像垃圾回收:
<!--
1. `image-gc-high-threshold`, the percent of disk usage which triggers image garbage collection.
Default is 85%.
2. `image-gc-low-threshold`, the percent of disk usage to which image garbage collection attempts
to free. Default is 80%.
-->
1. `image-gc-high-threshold`,触发镜像垃圾回收的磁盘使用率百分比。默认值为 85%。
2. `image-gc-low-threshold`,镜像垃圾回收试图释放资源后达到的磁盘使用率百分比。默认值为 80%。
<!--
We also allow users to customize garbage collection policy through the following kubelet flags:
-->
我们还允许用户通过以下 kubelet 参数自定义垃圾收集策略:
<!--
1. `minimum-container-ttl-duration`, minimum age for a finished container before it is
garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
2. `maximum-dead-containers-per-container`, maximum number of old instances to be retained
per container. Default is 1.
3. `maximum-dead-containers`, maximum number of old instances of containers to retain globally.
Default is -1, which means there is no global limit.
-->
1. `minimum-container-ttl-duration`,完成的容器在被垃圾回收之前的最小年龄,默认是 0 分钟。
这意味着每个完成的容器都会被执行垃圾回收。
2. `maximum-dead-containers-per-container`,每个容器要保留的旧实例的最大数量。默认值为 1。
3. `maximum-dead-containers`,要全局保留的旧容器实例的最大数量。
默认值是 -1意味着没有全局限制。
<!--
Containers can potentially be garbage collected before their usefulness has expired. These containers
can contain logs and other data that can be useful for troubleshooting. A sufficiently large value for
`maximum-dead-containers-per-container` is highly recommended to allow at least 1 dead container to be
retained per expected container. A larger value for `maximum-dead-containers` is also recommended for a
similar reason.
See [this issue](https://github.com/kubernetes/kubernetes/issues/13287) for more details.
-->
容器可能会在其效用过期之前被垃圾回收。这些容器可能包含日志和其他对故障诊断有用的数据。
强烈建议为 `maximum-dead-containers-per-container` 设置一个足够大的值,以便每个预期容器至少保留一个死亡容器。
由于同样的原因,`maximum-dead-containers` 也建议使用一个足够大的值。
查阅[这个 Issue](https://github.com/kubernetes/kubernetes/issues/13287) 获取更多细节。
<!--
## Deprecation
Some kubelet Garbage Collection features in this doc will be replaced by kubelet eviction in the future.
Including:
-->
## 弃用 {#deprecation}
这篇文档中的一些 kubelet 垃圾收集(Garbage Collection)功能将在未来被 kubelet 驱逐回收(eviction)所替代。
包括:
<!--
| Existing Flag | New Flag | Rationale |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard` or `--eviction-soft` | existing eviction signals can trigger image garbage collection |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | eviction reclaims achieve the same behavior |
| `--maximum-dead-containers` | | deprecated once old logs are stored outside of container's context |
| `--maximum-dead-containers-per-container` | | deprecated once old logs are stored outside of container's context |
| `--minimum-container-ttl-duration` | | deprecated once old logs are stored outside of container's context |
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | eviction generalizes disk thresholds to other resources |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | eviction generalizes disk pressure transition to other resources |
-->
| 现存参数 | 新参数 | 解释 |
| ------------- | -------- | --------- |
| `--image-gc-high-threshold` | `--eviction-hard``--eviction-soft` | 现存的驱逐回收信号可以触发镜像垃圾回收 |
| `--image-gc-low-threshold` | `--eviction-minimum-reclaim` | 驱逐回收实现相同行为 |
| `--maximum-dead-containers` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--maximum-dead-containers-per-container` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--minimum-container-ttl-duration` | | 一旦旧日志存储在容器上下文之外,就会被弃用 |
| `--low-diskspace-threshold-mb` | `--eviction-hard` or `eviction-soft` | 驱逐回收将磁盘阈值泛化到其他资源 |
| `--outofdisk-transition-frequency` | `--eviction-pressure-transition-period` | 驱逐回收将磁盘压力转换到其他资源 |
## {{% heading "whatsnext" %}}
<!--
See [Configuring Out Of Resource Handling](/docs/tasks/administer-cluster/out-of-resource/) for more details.
-->
查阅[配置资源不足情况的处理](/zh/docs/tasks/administer-cluster/out-of-resource/)了解更多细节。

View File

@ -151,21 +151,25 @@ Kubernetes 并不负责轮转日志,而是通过部署工具建立一个解决
<!--
As an example, you can find detailed information about how `kube-up.sh` sets
up logging for COS image on GCP in the corresponding
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh).
[`configure-helper` script](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh).
-->
例如,你可以找到关于 `kube-up.sh` 为 GCP 环境的 COS 镜像设置日志的详细信息,
脚本为
[`configure-helper` 脚本](https://github.com/kubernetes/kubernetes/blob/{{< param "githubbranch" >}}/cluster/gce/gci/configure-helper.sh)。
[`configure-helper` 脚本](https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/gci/configure-helper.sh)。
<!--
When using a **CRI container runtime**, the kubelet is responsible for rotating the logs and managing the logging directory structure. The kubelet
sends this information to the CRI container runtime and the runtime writes the container logs to the given location. The two kubelet flags `container-log-max-size` and `container-log-max-files` can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
sends this information to the CRI container runtime and the runtime writes the container logs to the given location.
The two kubelet parameters [`containerLogMaxSize` and `containerLogMaxFiles`](/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
in [kubelet config file](/docs/tasks/administer-cluster/kubelet-config-file/)
can be used to configure the maximum size for each log file and the maximum number of files allowed for each container respectively.
-->
当使用某 *CRI 容器运行时*kubelet 要负责对日志进行轮换,并
管理日志目录的结构。kubelet 将此信息发送给 CRI 容器运行时,后者
将容器日志写入到指定的位置。kubelet 标志 `container-log-max-size`
`container-log-max-files` 可以用来配置每个日志文件的最大长度
和每个容器可以生成的日志文件个数上限。
将容器日志写入到指定的位置。在 [kubelet 配置文件](/zh/docs/tasks/administer-cluster/kubelet-config-file/)
中的两个 kubelet 参数
[`containerLogMaxSize` 和 `containerLogMaxFiles`](/zh/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)
可以用来配置每个日志文件的最大长度和每个容器可以生成的日志文件个数上限。
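A hedged sketch of how those two parameters might be set (the config file path is the common kubeadm default and is an assumption; values shown are the documented defaults):

```shell
# Add log-rotation settings to the KubeletConfiguration file, then restart the kubelet.
# Shown as an append for brevity; merge into the existing file in practice.
cat <<'EOF' >> /var/lib/kubelet/config.yaml
containerLogMaxSize: "10Mi"
containerLogMaxFiles: 5
EOF
```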
<!--
When you run [`kubectl logs`](/docs/reference/generated/kubectl/kubectl-commands#logs) as in

View File

@ -85,7 +85,7 @@ A URL can also be specified as a configuration source, which is handy for deploy
还可以使用 URL 作为配置源,便于直接使用已经提交到 Github 上的配置文件进行部署:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/master/content/zh/examples/application/nginx/nginx-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/zh/examples/application/nginx/nginx-deployment.yaml
```
```
@ -252,10 +252,10 @@ The examples we've used so far apply at most a single label to any resource. The
在许多情况下,应使用多个标签来区分集合。
<!--
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
For instance, different applications would use different values for the `app` label, but a multi-tier application, such as the [guestbook example](https://github.com/kubernetes/examples/tree/master/guestbook/), would additionally need to distinguish each tier. The frontend could carry the following labels:
-->
例如,不同的应用可能会为 `app` 标签设置不同的值。
但是,类似 [guestbook 示例](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/guestbook/)
但是,类似 [guestbook 示例](https://github.com/kubernetes/examples/tree/master/guestbook/)
这样的多层应用,还需要区分每一层。前端可以带以下标签:
```yaml

View File

@ -1,7 +1,9 @@
---
title: 云原生安全概述
description: >
在云原生安全的背景下思考 Kubernetes 安全模型。
content_type: concept
weight: 10
weight: 1
---
<!-- overview -->
@ -88,6 +90,7 @@ Amazon Web Services | https://aws.amazon.com/security/ |
Google Cloud Platform | https://cloud.google.com/security/ |
IBM Cloud | https://www.ibm.com/cloud/security |
Microsoft Azure | https://docs.microsoft.com/en-us/azure/security/azure-security |
Oracle Cloud Infrastructure | https://www.oracle.com/security/ |
VMWare VSphere | https://www.vmware.com/security/hardening-guides.html |
{{< /table >}}

View File

@ -1,324 +0,0 @@
---
title: 垃圾收集
content_type: concept
weight: 60
---
<!--
title: Garbage Collection
content_type: concept
weight: 60
-->
<!-- overview -->
<!--
The role of the Kubernetes garbage collector is to delete certain objects
that once had an owner, but no longer have an owner.
-->
Kubernetes 垃圾收集器的作用是删除某些曾经拥有属主(Owner)、但现在不再拥有属主的对象。
<!-- body -->
<!--
## Owners and dependents
Some Kubernetes objects are owners of other objects. For example, a ReplicaSet
is the owner of a set of Pods. The owned objects are called *dependents* of the
owner object. Every dependent object has a `metadata.ownerReferences` field that
points to the owning object.
Sometimes, Kubernetes sets the value of `ownerReference` automatically. For
example, when you create a ReplicaSet, Kubernetes automatically sets the
`ownerReference` field of each Pod in the ReplicaSet. In 1.8, Kubernetes
automatically sets the value of `ownerReference` for objects created or adopted
by ReplicationController, ReplicaSet, StatefulSet, DaemonSet, Deployment, Job
and CronJob.
-->
## 属主和附属 {#owners-and-dependents}
某些 Kubernetes 对象是其它一些对象的属主。
例如,一个 ReplicaSet 是一组 Pod 的属主。
具有属主的对象被称为是属主的 *附属*
每个附属对象具有一个指向其所属对象的 `metadata.ownerReferences` 字段。
有时Kubernetes 会自动设置 `ownerReference` 的值。
例如,当创建一个 ReplicaSet 时Kubernetes 自动设置 ReplicaSet 中每个 Pod 的 `ownerReference` 字段值。
在 Kubernetes 1.8 版本中,Kubernetes 会自动为某些对象设置 `ownerReference` 的值。
这些对象是由 ReplicationController、ReplicaSet、StatefulSet、DaemonSet、Deployment、
Job 和 CronJob 所创建或管理的。
<!--
You can also specify relationships between owners and dependents by manually
setting the `ownerReference` field.
Here's a configuration file for a ReplicaSet that has three Pods:
-->
你也可以通过手动设置 `ownerReference` 的值,来指定属主和附属之间的关系。
下面的配置文件中包含一个具有 3 个 Pod 的 ReplicaSet
{{< codenew file="controllers/replicaset.yaml" >}}
<!--
If you create the ReplicaSet and then view the Pod metadata, you can see
OwnerReferences field:
-->
如果你创建该 ReplicaSet然后查看 Pod 的 metadata 字段,能够看到 OwnerReferences 字段:
```shell
kubectl apply -f https://k8s.io/examples/controllers/replicaset.yaml
kubectl get pods --output=yaml
```
<!--
The output shows that the Pod owner is a ReplicaSet named `my-repset`:
-->
输出显示了 Pod 的属主是名为 my-repset 的 ReplicaSet
```yaml
apiVersion: v1
kind: Pod
metadata:
...
ownerReferences:
- apiVersion: apps/v1
controller: true
blockOwnerDeletion: true
kind: ReplicaSet
name: my-repset
uid: d9607e19-f88f-11e6-a518-42010a800195
...
```
<!--
Cross-namespace owner references are disallowed by design.
Namespaced dependents can specify cluster-scoped or namespaced owners.
A namespaced owner **must** exist in the same namespace as the dependent.
If it does not, the owner reference is treated as absent, and the dependent
is subject to deletion once all owners are verified absent.
-->
{{< note >}}
根据设计,Kubernetes 不允许跨名字空间指定属主。
名字空间范围的附属可以指定集群范围的或者名字空间范围的属主。
名字空间范围的属主**必须**和该附属处于相同的名字空间。
如果名字空间范围的属主和附属不在相同的名字空间,那么该属主引用就会被认为是缺失的,
并且当附属的所有属主引用都被确认不再存在之后,该附属就会被删除。
<!--
Cluster-scoped dependents can only specify cluster-scoped owners.
In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner,
it is treated as having an unresolveable owner reference, and is not able to be garbage collected.
-->
集群范围的附属只能指定集群范围的属主。
在 v1.20+ 版本,如果一个集群范围的附属指定了一个名字空间范围类型的属主,
那么该附属就会被认为是拥有一个不可解析的属主引用,并且它不能够被垃圾回收。
<!--
In v1.20+, if the garbage collector detects an invalid cross-namespace `ownerReference`,
or a cluster-scoped dependent with an `ownerReference` referencing a namespaced kind, a warning Event
with a reason of `OwnerRefInvalidNamespace` and an `involvedObject` of the invalid dependent is reported.
You can check for that kind of Event by running
`kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace`.
-->
在 v1.20+ 版本,如果垃圾收集器检测到无效的跨名字空间的属主引用,
或者一个集群范围的附属指定了一个名字空间范围类型的属主,
那么它就会报告一个警告事件。该事件的原因是 `OwnerRefInvalidNamespace`
`involvedObject` 属性中包含无效的附属。你可以通过以下命令来获取该类型的事件:
```shell
kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace
```
{{< /note >}}
<!--
## Controlling how the garbage collector deletes dependents
When you delete an object, you can specify whether the object's dependents are
also deleted automatically. Deleting dependents automatically is called *cascading
deletion*. There are two modes of *cascading deletion*: *background* and *foreground*.
If you delete an object without deleting its dependents
automatically, the dependents are said to be *orphaned*.
-->
## 控制垃圾收集器删除附属
当你删除对象时,可以指定该对象的附属是否也自动删除。
自动删除附属的行为也称为 *级联删除(Cascading Deletion)*。
Kubernetes 中有两种 *级联删除* 模式:*后台(Background)* 模式和 *前台(Foreground)* 模式。
如果删除对象时,不自动删除它的附属,这些附属被称作 *孤立对象(Orphaned)*。
<!--
### Foreground cascading deletion
In *foreground cascading deletion*, the root object first
enters a "deletion in progress" state. In the "deletion in progress" state,
the following things are true:
* The object is still visible via the REST API
* The object's `deletionTimestamp` is set
* The object's `metadata.finalizers` contains the value "foregroundDeletion".
-->
### 前台级联删除
*前台级联删除* 模式下,根对象首先进入 `deletion in progress` 状态。
`deletion in progress` 状态,会有如下的情况:
* 对象仍然可以通过 REST API 可见。
* 对象的 `deletionTimestamp` 字段被设置。
* 对象的 `metadata.finalizers` 字段包含值 `foregroundDeletion`
<!--
Once the "deletion in progress" state is set, the garbage
collector deletes the object's dependents. Once the garbage collector has deleted all
"blocking" dependents (objects with `ownerReference.blockOwnerDeletion=true`), it deletes
the owner object.
-->
一旦对象被设置为 `deletion in progress` 状态,垃圾收集器会删除对象的所有附属。
垃圾收集器在删除了所有有阻塞能力的附属(对象的 `ownerReference.blockOwnerDeletion=true`
之后,删除属主对象。
<!--
Note that in the "foregroundDeletion", only dependents with
`ownerReference.blockOwnerDeletion=true` block the deletion of the owner object.
Kubernetes version 1.7 added an [admission controller](/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement) that controls user access to set
`blockOwnerDeletion` to true based on delete permissions on the owner object, so that
unauthorized dependents cannot delay deletion of an owner object.
If an object's `ownerReferences` field is set by a controller (such as Deployment or ReplicaSet),
blockOwnerDeletion is set automatically and you do not need to manually modify this field.
-->
注意,在 `foregroundDeletion` 模式下,只有设置了 `ownerReference.blockOwnerDeletion`
值的附属才能阻止删除属主对象。
在 Kubernetes 1.7 版本增加了
[准入控制器](/zh/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement)
基于属主对象上的删除权限来控制用户设置 `blockOwnerDeletion` 的值为 True
这样未经授权的附属不能够阻止属主对象的删除。
如果一个对象的 `ownerReferences` 字段被一个控制器(例如 Deployment 或 ReplicaSet设置
`blockOwnerDeletion` 也会被自动设置,你不需要手动修改这个字段。
<!--
### Background cascading deletion
In *background cascading deletion*, Kubernetes deletes the owner object
immediately and the garbage collector then deletes the dependents in
the background.
-->
### 后台级联删除
*后台级联删除* 模式下Kubernetes 会立即删除属主对象,之后垃圾收集器
会在后台删除其附属对象。
<!--
### Setting the cascading deletion policy
To control the cascading deletion policy, set the `propagationPolicy`
field on the `deleteOptions` argument when deleting an Object. Possible values include "Orphan",
"Foreground", or "Background".
-->
### 设置级联删除策略
通过为属主对象设置 `deleteOptions.propagationPolicy` 字段,可以控制级联删除策略。
可能的取值包括:`Orphan`、`Foreground` 或者 `Background`
<!--
Here's an example that deletes dependents in background:
-->
下面是一个在后台删除附属对象的示例:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
-H "Content-Type: application/json"
```
<!--
Here's an example that deletes dependents in foreground:
-->
下面是一个在前台中删除附属对象的示例:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
```
<!--
Here's an example that orphans dependents:
-->
下面是一个令附属成为孤立对象的示例:
```shell
kubectl proxy --port=8080
curl -X DELETE localhost:8080/apis/apps/v1/namespaces/default/replicasets/my-repset \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
-H "Content-Type: application/json"
```
<!--
kubectl also supports cascading deletion.
To delete dependents in the foreground using kubectl, set `--cascade=foreground`. To
orphan dependents, set `--cascade=orphan`.
The default behavior is to delete the dependents in the background which is the
behavior when `--cascade` is omitted or explicitly set to `background`.
Here's an example that orphans the dependents of a ReplicaSet
-->
`kubectl` 命令也支持级联删除。
通过设置 `--cascade=foreground`,可以使用 kubectl 在前台删除附属对象。
设置 `--cascade=orphan`,会使附属对象成为孤立附属对象。
当不指定 `--cascade` 或者明确地指定它的值为 `background` 的时候,
默认的行为是在后台删除附属对象。
下面是一个例子,使一个 ReplicaSet 的附属对象成为孤立附属:
```shell
kubectl delete replicaset my-repset --cascade=orphan
```
<!--
### Additional note on Deployments
Prior to 1.7, When using cascading deletes with Deployments you *must* use `propagationPolicy: Foreground`
to delete not only the ReplicaSets created, but also their Pods. If this type of _propagationPolicy_
is not used, only the ReplicaSets will be deleted, and the Pods will be orphaned.
See [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment-284766613) for more information.
-->
### Deployment 的附加说明
在 1.7 之前的版本中,当在 Deployment 中使用级联删除时,你 *必须*使用
`propagationPolicy:Foreground` 模式以便在删除所创建的 ReplicaSet 的同时,还删除其 Pod。
如果不使用这种类型的 `propagationPolicy`,将只删除 ReplicaSet而 Pod 被孤立。
有关信息请参考 [kubeadm/#149](https://github.com/kubernetes/kubeadm/issues/149#issuecomment-284766613)。
<!--
## Known issues
Tracked at [#26120](https://github.com/kubernetes/kubernetes/issues/26120)
-->
## 已知的问题
跟踪 [#26120](https://github.com/kubernetes/kubernetes/issues/26120)
## {{% heading "whatsnext" %}}
<!--
[Design Doc 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
[Design Doc 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)
-->
* [设计文档 1](https://git.k8s.io/community/contributors/design-proposals/api-machinery/garbage-collection.md)
* [设计文档 2](https://git.k8s.io/community/contributors/design-proposals/api-machinery/synchronous-garbage-collection.md)