Misc Batch 1 in Issue #26177
[zh] Umbrella issue: pages out of sync in concepts section #26177

Misc Batch 1:

```
content/zh/docs/concepts/configuration/overview.md
content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md
content/zh/docs/concepts/scheduling-eviction/pod-overhead.md
content/zh/docs/concepts/storage/volume-snapshot-classes.md
content/zh/docs/concepts/storage/volumes.md
```

Signed-off-by: ydFu <ader.ydfu@gmail.com>
This commit is contained in:
parent a088d1f27a
commit f26b3322a7
content/zh/docs/concepts/configuration/overview.md
@@ -183,6 +183,15 @@ A desired state of an object is described by a Deployment, and if changes to tha
A Deployment describes the desired state of an object, and if changes to that spec are applied successfully,
the Deployment controller changes the actual state to the desired state at a controlled rate.
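As a quick illustration of the kind of object this refers to, here is a minimal Deployment sketch; the name, labels, and image below are placeholders rather than anything prescribed by this page.

```yaml
# Minimal sketch of a Deployment: the controller drives the actual state
# (3 running Pods) toward this declared desired state at a controlled rate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # hypothetical name, for illustration only
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21   # assumed image tag
```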
<!--
- Use the [Kubernetes common labels](/docs/concepts/overview/working-with-objects/common-labels/) for common use cases. These standardized labels enrich the metadata in a way that allows tools, including `kubectl` and [dashboard](/docs/tasks/access-application-cluster/web-ui-dashboard), to work in an interoperable way.
-->
- For common use cases, use the [Kubernetes common labels](/zh/docs/concepts/overview/working-with-objects/common-labels/).
  These standardized labels enrich the objects' metadata so that tools, including `kubectl` and the
  [Dashboard](/zh/docs/tasks/access-application-cluster/web-ui-dashboard),
  can work in an interoperable way (a sketch follows after this list).
<!--
- You can manipulate labels for debugging. Because Kubernetes controllers (such as ReplicaSet) and Services match to Pods using selector labels, removing the relevant labels from a Pod will stop it from being considered by a controller or from being served traffic by a Service. If you remove the labels of an existing Pod, its controller will create a new Pod to take its place. This is a useful way to debug a previously "live" Pod in a "quarantine" environment. To interactively remove or add labels, use [`kubectl label`](/docs/reference/generated/kubectl/kubectl-commands#label).
-->
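The sketch referenced from the list above shows what the recommended common labels can look like on a Pod, and how a label can be removed for debugging; every concrete value (the Pod name, `my-app`, the image) is a hypothetical placeholder.

```yaml
# Hypothetical Pod metadata using the Kubernetes common labels.
# Controllers and Services select Pods by labels, so removing a label such as
# app.kubernetes.io/name (e.g. `kubectl label pod my-app-pod app.kubernetes.io/name-`)
# takes the Pod out of the matching selectors while you debug it.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod                          # placeholder name
  labels:
    app.kubernetes.io/name: my-app          # name of the application
    app.kubernetes.io/instance: my-app-abcxzy
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/component: server
    app.kubernetes.io/managed-by: kubectl
spec:
  containers:
  - name: my-app
    image: registry.example/my-app:1.0.0    # assumed image
```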
content/zh/docs/concepts/scheduling-eviction/assign-pod-node.md
@@ -500,8 +500,8 @@ as at least one already-running pod that has a label with key "security" and val
on node N if node N has a label with key `topology.kubernetes.io/zone` and some value V
such that there is at least one node in the cluster with key `topology.kubernetes.io/zone` and
value V that is running a pod that has a label with key "security" and value "S1".) The pod anti-affinity
rule says that the pod prefers not to be scheduled onto a node if that node is already running a pod with label
having key "security" and value "S2". (If the `topologyKey` were `topology.kubernetes.io/zone` then
rule says that the pod should not be scheduled onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2". (If the `topologyKey` were `topology.kubernetes.io/zone` then
it would mean that the pod cannot be scheduled onto a node if that node is in the same zone as a pod with
label having key "security" and value "S2".) See the
[design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
@@ -517,8 +517,8 @@ The Pod affinity rule says that, only when the node and at least one already-running Pod with a label having key "s
then the Pod is eligible to run on node N, such that at least one node in the cluster with key
`topology.kubernetes.io/zone` and value V is running a Pod that has a label with key "security" and value "S1".)
The Pod anti-affinity rule says that if the node is already running a Pod with a label having key "security" and value "S2",
then the Pod prefers not to be scheduled onto that node.
The Pod anti-affinity rule says that if the node is in the same zone as a Pod with a label having key "security" and value "S2",
then the Pod should not be scheduled onto that node.
(If the `topologyKey` were `topology.kubernetes.io/zone`, it would mean that the Pod cannot be scheduled
onto a node if that node is in the same zone as a Pod with a label having key "security" and value "S2".)
See the [design doc](https://git.k8s.io/community/contributors/design-proposals/scheduling/podaffinity.md)
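To make the rules described above concrete, here is a rough sketch of the corresponding `affinity` stanza; the Pod name, the image, and the use of `requiredDuringSchedulingIgnoredDuringExecution` for both terms are assumptions for illustration, not the exact manifest shipped with the page.

```yaml
# Sketch of the affinity/anti-affinity rules described above (illustrative only).
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity        # hypothetical name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S1"]
        topologyKey: topology.kubernetes.io/zone   # co-locate by zone with Pods labeled security=S1
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S2"]
        topologyKey: topology.kubernetes.io/zone   # avoid zones that run Pods labeled security=S2
  containers:
  - name: with-pod-affinity
    image: registry.k8s.io/pause:3.8               # assumed placeholder image
```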
content/zh/docs/concepts/scheduling-eviction/pod-overhead.md
@@ -38,10 +38,12 @@ time according to the overhead associated with the Pod's
<!--
When Pod Overhead is enabled, the overhead is considered in addition to the sum of container
resource requests when scheduling a Pod. Similarly, Kubelet will include the Pod overhead when sizing
resource requests when scheduling a Pod. Similarly, the kubelet will include the Pod overhead when sizing
the Pod cgroup, and when carrying out Pod eviction ranking.
-->
When Pod overhead is enabled, the overhead is considered in addition to the sum of container resource requests when scheduling a Pod. Similarly, the kubelet will include the Pod overhead when sizing the Pod cgroup and when performing Pod eviction ranking.

If Pod Overhead is enabled, the Pod overhead is considered in addition to the sum of container resource requests when scheduling a Pod.
Similarly, the kubelet will also take the Pod overhead into account when sizing the Pod cgroups and when performing Pod eviction ranking.
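As a sketch of how that overhead is declared, a RuntimeClass can carry a fixed per-Pod overhead that is added on top of the container requests of any Pod selecting it; the handler name `kata-fc`, the image, and the quantities below are assumed values.

```yaml
# Hypothetical RuntimeClass declaring a fixed per-Pod overhead.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc            # assumed name
handler: kata-fc           # assumed runtime handler
overhead:
  podFixed:
    memory: "120Mi"        # added to the sum of container memory requests
    cpu: "250m"            # added to the sum of container CPU requests
---
# A Pod selecting that RuntimeClass is scheduled (and its cgroup sized)
# using its container requests plus the overhead above.
apiVersion: v1
kind: Pod
metadata:
  name: overhead-demo      # placeholder name
spec:
  runtimeClassName: kata-fc
  containers:
  - name: app
    image: registry.example/app:latest   # assumed image
    resources:
      requests:
        memory: "100Mi"
        cpu: "500m"
```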
<!--
## Enabling Pod Overhead {#set-up}
@@ -301,6 +303,11 @@ from source in the meantime.
## {{% heading "whatsnext" %}}
* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20190226-pod-overhead.md)
<!--
* [RuntimeClass](/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
-->
* [RuntimeClass](/zh/docs/concepts/containers/runtime-class/)
* [PodOverhead Design](https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/688-pod-overhead)
content/zh/docs/concepts/storage/volume-snapshot-classes.md
@@ -95,7 +95,7 @@ used for provisioning VolumeSnapshots. This field must be specified.
<!--
### DeletionPolicy
Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot can either be `Retain` or `Delete`. This field must be specified.
Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot class can either be `Retain` or `Delete`. This field must be specified.
If the deletionPolicy is `Delete`, then the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object. If the deletionPolicy is `Retain`, then both the underlying snapshot and VolumeSnapshotContent remain.
-->
@@ -103,7 +103,7 @@ If the deletionPolicy is `Delete`, then the underlying storage snapshot will be
Volume snapshot classes have a `deletionPolicy` attribute. It lets you configure what happens to a VolumeSnapshotContent
object when the VolumeSnapshot object it is bound to is about to be deleted.
The deletionPolicy of a volume snapshot can be either `Retain` or `Delete`. This policy field must be specified.
The deletionPolicy of a volume snapshot class can be either `Retain` or `Delete`. This policy field must be specified.

If the deletion policy is `Delete`, then the underlying storage snapshot is deleted together with the
VolumeSnapshotContent object. If the deletion policy is `Retain`, then both the underlying snapshot and the VolumeSnapshotContent
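For reference, a minimal sketch of a VolumeSnapshotClass that sets this field might look like the following; the class name and the `hostpath.csi.k8s.io` driver are illustrative assumptions. With `Delete`, removing the bound VolumeSnapshot also removes the storage-side snapshot; with `Retain`, both are kept.

```yaml
# Hypothetical VolumeSnapshotClass: deletionPolicy controls whether the
# underlying snapshot is removed together with the VolumeSnapshotContent.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass        # assumed name
driver: hostpath.csi.k8s.io           # assumed CSI driver
deletionPolicy: Delete                # or Retain to keep the underlying snapshot
```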
content/zh/docs/concepts/storage/volumes.md
@@ -509,6 +509,20 @@ memory limit.
While tmpfs is very fast, be aware that it is different from disk:
tmpfs is cleared on node reboot, and any files you write count against your container's memory consumption, subject to the container's memory limit.
<!--
{{< note >}}
If the `SizeMemoryBackedVolumes` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
you can specify a size for memory backed volumes. If no size is specified, memory
backed volumes are sized to 50% of the memory on a Linux host.
{{< /note>}}
-->
{{< note >}}
When the `SizeMemoryBackedVolumes` [feature gate](/zh/docs/reference/command-line-tools-reference/feature-gates/) is enabled,
you can specify a size for memory backed volumes.
If no size is specified, memory backed volumes are sized to 50% of the memory on a Linux host.
{{< /note>}}
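A minimal sketch of a memory-backed `emptyDir` with an explicit size, assuming the feature gate above is enabled; the Pod name, image, and the 64Mi limit are placeholders.

```yaml
# Hypothetical Pod using a memory-backed emptyDir volume.
# With medium: Memory the volume is a tmpfs; sizeLimit caps how much of the
# container's memory budget the volume may consume.
apiVersion: v1
kind: Pod
metadata:
  name: memory-emptydir-demo         # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/busybox   # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi                # assumed size
```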
<!--
#### emptyDir configuration example
-->