[zh] Fix links of /storage/storage-capacity.md
parent cf139fbf95 · commit 8997ec4798
@@ -3,15 +3,30 @@ title: Storage Capacity
content_type: concept
weight: 70
---
<!--
reviewers:
- jsafrane
- saad-ali
- msau42
- xing-yang
- pohly
title: Storage Capacity
content_type: concept
weight: 70
-->

<!-- overview -->
<!--
Storage capacity is limited and may vary depending on the node on
which a pod runs: network-attached storage might not be accessible by
all nodes, or storage is local to a node to begin with.
-->
Storage capacity is limited and may vary depending on the node on which a Pod runs:
network-attached storage might not be accessible by all nodes, or storage may be local to a node to begin with.

{{< feature-state for_k8s_version="v1.24" state="stable" >}}

<!--
This page describes how Kubernetes keeps track of storage capacity and
how the scheduler uses that information to [schedule Pods](/docs/concepts/scheduling-eviction/) onto nodes
that have access to enough storage capacity for the remaining missing
@@ -19,11 +34,6 @@ volumes. Without storage capacity tracking, the scheduler may choose a
node that doesn't have enough capacity to provision a volume and
multiple scheduling retries will be needed.
-->
Storage capacity is limited and may vary depending on the node on which a Pod runs:
network-attached storage might not be accessible by all nodes, or storage may be local to a node to begin with.

{{< feature-state for_k8s_version="v1.24" state="stable" >}}

This page describes how Kubernetes keeps track of storage capacity and how the scheduler uses that information to
[schedule Pods](/zh-cn/docs/concepts/scheduling-eviction/) onto nodes that have access to enough storage capacity for the remaining missing volumes.
Without storage capacity tracking, the scheduler may choose a node that doesn't have enough capacity to provision a volume, and multiple scheduling retries will be needed.
@@ -60,10 +70,10 @@ There are two API extensions for this feature:
## API

There are two API extensions for this feature (a sketch of both objects follows this list):
- [CSIStorageCapacity](/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/) objects: these objects are produced by
- [CSIStorageCapacity](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-storage-capacity-v1/) objects: these objects are produced by
  a CSI driver in the namespace where the driver is installed.
  Each object contains the capacity information for one storage class and defines which nodes have access to that storage.
- [The `CSIDriverSpec.StorageCapacity` field](/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/#CSIDriverSpec):
- [The `CSIDriverSpec.StorageCapacity` field](/zh-cn/docs/reference/kubernetes-api/config-and-storage-resources/csi-driver-v1/#CSIDriverSpec):
  when set to true, the Kubernetes scheduler will consider storage capacity for volumes that use the CSI driver.
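To make the two extensions more concrete, here is a minimal sketch of what such objects can look like. The driver name, namespace, storage class name, topology label, and capacity values are illustrative assumptions, not values taken from this page.

```yaml
# Hypothetical CSIDriver object: storageCapacity: true tells the scheduler to
# consult CSIStorageCapacity objects for volumes provided by this driver.
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: hostpath.csi.k8s.io              # assumed driver name
spec:
  storageCapacity: true
---
# Hypothetical CSIStorageCapacity object, as the driver deployment could publish
# it in its installation namespace: one object per storage class and topology segment.
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity
  namespace: kube-system                 # assumed installation namespace
storageClassName: example-storage-class
nodeTopology:
  matchLabels:
    topology.hostpath.csi/node: node-1   # assumed topology label
capacity: 10Gi
maximumVolumeSize: 10Gi
```

In practice, such CSIStorageCapacity objects are normally created and kept up to date by the CSI driver's own deployment rather than written by hand.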
<!--
@@ -89,18 +99,18 @@ decides where to create the volume, independently of Pods that will
use the volume. The scheduler then schedules Pods onto nodes where the
volume is available after the volume has been created.

For [CSI ephemeral volumes](/docs/concepts/storage/volumes/#csi),
For [CSI ephemeral volumes](/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes),
scheduling always happens without considering storage capacity. This
is based on the assumption that this volume type is only used by
special CSI drivers which are local to a node and do not need
significant resources there.
-->
|
||||
## 调度
|
||||
## 调度 {#scheduling}
|
||||
|
||||
如果有以下情况,存储容量信息将会被 Kubernetes 调度程序使用:
|
||||
- Pod 使用的卷还没有被创建,
|
||||
- 卷使用引用了 CSI 驱动的 {{< glossary_tooltip text="StorageClass" term_id="storage-class" >}},
|
||||
并且使用了 `WaitForFirstConsumer` [卷绑定模式](/zh-cn/docs/concepts/storage/storage-classes/#volume-binding-mode),
|
||||
并且使用了 `WaitForFirstConsumer` [卷绑定模式](/zh-cn/docs/concepts/storage/storage-classes/#volume-binding-mode),
|
||||
- 驱动程序的 `CSIDriver` 对象的 `StorageCapacity` 被设置为 true。
|
||||
|
||||
在这种情况下,调度程序仅考虑将 Pod 调度到有足够存储容量的节点上。这个检测非常简单,
|
||||
|
@@ -109,7 +119,8 @@ significant resources there.
For volumes with `Immediate` volume binding mode, the storage driver decides where to create the volume,
independently of the Pods that will use the volume.
The scheduler then schedules Pods onto nodes where the volume is available after the volume has been created.

For [CSI ephemeral volumes](/zh-cn/docs/concepts/storage/volumes/#csi), scheduling always happens without considering storage capacity.
For [CSI ephemeral volumes](/zh-cn/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes),
scheduling always happens without considering storage capacity.
This is based on the assumption that this volume type is only used by special CSI drivers
which are local to a node and do not need significant resources there.
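As a rough illustration of the scheduling conditions listed earlier in this section, a StorageClass that opts volumes into capacity-aware scheduling might look like the following sketch; the class name and provisioner are assumptions carried over from the earlier example.

```yaml
# Hypothetical StorageClass: WaitForFirstConsumer delays volume creation until
# a Pod using the claim is scheduled, which is when storage capacity is checked.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-storage-class
provisioner: hostpath.csi.k8s.io          # must match the assumed CSIDriver name
volumeBindingMode: WaitForFirstConsumer
```

Combined with `storageCapacity: true` on the driver's `CSIDriver` object, Pods whose claims use this class are only considered for nodes with enough reported capacity.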
<!--
@@ -125,7 +136,7 @@ capacity information, it is possible that the volume cannot really be
created. The node selection is then reset and the Kubernetes scheduler
tries again to find a node for the Pod.
-->
## Rescheduling
## Rescheduling {#rescheduling}

When a node has been selected for a Pod with `WaitForFirstConsumer` volumes, that decision is still tentative.
The next step is that the CSI storage driver is asked to create the volume,
with a hint that the volume is supposed to be available on the selected node.
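To tie the rescheduling flow to concrete objects, a minimal claim-and-Pod sketch is shown below; the names, image, and storage class are illustrative assumptions reused from the earlier sketches.

```yaml
# Hypothetical PVC using the assumed WaitForFirstConsumer class; it stays
# Pending until a Pod that uses it is scheduled.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-storage-class
  resources:
    requests:
      storage: 1Gi
---
# Hypothetical Pod: scheduling it triggers the tentative node selection described
# above, after which the CSI driver is asked to create the volume for that node.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder image (assumed)
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc
```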
@@ -149,7 +160,7 @@ another volume. Manual intervention is necessary to recover from this,
for example by increasing capacity or deleting the volume that was
already created.
-->
## Limitations
## Limitations {#limitations}

Storage capacity tracking increases the chance that scheduling works on the first try,
but it cannot guarantee this, because the scheduler has to decide based on possibly outdated information.
Usually, the same retry mechanism as for scheduling without any storage capacity information handles scheduling failures.