Merge release1.14 into master (#16549)
* initial commit
* promote AWS-NLB Support from alpha to beta (#14451) (#16459) (#16484)
* 1. Sync release-1.15 into master 2. Sync with en version
* 1. Add the lost yaml file.
* Update the cluster administration folder of concepts 1. Sync with 1.14 branch 2. Sync with en version
* Add yaml files which are used
* 1. Sync the configuration folder from 1.14 version 2. Sync the configuration folder files with en version 3. Changed some files structure for easier review 4. Add glossary folder to fix build error
* 1. Sync the storage folder from 1.14 version 2. Sync the storage folder files with 1.16 version en document 3. Changed some files structure for easier review
This commit is contained in:
parent 7b96722a86
commit 9b2cb7a276

@@ -2,10 +2,3 @@
title: "存储"
weight: 70
---

<!--
---
title: "Storage"
weight: 70
---
-->

@@ -0,0 +1,203 @@
---
title: 动态卷供应
content_template: templates/concept
weight: 40
---

{{% capture overview %}}

<!--
Dynamic volume provisioning allows storage volumes to be created on-demand.
Without dynamic provisioning, cluster administrators have to manually make
calls to their cloud or storage provider to create new storage volumes, and
then create [`PersistentVolume` objects](/docs/concepts/storage/persistent-volumes/)
to represent them in Kubernetes. The dynamic provisioning feature eliminates
the need for cluster administrators to pre-provision storage. Instead, it
automatically provisions storage when it is requested by users.
-->
动态卷供应允许按需创建存储卷。
如果没有动态供应,集群管理员必须手动地联系他们的云或存储提供商来创建新的存储卷,
然后在 Kubernetes 集群创建 [`PersistentVolume` 对象](/docs/concepts/storage/persistent-volumes/)来表示这些卷。
动态供应功能消除了集群管理员预先配置存储的需要。相反,它在用户请求时自动供应存储。

{{% /capture %}}

{{% capture body %}}

<!--
## Background
-->
## 背景

<!--
The implementation of dynamic volume provisioning is based on the API object `StorageClass`
from the API group `storage.k8s.io`. A cluster administrator can define as many
`StorageClass` objects as needed, each specifying a *volume plugin* (aka
*provisioner*) that provisions a volume and the set of parameters to pass to
that provisioner when provisioning.
-->
动态卷供应的实现基于 `storage.k8s.io` API 组中的 `StorageClass` API 对象。
集群管理员可以根据需要定义多个 `StorageClass` 对象,每个对象指定一个用于供应卷的*卷插件*(又名 *provisioner*),
以及在供应卷时需要传递给该 provisioner 的参数集。

<!--
A cluster administrator can define and expose multiple flavors of storage (from
the same or different storage systems) within a cluster, each with a custom set
of parameters. This design also ensures that end users don’t have to worry
about the complexity and nuances of how storage is provisioned, but still
have the ability to select from multiple storage options.
-->
集群管理员可以在集群中定义和公开多种存储(来自相同或不同的存储系统),每种都具有自定义参数集。
该设计也确保终端用户不必担心存储供应的复杂性和细微差别,但仍然能够从多个存储选项中进行选择。

<!--
More information on storage classes can be found
[here](/docs/concepts/storage/storage-classes/).
-->
点击[这里](/docs/concepts/storage/storage-classes/)查阅有关存储类的更多信息。

<!--
## Enabling Dynamic Provisioning
-->
## 启用动态卷供应

<!--
To enable dynamic provisioning, a cluster administrator needs to pre-create
one or more StorageClass objects for users.
StorageClass objects define which provisioner should be used and what parameters
should be passed to that provisioner when dynamic provisioning is invoked.
The following manifest creates a storage class "slow" which provisions standard
disk-like persistent disks.
-->
要启用动态供应功能,集群管理员需要为用户预先创建一个或多个 `StorageClass` 对象。
`StorageClass` 对象定义在进行动态卷供应时应使用哪个卷供应商,以及应该将哪些参数传递给该供应商。
以下清单创建了一个存储类 "slow",它提供类似标准磁盘的永久磁盘。

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```

<!--
The following manifest creates a storage class "fast" which provisions
SSD-like persistent disks.
-->
以下清单创建了一个 "fast" 存储类,它提供类似 SSD 的永久磁盘。

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

<!--
## Using Dynamic Provisioning
-->
## 使用动态卷供应

<!--
Users request dynamically provisioned storage by including a storage class in
their `PersistentVolumeClaim`. Before Kubernetes v1.6, this was done via the
`volume.beta.kubernetes.io/storage-class` annotation. However, this annotation
is deprecated since v1.6. Users now can and should instead use the
`storageClassName` field of the `PersistentVolumeClaim` object. The value of
this field must match the name of a `StorageClass` configured by the
administrator (see [below](#enabling-dynamic-provisioning)).
-->
用户通过在 `PersistentVolumeClaim` 中包含存储类来请求动态供应的存储。
在 Kubernetes v1.6 之前,这通过 `volume.beta.kubernetes.io/storage-class` 注解实现。然而,这个注解自 v1.6 起就不被推荐使用了。
用户现在能够而且应该使用 `PersistentVolumeClaim` 对象的 `storageClassName` 字段。
这个字段的值必须能够匹配到集群管理员配置的 `StorageClass` 名称(见[下面](#enabling-dynamic-provisioning))。

<!--
To select the “fast” storage class, for example, a user would create the
following `PersistentVolumeClaim`:
-->
例如,要选择 "fast" 存储类,用户将创建如下的 `PersistentVolumeClaim`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 30Gi
```

<!--
This claim results in an SSD-like Persistent Disk being automatically
provisioned. When the claim is deleted, the volume is destroyed.
-->
该声明会自动供应一块类似 SSD 的永久磁盘。
在删除该声明后,这个卷也会被销毁。

<!--
## Defaulting Behavior
-->
## 默认行为

<!--
Dynamic provisioning can be enabled on a cluster such that all claims are
dynamically provisioned if no storage class is specified. A cluster administrator
can enable this behavior by:
-->
可以在集群上启用动态卷供应,以便在未指定存储类的情况下动态供应所有声明。
集群管理员可以通过以下方式启用此行为:

<!--
- Marking one `StorageClass` object as *default*;
- Making sure that the [`DefaultStorageClass` admission controller](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)
  is enabled on the API server.
-->
- 标记一个 `StorageClass` 为 *默认*;
- 确保 [`DefaultStorageClass` 准入控制器](/docs/reference/access-authn-authz/admission-controllers/#defaultstorageclass)在 API 服务器上被启用。

<!--
An administrator can mark a specific `StorageClass` as default by adding the
`storageclass.kubernetes.io/is-default-class` annotation to it.
When a default `StorageClass` exists in a cluster and a user creates a
`PersistentVolumeClaim` with `storageClassName` unspecified, the
`DefaultStorageClass` admission controller automatically adds the
`storageClassName` field pointing to the default storage class.
-->
管理员可以通过向其添加 `storageclass.kubernetes.io/is-default-class` 注解来将特定的 `StorageClass` 标记为默认。
当集群中存在默认的 `StorageClass` 并且用户创建了一个未指定 `storageClassName` 的 `PersistentVolumeClaim` 时,
`DefaultStorageClass` 准入控制器会自动向其中添加指向默认存储类的 `storageClassName` 字段。
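
下面是一个简单的示意清单(其中的名称 "standard"、供应商与参数仅为示例,并非本页正文规定的配置),
展示了如何通过该注解将某个 `StorageClass` 标记为默认,以及一个未指定 `storageClassName` 的
`PersistentVolumeClaim`;在这种组合下,准入控制器会自动为该声明填入默认存储类:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    # 将此 StorageClass 标记为集群的默认存储类
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-with-default-class
spec:
  # 未设置 storageClassName;DefaultStorageClass 准入控制器
  # 会自动将其设置为默认存储类(此例中为 "standard")
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```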

<!--
Note that there can be at most one *default* storage class on a cluster, or
a `PersistentVolumeClaim` without `storageClassName` explicitly specified cannot
be created.
-->
请注意,集群上最多只能有一个 *默认* 存储类,否则无法创建没有明确指定 `storageClassName` 的 `PersistentVolumeClaim`。

<!--
## Topology Awareness
-->
## 拓扑感知

<!--
In [Multi-Zone](/docs/setup/multiple-zones) clusters, Pods can be spread across
Zones in a Region. Single-Zone storage backends should be provisioned in the Zones where
Pods are scheduled. This can be accomplished by setting the [Volume Binding
Mode](/docs/concepts/storage/storage-classes/#volume-binding-mode).
-->
在[多区域](/docs/setup/multiple-zones)集群中,Pod 可以分布在同一地域(Region)内的多个区域(Zone)中。
单区域存储后端应该被供应到 Pod 被调度到的区域。
这可以通过设置[卷绑定模式](/docs/concepts/storage/storage-classes/#volume-binding-mode)来实现,如下面的示例所示。
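
下面是一个最小化的示意清单(名称 "topology-aware-standard" 仅为示例,这里沿用本页前文使用的 GCE PD 供应商),
展示如何将 `volumeBindingMode` 设置为 `WaitForFirstConsumer`,
使卷的供应推迟到 Pod 被调度之后,从而在 Pod 所在的区域中创建卷:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# 延迟卷的绑定与供应,直到使用该 PVC 的 Pod 被调度
volumeBindingMode: WaitForFirstConsumer
```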

{{% /capture %}}

File diff suppressed because it is too large

@@ -0,0 +1,144 @@
---
title: 特定于节点的卷数限制
content_template: templates/concept
---

{{% capture overview %}}

<!--
This page describes the maximum number of volumes that can be attached
to a Node for various cloud providers.
-->
此页面描述了各个云供应商可关联至一个节点的最大卷数。

<!--
Cloud providers like Google, Amazon, and Microsoft typically have a limit on
how many volumes can be attached to a Node. It is important for Kubernetes to
respect those limits. Otherwise, Pods scheduled on a Node could get stuck
waiting for volumes to attach.
-->
谷歌、亚马逊和微软等云供应商通常对可以关联到节点的卷数量进行限制。
Kubernetes 需要遵守这些限制。否则,调度到节点上的 Pod 可能会停滞,等待卷的关联。

{{% /capture %}}

{{% capture body %}}

<!--
## Kubernetes default limits

The Kubernetes scheduler has default limits on the number of volumes
that can be attached to a Node:
-->
## Kubernetes 的默认限制

Kubernetes 调度器对可以关联到一个节点的卷数有默认限制:

<!--
<table>
<tr><th>Cloud service</th><th>Maximum volumes per Node</th></tr>
<tr><td><a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (EBS)</a></td><td>39</td></tr>
<tr><td><a href="https://cloud.google.com/persistent-disk/">Google Persistent Disk</a></td><td>16</td></tr>
<tr><td><a href="https://azure.microsoft.com/en-us/services/storage/main-disks/">Microsoft Azure Disk Storage</a></td><td>16</td></tr>
</table>
-->
<table>
<tr><th>云服务</th><th>每节点最大卷数</th></tr>
<tr><td><a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (EBS)</a></td><td>39</td></tr>
<tr><td><a href="https://cloud.google.com/persistent-disk/">Google Persistent Disk</a></td><td>16</td></tr>
<tr><td><a href="https://azure.microsoft.com/en-us/services/storage/main-disks/">Microsoft Azure Disk Storage</a></td><td>16</td></tr>
</table>

<!--
## Custom limits

You can change these limits by setting the value of the
`KUBE_MAX_PD_VOLS` environment variable, and then starting the scheduler.

Use caution if you set a limit that is higher than the default limit. Consult
the cloud provider's documentation to make sure that Nodes can actually support
the limit you set.

The limit applies to the entire cluster, so it affects all Nodes.
-->
## 自定义限制

您可以通过设置 `KUBE_MAX_PD_VOLS` 环境变量的值来更改这些限制,然后再启动调度器。

如果设置的限制高于默认限制,请务必谨慎。请参阅云提供商的文档,确保节点确实可以支持您设置的限制。

此限制应用于整个集群,所以它会影响所有节点。参见下面的示意片段。
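
下面是一个经过删减的示意片段(假设集群使用 kubeadm 部署,kube-scheduler 以静态 Pod 方式运行在控制面节点的
`/etc/kubernetes/manifests/kube-scheduler.yaml` 中;镜像版本与取值 40 仅为示例),
展示如何在启动调度器之前设置 `KUBE_MAX_PD_VOLS`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.14.0
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    env:
    # 在调度器进程启动前设置自定义的每节点最大卷数
    - name: KUBE_MAX_PD_VOLS
      value: "40"
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.conf
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: File
```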

<!--
## Dynamic volume limits
-->
## 动态卷限制

{{< feature-state state="beta" for_k8s_version="v1.12" >}}

<!--
Kubernetes 1.11 introduced support for dynamic volume limits based on Node type as an Alpha feature.
In Kubernetes 1.12 this feature is graduating to Beta and will be enabled by default.

Dynamic volume limits are supported for following volume types.

- Amazon EBS
- Google Persistent Disk
- Azure Disk
- CSI
-->
Kubernetes 1.11 以 Alpha 特性的形式引入了基于节点类型的动态卷限制支持。
在 Kubernetes 1.12 中,此功能升级为 Beta 并默认启用。

以下卷类型支持动态卷限制。

- Amazon EBS
- Google Persistent Disk
- Azure Disk
- CSI

<!--
When the dynamic volume limits feature is enabled, Kubernetes automatically
determines the Node type and enforces the appropriate number of attachable
volumes for the node. For example:
-->
启用动态卷限制功能后,Kubernetes 会自动确定节点类型,并确保节点上可关联的卷数目合规。例如:

<!--
* On
<a href="https://cloud.google.com/compute/">Google Compute Engine</a>,
up to 128 volumes can be attached to a node, [depending on the node
type](https://cloud.google.com/compute/docs/disks/#pdnumberlimits).

* For Amazon EBS disks on M5,C5,R5,T3 and Z1D instance types, Kubernetes allows only 25
volumes to be attached to a Node. For other instance types on
<a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a>,
Kubernetes allows 39 volumes to be attached to a Node.

* On Azure, up to 64 disks can be attached to a node, depending on the node type. For more details, refer to [Sizes for virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes).

* For CSI, any driver that advertises volume attach limits via CSI specs will have those limits available as the Node's allocatable property
and the Scheduler will not schedule Pods with volumes on any Node that is already at its capacity. Refer to the [CSI specs](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo) for more details.
-->
* 在 <a href="https://cloud.google.com/compute/">Google Compute Engine</a> 环境中,
  [根据节点类型](https://cloud.google.com/compute/docs/disks/#pdnumberlimits)最多可以将 128 个卷关联到节点。

* 对于 M5、C5、R5、T3 和 Z1D 类型实例的 Amazon EBS 磁盘,Kubernetes 仅允许 25 个卷关联到节点。
  对于 <a href="https://aws.amazon.com/ec2/">Amazon Elastic Compute Cloud (EC2)</a> 上的其他实例类型,
  Kubernetes 允许 39 个卷关联至节点。

* 在 Azure 环境中,根据节点类型,最多 64 个磁盘可以关联至一个节点。
  更多详细信息,请参阅 [Azure 虚拟机的大小](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sizes)。

* 对于 CSI,任何通过 CSI 规范宣告卷关联限制的驱动,其限制都会作为节点的可分配(allocatable)属性公布,
  调度器不会把带有卷的 Pod 调度到已经达到容量上限的节点上。
  参考 [CSI 规范](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodegetinfo)获取更多详细信息;下面给出一个示意性的节点输出片段。
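
例如,可以通过 `kubectl get node <节点名称> -o yaml` 查看节点的可分配资源。
下面是一个示意性的输出片段(其中的 CSI 驱动名 `com.example.csi-driver`、资源键名的具体形式以及数值均仅为示例,
实际的键名与数值取决于所使用的驱动和节点类型):

```yaml
status:
  allocatable:
    # 该 CSI 驱动在此节点上允许关联的卷数上限
    attachable-volumes-csi-com.example.csi-driver: "25"
    cpu: "8"
    memory: 32Gi
    pods: "110"
```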

{{% /capture %}}

@@ -0,0 +1,92 @@
---
title: 卷快照类
content_template: templates/concept
weight: 20
---

{{% capture overview %}}

<!--
This document describes the concept of `VolumeSnapshotClass` in Kubernetes. Familiarity
with [volume snapshots](/docs/concepts/storage/volume-snapshots/) and
[storage classes](/docs/concepts/storage/storage-classes) is suggested.
-->
本文档描述了 Kubernetes 中 `VolumeSnapshotClass` 的概念。建议熟悉[卷快照(Volume Snapshots)](/docs/concepts/storage/volume-snapshots/)和[存储类(Storage Class)](/docs/concepts/storage/storage-classes)。

{{% /capture %}}

{{% capture body %}}

<!--
## Introduction

Just like `StorageClass` provides a way for administrators to describe the "classes"
of storage they offer when provisioning a volume, `VolumeSnapshotClass` provides a
way to describe the "classes" of storage when provisioning a volume snapshot.
-->
## 介绍

就像 `StorageClass` 为管理员提供了一种在配置卷时描述存储“类”的方法,`VolumeSnapshotClass` 提供了一种在配置卷快照时描述存储“类”的方法。

<!--
## The VolumeSnapshotClass Resource

Each `VolumeSnapshotClass` contains the fields `snapshotter` and `parameters`,
which are used when a `VolumeSnapshot` belonging to the class needs to be
dynamically provisioned.

The name of a `VolumeSnapshotClass` object is significant, and is how users can
request a particular class. Administrators set the name and other parameters
of a class when first creating `VolumeSnapshotClass` objects, and the objects cannot
be updated once they are created.

Administrators can specify a default `VolumeSnapshotClass` just for VolumeSnapshots
that don't request any particular class to bind to.
-->
## VolumeSnapshotClass 资源

每个 `VolumeSnapshotClass` 都包含 `snapshotter` 和 `parameters` 字段,当需要动态配置属于该类的 `VolumeSnapshot` 时使用。

`VolumeSnapshotClass` 对象的名称很重要,是用户可以请求特定类的方式。
管理员在首次创建 `VolumeSnapshotClass` 对象时设置类的名称和其他参数,对象一旦创建就无法更新。

管理员可以为那些未指定任何特定类的 VolumeSnapshot 设置默认的 `VolumeSnapshotClass`。

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
snapshotter: csi-hostpath
parameters:
```

<!--
### Snapshotter

Volume snapshot classes have a snapshotter that determines what CSI volume plugin is
used for provisioning VolumeSnapshots. This field must be specified.
-->
### 快照生成器(Snapshotter)

卷快照类具有一个快照生成器(snapshotter),用于确定配置 VolumeSnapshot 时使用哪个 CSI 卷插件。必须指定此字段。

<!--
## Parameters

Volume snapshot classes have parameters that describe volume snapshots belonging to
the volume snapshot class. Different parameters may be accepted depending on the
`snapshotter`.
-->
## 参数

卷快照类具有一些参数,用于描述属于该卷快照类的卷快照。取决于 `snapshotter`,可以接受不同的参数,如下面的示意清单所示。
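
下面是一个带有 `parameters` 的示意清单(其中的 snapshotter 名称 `csi-fake-snapshotter`
及其参数键值均为虚构,仅用于演示写法;实际可用的参数请查阅所用 snapshotter 的文档):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: fake-snapclass
snapshotter: csi-fake-snapshotter
parameters:
  # 参数的具体键值完全取决于所使用的 snapshotter,此处仅为虚构示例
  csi.example.com/snapshot-type: incremental
```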

{{% /capture %}}

File diff suppressed because it is too large