Update In-tree storage driver status (#42415)

* Update In-tree storage driver status
* Update In-tree storage driver `gcePersistentDisk` status to removed
* Remove GCEPersistentDisk volume info from the storage-classes page
This commit is contained in:
parent d5bd665a80
commit 976ead0a1a

@@ -184,8 +184,8 @@ and the volume is considered "released". But it is not yet available for
another claim because the previous claimant's data remains on the volume.
An administrator can manually reclaim the volume with the following steps.

1. Delete the PersistentVolume. The associated storage asset in external infrastructure
   (such as an AWS EBS or GCE PD volume) still exists after the PV is deleted.
1. Delete the PersistentVolume. The associated storage asset in external infrastructure
   still exists after the PV is deleted.
1. Manually clean up the data on the associated storage asset accordingly.
1. Manually delete the associated storage asset.

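For illustration, a minimal sketch of a PersistentVolume that uses the `Retain` reclaim policy, so that the manual steps above apply. The NFS server address and export path are placeholders, not values from this change:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retained
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # data survives claim deletion; reclaim manually
  nfs:                                    # example backend; any supported volume type works
    server: nfs.example.com               # placeholder NFS server
    path: /exports/data                   # placeholder export path
```
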
@@ -196,7 +196,7 @@ the same storage asset definition.

For volume plugins that support the `Delete` reclaim policy, deletion removes
both the PersistentVolume object from Kubernetes, as well as the associated
storage asset in the external infrastructure, such as an AWS EBS or GCE PD volume. Volumes that were dynamically provisioned
storage asset in the external infrastructure. Volumes that were dynamically provisioned
inherit the [reclaim policy of their StorageClass](#reclaim-policy), which
defaults to `Delete`. The administrator should configure the StorageClass
according to users' expectations; otherwise, the PV must be edited or

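As a hedged illustration of that default, a StorageClass can override `Delete` so that dynamically provisioned PVs are retained instead. The provisioner name below is only a placeholder for whichever CSI driver the cluster actually uses:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: durable
provisioner: example.com/csi-driver   # placeholder; use the CSI driver installed in your cluster
reclaimPolicy: Retain                 # PVs created from this class are kept after claim deletion
```
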
@@ -370,7 +370,6 @@ the following types of volumes:
* azureFile (deprecated)
* {{< glossary_tooltip text="csi" term_id="csi" >}}
* flexVolume (deprecated)
* gcePersistentDisk (deprecated)
* rbd (deprecated)
* portworxVolume (deprecated)

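To illustrate expansion for the volume types listed above, a sketch of a PersistentVolumeClaim whose storage request has been edited upward. This only works when the claim's StorageClass sets `allowVolumeExpansion: true` and the underlying volume type supports resize; the class name and sizes are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: expandable   # assumed class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi              # previously 10Gi; raising this field triggers the resize
```
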
@@ -438,11 +437,6 @@ Similar to other volume types - FlexVolume volumes can also be expanded when in-
FlexVolume resize is possible only when the underlying driver supports resize.
{{< /note >}}

{{< note >}}
Expanding EBS volumes is a time-consuming operation.
Also, there is a per-volume quota of one modification every 6 hours.
{{< /note >}}

#### Recovering from Failure when Expanding Volumes

If a user specifies a new size that is too big to be satisfied by underlying

@@ -518,8 +512,6 @@ This means that support is still available but will be removed in a future Kuber
  (**deprecated** in v1.21)
* [`flexVolume`](/docs/concepts/storage/volumes/#flexvolume) - FlexVolume
  (**deprecated** in v1.23)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk) - GCE Persistent Disk
  (**deprecated** in v1.17)
* [`portworxVolume`](/docs/concepts/storage/volumes/#portworxvolume) - Portworx volume
  (**deprecated** in v1.25)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume) - vSphere VMDK volume

@@ -663,8 +655,7 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
{{< /note >}}

> __Important!__ A volume can only be mounted using one access mode at a time,
> even if it supports many. For example, a GCEPersistentDisk can be mounted as
> ReadWriteOnce by a single node or ReadOnlyMany by many nodes, but not at the same time.
> even if it supports many.

| Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
| :--- | :---: | :---: | :---: | - |

@@ -673,8 +664,6 @@ are specified as ReadWriteOncePod, the volume is constrained and can be mounted
| CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
| FC | ✓ | ✓ | - | - |
| FlexVolume | ✓ | ✓ | depends on the driver | - |
| GCEPersistentDisk | ✓ | ✓ | - | - |
| Glusterfs | ✓ | ✓ | ✓ | - |
| HostPath | ✓ | - | - | - |
| iSCSI | ✓ | ✓ | - | - |
| NFS | ✓ | ✓ | ✓ | - |

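The access modes in the table above are requested on the claim. A minimal sketch with placeholder names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
    - ReadWriteOncePod   # only one Pod across the cluster may use the volume at a time
  resources:
    requests:
      storage: 5Gi
```
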
@@ -701,9 +690,9 @@ Current reclaim policies are:

* Retain -- manual reclamation
* Recycle -- basic scrub (`rm -rf /thevolume/*`)
* Delete -- associated storage asset such as AWS EBS or GCE PD volume is deleted
* Delete -- associated storage asset

Currently, only NFS and HostPath support recycling. AWS EBS and GCE PD volumes support deletion.
For Kubernetes {{< skew currentVersion >}}, only `nfs` and `hostPath` volume types support recycling.

### Mount Options

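As a hedged illustration of the `Recycle` policy above, a PersistentVolume backed by NFS, one of the two volume types that still support recycling. Server and path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: recyclable-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle   # volume is scrubbed and becomes Available again
  nfs:
    server: nfs.example.com                # placeholder NFS server
    path: /exports/scratch                 # placeholder export path
```
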
@@ -719,7 +708,6 @@ The following volume types support mount options:
* `azureFile`
* `cephfs` (**deprecated** in v1.28)
* `cinder` (**deprecated** in v1.18)
* `gcePersistentDisk` (**deprecated** in v1.28)
* `iscsi`
* `nfs`
* `rbd` (**deprecated** in v1.28)

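For example, mount options are set on the PersistentVolume itself. A sketch only; the option strings depend on what the target NFS server actually supports:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-with-options
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1          # assumed to be supported by the target NFS server
  nfs:
    server: nfs.example.com   # placeholder NFS server
    path: /exports/shared     # placeholder export path
```
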
@@ -734,8 +722,7 @@ it will become fully deprecated in a future Kubernetes release.
### Node Affinity

{{< note >}}
For most volume types, you do not need to set this field. It is automatically
populated for [GCE PD](/docs/concepts/storage/volumes/#gcepersistentdisk) volume block types.
For most volume types, you do not need to set this field.
You need to explicitly set this for [local](/docs/concepts/storage/volumes/#local) volumes.
{{< /note >}}

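A minimal sketch of the node affinity that a `local` PersistentVolume needs; the node name, device path, and StorageClass name are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # assumed class with volumeBindingMode: WaitForFirstConsumer
  local:
    path: /mnt/disks/ssd1           # placeholder device path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1            # placeholder node name
```
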
@@ -956,7 +943,6 @@ applicable:

* CSI
* FC (Fibre Channel)
* GCEPersistentDisk (deprecated)
* iSCSI
* Local volume
* OpenStack Cinder

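To illustrate raw block support with those plugins, a sketch of a claim requesting `volumeMode: Block` and a Pod attaching it as a device rather than a filesystem; the names and the device path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                  # request a raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer
spec:
  containers:
    - name: app
      image: registry.k8s.io/test-webserver   # placeholder image reused from the examples on this page
      volumeDevices:                 # volumeDevices (not volumeMounts) exposes the raw device
        - name: data
          devicePath: /dev/xvda      # placeholder device path inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-claim
```
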
@@ -78,7 +78,6 @@ for provisioning PVs. This field must be specified.
| CephFS | - | - |
| FC | - | - |
| FlexVolume | - | - |
| GCEPersistentDisk | ✓ | [GCE PD](#gce-pd) |
| iSCSI | - | - |
| NFS | - | [NFS](#nfs) |
| RBD | ✓ | [Ceph RBD](#ceph-rbd) |

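With the in-tree `kubernetes.io/gce-pd` provisioner removed from this table, an external CSI provisioner takes its place. A hedged sketch using the GCE PD CSI driver named later in this change (`pd.csi.storage.gke.io`); the parameter shown is an assumption based on that driver's conventions, so check its documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io   # external CSI driver replacing kubernetes.io/gce-pd
parameters:
  type: pd-standard                  # assumed driver parameter; consult the driver's documentation
```
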
@@ -125,7 +124,6 @@ StorageClass has the field `allowVolumeExpansion` set to true.

| Volume type | Required Kubernetes version |
| :------------------- | :-------------------------- |
| gcePersistentDisk | 1.11 |
| rbd | 1.11 |
| Azure File | 1.11 |
| Portworx | 1.11 |

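The table above only lists volume types; the switch itself lives on the StorageClass. A minimal sketch with a placeholder provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable
provisioner: example.com/csi-driver   # placeholder provisioner
allowVolumeExpansion: true            # PVCs using this class may later request more storage
```
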
@@ -169,13 +167,7 @@ requirements](/docs/concepts/configuration/manage-resources-containers/),
anti-affinity](/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity),
and [taints and tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration).

The following plugins support `WaitForFirstConsumer` with dynamic provisioning:

- [GCEPersistentDisk](#gce-pd)

The following plugins support `WaitForFirstConsumer` with pre-created PersistentVolume binding:

- All of the above
- [Local](#local)

[CSI volumes](/docs/concepts/storage/volumes/#csi) are also supported with dynamic provisioning

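For reference, a sketch of a StorageClass that delays binding and provisioning until a consuming Pod is scheduled; the provisioner name is a placeholder for a driver that supports this mode:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: example.com/csi-driver     # placeholder; the driver must support WaitForFirstConsumer
volumeBindingMode: WaitForFirstConsumer # binding waits for the first Pod that uses the claim
```
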
@@ -294,55 +286,6 @@ parameters:
[allowedTopologies](#allowed-topologies)
{{< /note >}}

### GCE PD

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none
```

- `type`: `pd-standard` or `pd-ssd`. Default: `pd-standard`
- `zone` (Deprecated): GCE zone. If neither `zone` nor `zones` is specified, volumes are
  generally round-robin-ed across all active zones where Kubernetes cluster has
  a node. `zone` and `zones` parameters must not be used at the same time.
- `zones` (Deprecated): A comma separated list of GCE zone(s). If neither `zone` nor `zones`
  is specified, volumes are generally round-robin-ed across all active zones
  where Kubernetes cluster has a node. `zone` and `zones` parameters must not
  be used at the same time.
- `fstype`: `ext4` or `xfs`. Default: `ext4`. The defined filesystem type must be supported by the host operating system.

- `replication-type`: `none` or `regional-pd`. Default: `none`.

  If `replication-type` is set to `none`, a regular (zonal) PD will be provisioned.

  If `replication-type` is set to `regional-pd`, a
  [Regional Persistent Disk](https://cloud.google.com/compute/docs/disks/#repds)
  will be provisioned. It's highly recommended to have
  `volumeBindingMode: WaitForFirstConsumer` set, in which case when you create
  a Pod that consumes a PersistentVolumeClaim which uses this StorageClass, a
  Regional Persistent Disk is provisioned with two zones. One zone is the same
  as the zone that the Pod is scheduled in. The other zone is randomly picked
  from the zones available to the cluster. Disk zones can be further constrained
  using `allowedTopologies`.

{{< note >}}
`zone` and `zones` parameters are deprecated and replaced with
[allowedTopologies](#allowed-topologies). When
[GCE CSI Migration](/docs/concepts/storage/volumes/#gce-csi-migration) is
enabled, a GCE PD volume can be provisioned in a topology that does not match
any nodes, but any pod trying to use that volume will fail to schedule. With
legacy pre-migration GCE PD, in this case an error will be produced
instead at provisioning time. GCE CSI Migration is enabled by default beginning
from the Kubernetes 1.23 release.
{{< /note >}}

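As the note says, `allowedTopologies` supersedes the `zone` and `zones` parameters. A hedged sketch of the same class restricted to two zones; the zone values are examples only:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow-restricted
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-central1-a   # example zones; disks are only provisioned in the listed zones
          - us-central1-b
```
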
### NFS

```yaml

@@ -295,127 +295,15 @@ beforehand so that Kubernetes hosts can access them.
See the [fibre channel example](https://github.com/kubernetes/examples/tree/master/staging/volumes/fibre_channel)
for more details.

### gcePersistentDisk (deprecated) {#gcepersistentdisk}
### gcePersistentDisk (removed) {#gcepersistentdisk}

{{< feature-state for_k8s_version="v1.17" state="deprecated" >}}
Kubernetes {{< skew currentVersion >}} does not include a `gcePersistentDisk` volume type.

A `gcePersistentDisk` volume mounts a Google Compute Engine (GCE)
[persistent disk](https://cloud.google.com/compute/docs/disks) (PD) into your Pod.
Unlike `emptyDir`, which is erased when a pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be shared between pods.
The `gcePersistentDisk` in-tree storage driver was deprecated in the Kubernetes v1.17 release
and then removed entirely in the v1.28 release.

{{< note >}}
You must create a PD using `gcloud` or the GCE API or UI before you can use it.
{{< /note >}}

There are some restrictions when using a `gcePersistentDisk`:

* the nodes on which Pods are running must be GCE VMs
* those VMs need to be in the same GCE project and zone as the persistent disk

One feature of GCE persistent disk is concurrent read-only access to a persistent disk.
A `gcePersistentDisk` volume permits multiple consumers to simultaneously
mount a persistent disk as read-only. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many Pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode. Simultaneous
writers are not allowed.

Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless
the PD is read-only or the replica count is 0 or 1.

#### Creating a GCE persistent disk {#gce-create-persistent-disk}

Before you can use a GCE persistent disk with a Pod, you need to create it.

```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```

#### GCE persistent disk configuration example

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```

#### Regional persistent disks

The [Regional persistent disks](https://cloud.google.com/compute/docs/disks/#repds)
feature allows the creation of persistent disks that are available in two zones
within the same region. In order to use this feature, the volume must be provisioned
as a PersistentVolume; referencing the volume directly from a pod is not supported.

#### Manually provisioning a Regional PD PersistentVolume

Dynamic provisioning is possible using a
[StorageClass for GCE PD](/docs/concepts/storage/storage-classes/#gce-pd).
Before creating a PersistentVolume, you must create the persistent disk:

```shell
gcloud compute disks create --size=500GB my-data-disk \
  --region us-central1 \
  --replica-zones us-central1-a,us-central1-b
```

#### Regional persistent disk configuration example

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 400Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            # failure-domain.beta.kubernetes.io/zone should be used prior to 1.21
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - us-central1-a
                - us-central1-b
```

#### GCE CSI migration

{{< feature-state for_k8s_version="v1.25" state="stable" >}}

The `CSIMigration` feature for GCE PD, when enabled, redirects all plugin operations
from the existing in-tree plugin to the `pd.csi.storage.gke.io` Container
Storage Interface (CSI) Driver. In order to use this feature, the [GCE PD CSI
Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
must be installed on the cluster.

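After migration, the same disk is addressed through the CSI driver rather than the in-tree `gcePersistentDisk` field. A hedged sketch of an equivalent PersistentVolume; the `volumeHandle` format shown is an assumption, so check the GCE PD CSI driver's documentation for the exact form:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume-csi
spec:
  capacity:
    storage: 400Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: pd.csi.storage.gke.io
    # assumed handle format: projects/PROJECT/zones/ZONE/disks/DISK
    volumeHandle: projects/my-project/zones/us-central1-a/disks/my-data-disk
    fsType: ext4
```
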
#### GCE CSI migration complete

{{< feature-state for_k8s_version="v1.21" state="alpha" >}}

To disable the `gcePersistentDisk` storage plugin from being loaded by the controller manager
and the kubelet, set the `InTreePluginGCEUnregister` flag to `true`.
The Kubernetes project suggests that you use the [Google Compute Engine Persistent Disk CSI](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
third party storage driver instead.

### gitRepo (deprecated) {#gitrepo}

@@ -704,8 +592,8 @@ for an example of mounting NFS volumes with PersistentVolumes.

A `persistentVolumeClaim` volume is used to mount a
[PersistentVolume](/docs/concepts/storage/persistent-volumes/) into a Pod. PersistentVolumeClaims
are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
are a way for users to "claim" durable storage (such as an iSCSI volume)
without knowing the details of the particular cloud environment.

See the information about [PersistentVolumes](/docs/concepts/storage/persistent-volumes/) for more
details.

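A minimal sketch of a Pod that mounts storage through such a claim; the claim name and mount path are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
    - name: app
      image: registry.k8s.io/test-webserver   # placeholder image used elsewhere on this page
      volumeMounts:
        - mountPath: /data
          name: claim-volume
  volumes:
    - name: claim-volume
      persistentVolumeClaim:
        claimName: my-claim                    # placeholder PVC name
```
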
@@ -66,5 +66,4 @@ The following broad classes of Kubernetes volume plugins are supported on Window
The following in-tree plugins support persistent storage on Windows nodes:

* [`azureFile`](/docs/concepts/storage/volumes/#azurefile)
* [`gcePersistentDisk`](/docs/concepts/storage/volumes/#gcepersistentdisk)
* [`vsphereVolume`](/docs/concepts/storage/volumes/#vspherevolume)