Merge pull request #42974 from vmware/VSANPolicyProvisioningForKubernetesOnKubernetesRepo

Automatic merge from submit-queue (batch tested with PRs 42835, 42974)

VSAN policy support for storage volume provisioning inside kubernetes

vSphere users now have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can define storage requirements, such as performance and availability, in the form of storage capabilities when a volume is dynamically provisioned. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when the storage volume (virtual disk) is created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.

For example, a user creates a storage class with VSAN storage capabilities:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  hostFailuresToTolerate: "2"
  diskStripes: "1"
  cacheReservation: "20"
  datastore: VSANDatastore
```

The vSphere Cloud Provider provisions a virtual disk (VMDK) on VSAN with this policy applied to the disk.

When you know the storage requirements of the application being deployed in a container, you can specify those storage capabilities when you create a storage class inside Kubernetes.
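As an illustration, such a class could then be consumed by a claim like the one below. This is a minimal sketch: the claim name and requested size are made up, and the `volume.beta.kubernetes.io/storage-class` annotation is the pre-1.6 way of selecting a class, matching the examples in the documentation below.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Illustrative claim name
  name: pvcsc-slow
  annotations:
    # Select the "slow" storage class defined above
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```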

@pdhamdhere @tthole @abrarshivani @divyenpatel 

**Release note**:

```release-note
None
```
Kubernetes Submit Queue 2017-03-27 17:00:23 -07:00 committed by GitHub
commit 31d575ad75
3 changed files with 199 additions and 0 deletions


@ -5,6 +5,7 @@
- [Volumes](#volumes)
- [Persistent Volumes](#persistent-volumes)
- [Storage Class](#storage-class)
- [Virtual SAN policy support inside Kubernetes](#virtual-san-policy-support-inside-kubernetes)
- [Stateful Set](#stateful-set)
## Prerequisites
@ -353,6 +354,185 @@
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Virtual SAN policy support inside Kubernetes
vSphere Infrastructure (VI) admins have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a persistent volume (virtual disk) is created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
The official [VSAN policy documentation](https://pubs.vmware.com/vsphere-65/index.jsp?topic=%2Fcom.vmware.vsphere.virtualsan.doc%2FGUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html) describes each of the individual storage capabilities supported by VSAN in detail. You can specify these storage capabilities as part of the storage class definition based on your application needs.
The policy settings can be one or more of the following (a combined sketch is shown after this list):
* *hostFailuresToTolerate*: represents NumberOfFailuresToTolerate
* *diskStripes*: represents NumberofDiskStripesPerObject
* *objectSpaceReservation*: represents ObjectSpaceReservation
* *cacheReservation*: represents FlashReadCacheReservation
* *iopsLimit*: represents IOPSLimitForObject
* *forceProvisioning*: specifies whether the volume must be force provisioned
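For illustration only, here is a sketch of a storage class that sets all of the capabilities above in one place. The class name and the values are arbitrary; capability names follow the list above, and the exact accepted value format for each capability is described in the VSAN policy documentation linked earlier.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  # Illustrative name
  name: vsan-all-capabilities
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "1"
  diskStripes: "2"
  objectSpaceReservation: "30"
  cacheReservation: "20"
  iopsLimit: "100"
  # Illustrative value; see the VSAN policy documentation for accepted values
  forceProvisioning: "1"
```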
__Note: Here you do not need to create a persistent volume; it is created for you.__
1. Create Storage Class.
Example 1:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "2"
  cachereservation: "20"
```
[Download example](vsphere-volume-sc-vsancapabilities.yaml?raw=true)
Here a persistent volume will be created with the Virtual SAN capabilities hostFailuresToTolerate set to 2 and cachereservation set to 20%, meaning 20% read cache is reserved for the storage object. The persistent volume will also be a *zeroedthick* disk.
You can also specify the datastore in the storage class, as shown in example 2. The volume will then be created on the datastore specified in the storage class.
This field is optional. If it is not specified, as in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: VSANDatastore
  hostFailuresToTolerate: "2"
  cachereservation: "20"
```
[Download example](vsphere-volume-sc-vsancapabilities-with-datastore.yaml?raw=true)
__Note: If you do not apply a storage policy during dynamic provisioning on a VSAN datastore, the default Virtual SAN policy is applied.__
Creating the storage class:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-vsancapabilities.yaml
```
Verifying storage class is created:
``` bash
$ kubectl describe storageclass fast
Name: fast
Annotations: <none>
Provisioner: kubernetes.io/vsphere-volume
Parameters: diskformat=zeroedthick, hostFailuresToTolerate="2", cachereservation="20"
No events.
```
2. Create Persistent Volume Claim.
See example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvcsc-vsan
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
[Download example](vsphere-volume-pvcsc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvcsc-vsan
Name: pvcsc-vsan
Namespace: default
Status: Bound
Volume: pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No events.
```
A persistent volume is automatically created and bound to this PVC.
Verifying the persistent volume is created:
``` bash
$ kubectl describe pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Name: pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels: <none>
Status: Bound
Claim: default/pvcsc-vsan
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [VSANDatastore] kubevols/kubernetes-dynamic-pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d.vmdk
FSType: ext4
No events.
```
__Note: The VMDK is created inside the ```kubevols``` folder on the datastore that is specified in the 'vsphere' cloud provider configuration.
The cloud provider config is created during the setup of the Kubernetes cluster on vSphere.__
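If you only need the volume path on the datastore, a jsonpath query against the persistent volume should also work. This is a quick sketch; the field names follow the vSphere volume source in the PersistentVolume spec, and the PV name is the one shown above.
``` bash
$ kubectl get pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d -o jsonpath='{.spec.vsphereVolume.volumePath}'
```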
3. Create a pod which uses the Persistent Volume Claim with the storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvpod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvcsc-vsan
```
[Download example](vsphere-volume-pvcscpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Stateful Set
vSphere volumes can be consumed by Stateful Sets.


@ -0,0 +1,10 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  datastore: vsanDatastore
  hostFailuresToTolerate: "2"
  cachereservation: "20"


@ -0,0 +1,9 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  hostFailuresToTolerate: "2"
  cachereservation: "20"