Rewritten UCP NFS Storage Doc

This commit is contained in:
ollypom 2019-02-19 17:26:41 +00:00
parent 09b8a4e913
commit 4de0970f60
No known key found for this signature in database
GPG Key ID: 2E6D9F4EBCB6B160
9 changed files with 273 additions and 447 deletions


@@ -1345,10 +1345,6 @@ manuals:
  section:
  - title: Access Kubernetes Resources
    path: /ee/ucp/kubernetes/kube-resources/
  - title: Use NFS persistent storage
    path: /ee/ucp/admin/configure/use-nfs-volumes/
  - title: Configure AWS EBS Storage for Kubernetes
    path: /ee/ucp/kubernetes/configure-aws-storage/
  - title: Deploy a workload
    path: /ee/ucp/kubernetes/
  - title: Deploy a Compose-based app
@@ -1361,6 +1357,12 @@ manuals:
    path: /ee/ucp/kubernetes/install-cni-plugin/
  - title: Kubernetes network encryption
    path: /ee/ucp/kubernetes/kubernetes-network-encryption/
  - sectiontitle: Persistent Storage
    section:
    - title: Use NFS storage
      path: /ee/ucp/kubernetes/storage/use-nfs-volumes/
    - title: Use AWS EBS Storage
      path: /ee/ucp/kubernetes/storage/configure-aws-storage/
  - title: API reference
    path: /reference/ucp/3.1/api/
    nosync: true


@@ -1,443 +0,0 @@
---
title: Use NFS persistent storage
description: Learn how to add support for NFS persistent storage by adding a default storage class.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
---
Docker UCP supports Network File System (NFS) persistent volumes for
Kubernetes. To enable this feature on a UCP cluster, you need to set up
an NFS storage volume provisioner.
> ### Kubernetes storage drivers
>
> NFS is one of the Kubernetes storage drivers that UCP supports. See [Kubernetes Volume Drivers](https://success.docker.com/article/compatibility-matrix#kubernetesvolumedrivers) in the Compatibility Matrix for the full list.
{: .important}
## Enable NFS volume provisioning
The following steps enable NFS volume provisioning on a UCP cluster:
1. Create an NFS server pod.
2. Create a default storage class.
3. Create persistent volumes that use the default storage class.
4. Deploy your persistent volume claims and applications.
The following procedure shows you how to deploy WordPress and a MySQL backend
that use NFS volume provisioning.
To complete the procedure for enabling NFS provisioning,
[install the Kubernetes CLI](../../user-access/kubectl.md).
## Create the NFS Server
To enable NFS volume provisioning on a UCP cluster, you need to install
an NFS server. Google provides an image for this purpose.
On any node in the cluster with a [UCP client bundle](../../user-access/cli.md),
copy the following yaml to a file named nfs-server.yaml.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  namespace: default
  labels:
    role: nfs-server
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  nodeSelector:
    node-role.kubernetes.io/master: ""
  containers:
    - name: nfs-server
      image: gcr.io/google_containers/volume-nfs:0.8
      securityContext:
        privileged: true
      ports:
        - name: nfs-0
          containerPort: 2049
          protocol: TCP
  restartPolicy: Always
```
Run the following command to create the NFS server pod.
```bash
kubectl create -f nfs-server.yaml
```
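Optionally, confirm that the pod has reached the `Running` state before continuing:
```bash
# Optional check: the nfs-server pod should report STATUS Running.
kubectl get pod nfs-server -o wide
```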
The default storage class needs the IP address of the NFS server pod.
Run the following command to get the pod's IP address.
```bash
kubectl describe pod nfs-server | grep IP:
```
The result looks like this:
```
IP: 192.168.106.67
```
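If you prefer to retrieve just the IP address, a JSONPath query also works; this is an optional alternative to the `grep` approach above:
```bash
# Print only the pod IP of the nfs-server pod.
kubectl get pod nfs-server -o jsonpath='{.status.podIP}'
```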
## Create the default storage class
To enable NFS provisioning, create a storage class that has the
`storageclass.kubernetes.io/is-default-class` annotation set to `true`.
Also, provide the IP address of the NFS server pod as a parameter.
Copy the following yaml to a file named default-storage.yaml. Replace
`<nfs-server-pod-ip-address>` with the IP address from the previous step.
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: default
  name: default-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/nfs
parameters:
  path: /
  server: <nfs-server-pod-ip-address>
```
Run the following command to create the default storage class.
```bash
kubectl create -f default-storage.yaml
```
Confirm that the storage class was created and that it's assigned as the
default for the cluster.
```bash
kubectl get storageclass
```
It should look like this:
```
NAME PROVISIONER AGE
default-storage (default) kubernetes.io/nfs 58s
```
## Create persistent volumes
Create two persistent volumes based on the `default-storage` storage class.
One volume is for the MySQL database, and the other is for WordPress.
To create an NFS volume, specify `storageClassName: default-storage` in the
persistent volume spec.
Copy the following yaml to a file named local-volumes.yaml.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  storageClassName: default-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
spec:
  storageClassName: default-storage
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-2
```
Run this command to create the persistent volumes.
```bash
kubectl create -f local-volumes.yaml
```
Inspect the volumes:
```bash
kubectl get persistentvolumes
```
They should look like this:
```
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-1 20Gi RWO Retain Available default-storage 1m
local-pv-2 20Gi RWO Retain Available default-storage 1m
```
## Create a secret for the MySQL password
Create a secret for the password that you want to use for accessing the MySQL
database. Use this command to create the secret object:
```bash
kubectl create secret generic mysql-pass --from-literal=password=<mysql-password>
```
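Optionally, verify that the secret exists. The commands below list only the secret's metadata; the password value itself is not printed:
```bash
# Confirm the secret was created. The password stays in the cluster store and is not shown.
kubectl get secret mysql-pass
kubectl describe secret mysql-pass
```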
## Deploy persistent volume claims and applications
You have two persistent volumes that are available for claims. The MySQL
deployment uses one volume, and WordPress uses the other.
Copy the following yaml to a file named `wordpress-deployment.yaml`.
The claims in this file make no reference to a particular storage class, so
they bind to any available volumes that can satisfy the storage request.
In this example, both claims request `20Gi` of storage.
> Use a specific persistent volume
>
> If you want to use a specific persistent volume, rather than letting Kubernetes choose one at random, make sure that the `storageClassName` key is set in the persistent volume claim itself.
{: .important}
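For illustration only, the following sketch shows what such a pinned claim could look like. It writes a hypothetical `example-claim.yaml` that targets the `default-storage` class; it is not part of this tutorial and should not be deployed alongside it, because it would bind one of the two volumes created above:
```bash
# Hypothetical example: a claim pinned to the default-storage class by setting
# storageClassName, so it binds only to volumes created for that class.
cat <<'EOF' > example-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: default-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
EOF
```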
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
```
Run the following command to deploy the MySQL and WordPress images.
```bash
kubectl create -f wordpress-deployment.yaml
```
Confirm that the pods are up and running.
```bash
kubectl get pods
```
You should see something like this:
```
NAME READY STATUS RESTARTS AGE
nfs-server 1/1 Running 0 2h
wordpress-f4dcfdf45-4rkgs 1/1 Running 0 1m
wordpress-mysql-7bdd6d857c-fvgqx 1/1 Running 0 1m
```
It may take a few minutes for both pods to enter the `Running` state.
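Rather than polling `kubectl get pods`, you can optionally wait on the Deployments themselves; each command blocks until the corresponding rollout completes:
```bash
# Block until each Deployment reports its pods as available.
kubectl rollout status deployment/wordpress
kubectl rollout status deployment/wordpress-mysql
```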
## Inspect the deployment
The WordPress deployment is ready to go. You can see it in action by browsing
to the URL of the WordPress service. The easiest way to get the
URL is to open the UCP web UI, navigate to the Kubernetes **Load Balancers**
page, and click the **wordpress** service. In the details pane, the URL is
listed in the **Ports** section.
![](../../images/use-nfs-volume-1.png){: .with-border}
You can also get the URL from the command line.
On any node in the cluster, run the following command to get the IP addresses
that are assigned to that node.
```bash
{% raw %}
docker node inspect --format '{{ index .Spec.Labels "com.docker.ucp.SANs" }}' <node-id>
{% endraw %}
```
You should see a list of IP addresses, like this:
```
172.31.36.167,jg-latest-ubuntu-0,127.0.0.1,172.17.0.1,54.213.225.17
```
One of these corresponds with the external node IP address. Look for an address
that's not in the `192.*`, `127.*`, and `172.*` ranges. In the current example,
the IP address is `54.213.225.17`.
The WordPress web UI is served through a `NodePort`, which you get with this
command:
```bash
kubectl describe svc wordpress | grep NodePort
```
The command returns something like this:
```
NodePort: <unset> 34746/TCP
```
Put the two together to get the URL for the WordPress service:
`http://<node-ip>:<node-port>`.
For this example, the URL is `http://54.213.225.17:34746`.
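You can also combine the two values in one step; the following sketch reads the NodePort with a JSONPath query and prints the URL (substitute your own external node IP for `<node-ip>`):
```bash
# Look up the NodePort of the wordpress service and print the resulting URL.
NODE_PORT=$(kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://<node-ip>:${NODE_PORT}"
```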
![](../../images/use-nfs-volume-2.png){: .with-border}
## Write a blog post to use the storage
Open the URL for the WordPress service and follow the instructions for
installing WordPress. In this example, the blog is named "NFS Volumes".
![](../../images/use-nfs-volume-3.png){: .with-border}
Create a new blog post and publish it.
![](../../images/use-nfs-volume-4.png){: .with-border}
Click the **permalink** to view the site.
![](../../images/use-nfs-volume-5.png){: .with-border}
## Where to go next
- [Example of NFS based persistent volume](https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs#nfs-server-part)
- [Example: Deploying WordPress and MySQL with Persistent Volumes](https://v1-8.docs.kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)

Binary files not shown (five images removed: 82 KiB, 35 KiB, 47 KiB, 73 KiB, and 357 KiB).


@@ -2,6 +2,8 @@
title: Configure AWS EBS Storage for Kubernetes
description: Learn how to configure AWS EBS storage for Kubernetes clusters.
keywords: UCP, Docker Enterprise, Kubernetes, storage, AWS, ELB
redirect_from:
- /ee/ucp/kubernetes/configure-aws-storage/
---
[AWS Elastic Block Store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html) (EBS) can be deployed with Kubernetes in Docker Enterprise 2.1 to use AWS volumes as persistent storage for applications. Before using EBS volumes, configure UCP and the AWS infrastructure so that storage orchestration can function.
@@ -131,3 +133,8 @@ pvc-751c006e-a00b-11e8-8007-0242ac110012 1Gi RWO Retain
The AWS console shows that a volume with a matching name, of type `gp2` and size `1GiB`, has been provisioned.
![](../images/aws-ebs.png)
## Where to go next
- [Deploy an Ingress Controller on Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)
- [Discover Network Encryption on Kubernetes](/ee/ucp/kubernetes/kubernetes-network-encryption/)


@@ -0,0 +1,260 @@
---
title: Configuring NFS Storage for Kubernetes
description: Learn how to add support for NFS persistent storage by adding a default storage class.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
redirect_from:
- /ee/ucp/admin/configure/use-nfs-volumes/
---
Users can provide persistent storage for workloads running on Docker Enterprise
by using NFS storage. When mounted into the running container, these NFS shares
provide state to the application and manage data externally to the container's
lifecycle.
> Note: Provisioning an NFS server and exporting an NFS share are outside the
> scope of this guide. Using external [Kubernetes
> plugins](https://github.com/kubernetes-incubator/external-storage/tree/master/nfs)
> to dynamically provision NFS shares is also outside the scope of this guide.
To mount existing NFS shares within Kubernetes Pods, there are two options:

- Define NFS shares directly within the Pod definition. The NFS shares are
  defined manually by each tenant when creating a workload.
- Define NFS shares as cluster objects through Persistent Volumes, with their
  lifecycle handled separately from the workload. This is common when an
  operator wants to define a range of NFS shares for tenants to request and
  consume.
## Defining NFS Shares in the Pod Spec
When defining workloads in Kubernetes manifest files, an end user can directly
reference the NFS shares to mount inside of each Pod. The NFS share is defined
within the Pod specification; this could be a standalone pod, or it could be
wrapped in a higher-level object like a Deployment, DaemonSet, or StatefulSet.

The following example assumes a running UCP cluster and a downloaded
[client bundle](../../user-access/cli/#download-client-certificates) with
permission to schedule pods in a namespace.
An example pod specification with an NFS volume defined:
```bash
$ cat nfs-in-a-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nfs-in-a-pod
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs # Change this to the path where you want the share mounted
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: nfs-volume
      nfs:
        server: nfs.example.com # Change this to the address of your NFS server
        path: /share1 # Change this to the relevant share
```
To deploy the pod and check that it has started correctly, use the [kubectl](../../user-access/kubectl/) command line tool.
```bash
$ kubectl create -f nfs-in-a-pod.yaml
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-in-a-pod 1/1 Running 0 6m
```
We can check that everything has been mounted correctly by getting a shell
prompt inside the container and searching for our mount.
```bash
$ kubectl exec -it nfs-in-a-pod sh
/ #
/ # mount | grep nfs.example.com
nfs.example.com://share1 on /var/nfs type nfs4 (rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.31.42.23,local_lock=none,addr=nfs.example.com)
/ #
```
Because the NFS share is defined as part of the Pod specification, UCP and
Kubernetes know nothing else about it. This means that when the pod is deleted,
the NFS share is detached from the cluster. The data, of course, still remains
in the NFS share.
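To see this behaviour, an optional experiment is to write a file into the share, delete and recreate the pod, and confirm the file is still there. This assumes the `nfs-in-a-pod` example above:
```bash
# Write a file into the NFS share, recreate the pod, and check the file survives.
kubectl exec nfs-in-a-pod -- sh -c 'echo hello > /var/nfs/hello.txt'
kubectl delete pod nfs-in-a-pod
kubectl create -f nfs-in-a-pod.yaml
# Wait for the new pod to reach Running, then read the file back.
kubectl exec nfs-in-a-pod -- cat /var/nfs/hello.txt
```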
## Exposing NFS shares as a Cluster Object
For this method we use the Kubernetes objects [Persistent
Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes)
and [Persistent Volume
Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
to manage the lifecycle of, and access to, NFS shares.
Here, an operator can define multiple shares for tenants to use within the
cluster. A [Persistent
Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes)
is a cluster-wide object, so it can be pre-provisioned by an operator. A
[Persistent Volume
Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
is a claim made by a tenant, within their namespace, for the use of a persistent volume.
> Note: The NFS share lifecycle, in this sense, refers to granting and removing
> an end user's ability to consume NFS storage, rather than to managing the
> lifecycle of the NFS server.
### Persistent Volume
As an operator, define the persistent volume at the cluster level:
```bash
$ cat pvwithnfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-nfs-share
spec:
  capacity:
    storage: 5Gi # This size is used to match a volume to a tenant's claim
  accessModes:
    - ReadWriteOnce # Access modes are explained below
  persistentVolumeReclaimPolicy: Recycle # Reclaim policies are explained below
  nfs:
    server: nfs.example.com # Change this to the address of your NFS server
    path: /share1 # Change this to the relevant share
```
To create a persistent volume on the cluster, an operator needs a [Cluster Role
Binding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding)
grant that allows them to create persistent volume objects at the cluster level
(a sketch of such a grant follows the example below). Once again, we use the
[kubectl](../../user-access/kubectl/) command line to create the volume.
```
$ kubectl create -f pvwithnfs.yaml
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-nfs-share 5Gi RWO Recycle Available slow 7s
```
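The cluster-level permission mentioned above can be expressed in plain Kubernetes RBAC terms. The following is only a sketch with illustrative names; in UCP, this kind of access is normally modelled through UCP grants rather than by creating these objects by hand:
```bash
# Hypothetical ClusterRole and ClusterRoleBinding that allow a user to manage
# PersistentVolume objects cluster-wide. All names here are examples only.
kubectl create -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-admin
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pv-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pv-admin
subjects:
  - kind: User
    name: operator-user
    apiGroup: rbac.authorization.k8s.io
EOF
```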
#### Access Modes
The access mode for an NFS persistent volume can be one of:
- `ReadWriteOnce`: the volume can be mounted as read-write by a single node
- `ReadOnlyMany`: the volume can be mounted read-only by many nodes
- `ReadWriteMany`: the volume can be mounted as read-write by many nodes
The access mode in the persistent volume definition is used to match a
persistent volume to a claim. Note that defining and creating a persistent
volume in Kubernetes does not mount the volume. For more information, see
[access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
in the Kubernetes documentation.
#### Reclaim
The [reclaim
policy](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming)
defines what the cluster should do after a persistent volume has been released
from a claim. A persistent volume's reclaim policy can be Retain, Recycle, or
Delete. See the [Kubernetes
documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming)
for a deeper understanding.
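For example, the reclaim policy of an existing persistent volume can be changed with `kubectl patch`. The sketch below switches the `my-nfs-share` volume defined above from `Recycle` to `Retain`:
```bash
# Change the reclaim policy of an existing persistent volume to Retain.
kubectl patch pv my-nfs-share -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv my-nfs-share
```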
### Persistent Volume Claim
A tenant can now "claim" that persistent volume for use within their workloads
by using a Kubernetes persistent volume claim. A persistent volume claim lives
within a namespace, and it tries to match available persistent volumes to the
tenant's request.
```bash
$ cat myapp-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-nfs
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce # Access modes for volumes are defined under Persistent Volumes
  resources:
    requests:
      storage: 5Gi # Volume size requested
```
A tenant with a
[RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding)
that allows them to create persistent volume claims can now deploy this
persistent volume claim. Assuming there is a persistent volume that meets the
tenant's criteria, Kubernetes binds the persistent volume to the claim. Once
again, this does not mount the share.
```bash
$ kubectl create -f myapp-claim.yaml
persistentvolumeclaim "myapp-nfs" created
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myapp-nfs Bound my-nfs-share 5Gi RWO slow 2s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-nfs-share 5Gi RWO Recycle Bound default/myapp-nfs slow 4m
```
### Defining a Workload
Finally, a tenant can deploy a workload that consumes the persistent volume
claim. The persistent volume claim is defined within the Pod specification;
this could be a standalone pod, or it could be wrapped in a higher-level object
like a Deployment, DaemonSet, or StatefulSet.
```bash
$ cat myapp-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: data
          mountPath: /var/nfs # Change this to the path where you want the share mounted
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-nfs
```
A tenant can deploy the pod by using the
[kubectl](../../user-access/kubectl/) command line tool. We can then check that
the pod is running successfully and that the NFS share has been mounted inside
the container.
```bash
$ kubectl create -f myapp-pod.yaml
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-using-nfs 1/1 Running 0 1m
$ kubectl exec -it pod-using-nfs sh
/ # mount | grep nfs.example.com
nfs.example.com://share1 on /var/nfs type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.31.42.23,local_lock=none,addr=nfs.example.com)
/ #
```
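When you are finished with the example, the objects can be removed in reverse order. This is an optional clean-up; depending on the reclaim policy, deleting the persistent volume does not necessarily touch the data on the NFS server itself:
```bash
# Remove the example workload, the claim, and finally the persistent volume.
kubectl delete pod pod-using-nfs
kubectl delete pvc myapp-nfs
kubectl delete pv my-nfs-share
```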
## Where to go next
- [Deploy an Ingress Controller on Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)
- [Discover Network Encryption on Kubernetes](/ee/ucp/kubernetes/kubernetes-network-encryption/)