Clean up configure-persistent-volume-storage task (#16907)
parent 3fa00be466
commit 6551689040
@@ -6,22 +6,26 @@ weight: 60
{{% capture overview %}}

This page shows you how to configure a Pod to use a
{{< glossary_tooltip text="PersistentVolumeClaim" term_id="persistent-volume-claim" >}}
for storage.

Here is a summary of the process:

1. You, as cluster administrator, create a PersistentVolume backed by physical
   storage. You do not associate the volume with any Pod.

1. You, now taking the role of a developer / cluster user, create a
   PersistentVolumeClaim that is automatically bound to a suitable
   PersistentVolume.

1. You create a Pod that uses the above PersistentVolumeClaim for storage.

{{% /capture %}}

{{% capture prerequisites %}}

* You need to have a Kubernetes cluster that has only one Node, and the
  {{< glossary_tooltip text="kubectl" term_id="kubectl" >}}
  command-line tool must be configured to communicate with your cluster. If you
  do not already have a single-node cluster, you can create one by using
  [Minikube](/docs/getting-started-guides/minikube).
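
  If you need to create that single-node cluster first, a minimal sketch
  (assuming Minikube is already installed on your workstation):

  ```shell
  # Illustrative only: start a local single-node Kubernetes cluster
  minikube start
  ```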

@@ -35,17 +39,44 @@ do not already have a single-node cluster, you can create one by using

## Create an index.html file on your Node

Open a shell to the single Node in your cluster. How you open a shell depends
on how you set up your cluster. For example, if you are using Minikube, you
can open a shell to your Node by entering `minikube ssh`.

In your shell on that Node, create a `/mnt/data` directory:

```shell
# This assumes that your Node uses "sudo" to run commands
# as the superuser
sudo mkdir /mnt/data
```

In the `/mnt/data` directory, create an `index.html` file:

```shell
# This again assumes that your Node uses "sudo" to run commands
# as the superuser
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
```

{{< note >}}
If your Node uses a tool for superuser access other than `sudo`, you can
usually make this work if you replace `sudo` with the name of the other tool.
{{< /note >}}

Test that the `index.html` file exists:

```shell
cat /mnt/data/index.html
```

The output should be:

```
Hello from Kubernetes storage
```

You can now close the shell to your Node.

## Create a PersistentVolume

@@ -73,11 +104,15 @@ PersistentVolumeClaim requests to this PersistentVolume.

Create the PersistentVolume:

```shell
kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
```

View information about the PersistentVolume:

```shell
kubectl get pv task-pv-volume
```

The output shows that the PersistentVolume has a `STATUS` of `Available`. This
means it has not yet been bound to a PersistentVolumeClaim.
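
For illustration, the output looks something like this; the capacity, reclaim
policy, storage class, and age shown here are assumed values that depend on
your manifest and on timing:

```
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Available           manual                  10s
```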

@@ -107,7 +142,9 @@ claim to the volume.

Look again at the PersistentVolume:

```shell
kubectl get pv task-pv-volume
```

Now the output shows a `STATUS` of `Bound`.
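
Again for illustration only; the `CLAIM` column assumes the claim was created
in the `default` namespace:

```
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Bound    default/task-pv-claim   manual                  2m
```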

@@ -116,7 +153,9 @@ Now the output shows a `STATUS` of `Bound`.

Look at the PersistentVolumeClaim:

```shell
kubectl get pvc task-pv-claim
```

The output shows that the PersistentVolumeClaim is bound to your PersistentVolume,
`task-pv-volume`.
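
A hypothetical example of that output, with assumed capacity and age:

```
NAME            STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO            manual         30s
```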

@@ -138,28 +177,42 @@ is a volume.

Create the Pod:

```shell
kubectl apply -f https://k8s.io/examples/pods/storage/pv-pod.yaml
```

Verify that the container in the Pod is running:

```shell
kubectl get pod task-pv-pod
```
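
When the Pod is healthy, the output is similar to this sketch (`RESTARTS` and
`AGE` will vary):

```
NAME          READY   STATUS    RESTARTS   AGE
task-pv-pod   1/1     Running   0          14s
```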

Get a shell to the container running in your Pod:

```shell
kubectl exec -it task-pv-pod -- /bin/bash
```

In your shell, verify that nginx is serving the `index.html` file from the
hostPath volume:

```shell
# Be sure to run these 3 commands inside the root shell that comes from
# running "kubectl exec" in the previous step
apt update
apt install curl
curl http://localhost/
```

The output shows the text that you wrote to the `index.html` file on the
hostPath volume:

```
Hello from Kubernetes storage
```

If you see that message, you have successfully configured a Pod to
use storage from a PersistentVolumeClaim.

## Clean up

Delete the Pod, the PersistentVolumeClaim, and the PersistentVolume:

```shell
kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume
```
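
As an optional check, you can confirm that everything is gone; each of these
commands should now report a `NotFound` error:

```shell
kubectl get pod task-pv-pod
kubectl get pvc task-pv-claim
kubectl get pv task-pv-volume
```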

If you don't already have a shell open to the Node in your cluster,
open a new shell the same way that you did earlier.

In the shell on your Node, remove the file and directory that you created:

```shell
# This assumes that your Node uses "sudo" to run commands
# as the superuser
sudo rm /mnt/data/index.html
sudo rmdir /mnt/data
```

You can now close the shell to your Node.

{{% /capture %}}

@@ -201,10 +260,10 @@ metadata:

    pv.beta.kubernetes.io/gid: "1234"
```

When a Pod consumes a PersistentVolume that has a GID annotation, the annotated GID
is applied to all containers in the Pod in the same way that GIDs specified in the
Pod’s security context are. Every GID, whether it originates from a PersistentVolume
annotation or the Pod’s specification, is applied to the first process run in
each container.
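
For comparison, here is a minimal, hypothetical sketch of setting a GID
through the Pod's security context instead; the Pod name `gid-demo` and the
image are placeholders, and the group ID mirrors the annotation above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gid-demo
spec:
  securityContext:
    # This supplemental group is applied to the first process in each
    # container, just like a GID from a PersistentVolume annotation
    supplementalGroups: [1234]
  containers:
  - name: app
    image: nginx
```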
{{< note >}}
When a Pod consumes a PersistentVolume, the GIDs associated with the