Merge pull request #467 from victtsl/update-readme

Updating README to reflect changes from #442
commit 57c3fdc13b
Authored by Kubernetes Prow Robot on 2023-02-09 02:10:24 -08:00; committed by GitHub
1 changed file with 14 additions and 14 deletions


@@ -29,7 +29,7 @@ $ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-gce-pv.y
 # On Azure (create Azure Disk PVC):
 $ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-azure-pv.yaml
 # Common steps after creating either GCE PD or Azure Disk PVC:
-$ kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
+$ kubectl create -f examples/staging/volumes/nfs/nfs-server-deployment.yaml
 $ kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
 # get the cluster IP of the server using the following command
 $ kubectl describe services nfs-server
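
The renamed `nfs-server-deployment.yaml` itself is not part of this diff. As a rough sketch of what the Deployment equivalent of the old replication controller plausibly contains (the image, PVC name, and port list are assumptions; the `role: nfs-server` label matches the readiness check quoted in a later hunk):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server        # matched by `kubectl get pods -l role=nfs-server` later in the README
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: registry.k8s.io/volume-nfs:0.8   # assumed image; take the real one from the manifest in the repo
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true    # the in-pod NFS server needs elevated privileges
        volumeMounts:
        - name: nfs-export
          mountPath: /exports
      volumes:
      - name: nfs-export
        persistentVolumeClaim:
          claimName: nfs-pv-provisioning-demo   # assumed: the PVC created in the provisioner step
```

The mechanical change when converting a v1 `ReplicationController` is small: `apiVersion` becomes `apps/v1`, `kind` becomes `Deployment`, and the flat `selector` map becomes an explicit `spec.selector.matchLabels` block.
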
@@ -37,7 +37,7 @@ $ kubectl describe services nfs-server
 $ kubectl create -f examples/staging/volumes/nfs/nfs-pv.yaml
 $ kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml
 # run a fake backend
-$ kubectl create -f examples/staging/volumes/nfs/nfs-busybox-rc.yaml
+$ kubectl create -f examples/staging/volumes/nfs/nfs-busybox-deployment.yaml
 # get pod name from this command
 $ kubectl get pod -l name=nfs-busybox
 # use the pod name to check the test file
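
The `nfs-pv.yaml` and `nfs-pvc.yaml` pair referenced above is unchanged by this commit. A minimal sketch of the shape of a static NFS PV bound to a claim (the names, the 1Mi size, and the placeholder server IP are assumptions; the README substitutes the cluster IP found via `kubectl describe services nfs-server`):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi              # assumed size
  accessModes:
    - ReadWriteMany           # NFS allows many concurrent writers
  nfs:
    server: 10.244.1.4        # placeholder: substitute the nfs-server cluster IP
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # assumed: bind to the static PV above rather than a provisioner
  resources:
    requests:
      storage: 1Mi
```
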
@@ -46,19 +46,19 @@ $ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
 
 ## Example of NFS based persistent volume
 
-See [NFS Service and Replication Controller](nfs-web-rc.yaml) for a quick example of how to use an NFS
-volume claim in a replication controller. It relies on the
+See [NFS Service and Deployment](nfs-web-deployment.yaml) for a quick example of how to use an NFS
+volume claim in a deployment. It relies on the
 [NFS persistent volume](nfs-pv.yaml) and
 [NFS persistent volume claim](nfs-pvc.yaml) in this example as well.
 
 ## Complete setup
 
-The example below shows how to export a NFS share from a single pod replication
-controller and import it into two replication controllers.
+The example below shows how to export a NFS share from a single pod
+deployment and import it into two deployments.
 
 ### NFS server part
 
-Define [the NFS Service and Replication Controller](nfs-server-rc.yaml) and
+Define [the NFS Service and Deployment](nfs-server-deployment.yaml) and
 [NFS service](nfs-server-service.yaml):
 
 The NFS server exports an auto-provisioned persistent volume backed by GCE PD or Azure Disk. If you are on GCE, create a GCE PD-based PVC:
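
The hunk above closes on the provisioner step. That PVC is also untouched by this commit; a hypothetical sketch of what an auto-provisioning claim such as `nfs-server-gce-pv.yaml` would contain (the name and size are assumptions; the cloud provider satisfies the claim with a GCE PD or, for the Azure variant, an Azure Disk):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo   # assumed name; must match the server pod's claimName
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi               # assumed size for the provisioned disk
```
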
@@ -76,7 +76,7 @@ $ kubectl create -f examples/staging/volumes/nfs/provisioner/nfs-server-azure-pv
 Then using the created PVC, create an NFS server and service:
 
 ```console
-$ kubectl create -f examples/staging/volumes/nfs/nfs-server-rc.yaml
+$ kubectl create -f examples/staging/volumes/nfs/nfs-server-deployment.yaml
 $ kubectl create -f examples/staging/volumes/nfs/nfs-server-service.yaml
 ```
 
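
`nfs-server-service.yaml` is unchanged here. For orientation, an in-cluster NFS service conventionally exposes the nfs, mountd, and rpcbind ports; a sketch, with the selector assumed to match the server pod's label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server     # assumed: routes traffic to the server pod sketched earlier
```
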
@@ -85,7 +85,7 @@ by checking `kubectl get pods -l role=nfs-server`.
 
 ### Create the NFS based persistent volume claim
 
-The [NFS busybox controller](nfs-busybox-rc.yaml) uses a simple script to
+The [NFS busybox deployment](nfs-busybox-deployment.yaml) uses a simple script to
 generate data written to the NFS server we just started. First, you'll need to
 find the cluster IP of the server:
 
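
The README finds the cluster IP with `kubectl describe services nfs-server`. For scripting, a JSONPath query prints just the address to paste into `nfs-pv.yaml`:

```console
$ kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}'
```
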
@@ -110,11 +110,11 @@ $ kubectl create -f examples/staging/volumes/nfs/nfs-pvc.yaml
 
 ## Setup the fake backend
 
-The [NFS busybox controller](nfs-busybox-rc.yaml) updates `index.html` on the
+The [NFS busybox deployment](nfs-busybox-deployment.yaml) updates `index.html` on the
 NFS server every 10 seconds. Let's start that now:
 
 ```console
-$ kubectl create -f examples/staging/volumes/nfs/nfs-busybox-rc.yaml
+$ kubectl create -f examples/staging/volumes/nfs/nfs-busybox-deployment.yaml
 ```
 
 Conveniently, it's also a `busybox` pod, so we can get an early check
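
A sketch of what the renamed `nfs-busybox-deployment.yaml` plausibly contains, with the write loop simplified to a fixed 10-second sleep to match the prose (the replica count, claim name, and exact command are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 2                  # assumed replica count
  selector:
    matchLabels:
      name: nfs-busybox        # matched by `kubectl get pod -l name=nfs-busybox` above
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - sh
        - -c
        - "while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 10; done"
        volumeMounts:
        - name: nfs
          mountPath: /mnt      # the NFS share; index.html lands here
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs       # assumed: the claim from nfs-pvc.yaml
```
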
@@ -137,14 +137,14 @@ and make sure the `describe services` command above had endpoints listed
 
 ### Setup the web server
 
-The [web server controller](nfs-web-rc.yaml) is an another simple replication
-controller demonstrates reading from the NFS share exported above as a NFS
+The [web server deployment](nfs-web-deployment.yaml) is an another simple
+deployment demonstrates reading from the NFS share exported above as a NFS
 volume and runs a simple web server on it.
 
 Define the pod:
 
 ```console
-$ kubectl create -f examples/staging/volumes/nfs/nfs-web-rc.yaml
+$ kubectl create -f examples/staging/volumes/nfs/nfs-web-deployment.yaml
 ```
 
 This creates two pods, each of which serve the `index.html` from above. We can
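
To round out the picture, a hypothetical `nfs-web-deployment.yaml` consistent with the text above: two replicas serving the shared `index.html` read-only (the `role: web-frontend` label and the nginx docroot path are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-web
spec:
  replicas: 2                  # "This creates two pods" in the README text
  selector:
    matchLabels:
      role: web-frontend       # assumed label
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: nfs
          mountPath: /usr/share/nginx/html   # nginx default docroot, mounted read-only
          readOnly: true
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs       # assumed: the same claim the busybox writer uses
```
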