Minor refactor of Ceph RBD provisioning docs.

Improved clarity of documentation, expanded on how to translate Ceph settings into secrets.
This commit is contained in:
Elson Rodriguez 2016-08-23 10:58:50 -07:00
parent 4d9da82ff9
commit 99b6481e10
2 changed files with 41 additions and 10 deletions

#### Ceph RBD
This section explains how to configure and use the Ceph RBD provisioner.
##### Prerequisites
For this to work you must have a functional Ceph cluster, and the `rbd` command-line utility must be installed on every host (or in every container) where `kube-controller-manager` or `kubelet` runs.
##### Configuration
First we must identify the Ceph client admin key. This is usually found in `/etc/ceph/ceph.client.admin.keyring` on your Ceph cluster nodes. The file will look something like this:
```
[client.admin]
key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
auid = 0
caps mds = "allow"
caps mon = "allow *"
caps osd = "allow *"
```
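The `key` value is what we need. Since the keyring uses INI syntax, it can also be extracted programmatically rather than copied by hand. A minimal sketch (the sample keyring below is the placeholder from above, not a real key):

```python
import configparser

# Placeholder keyring contents, as shown above; in practice read the real
# file, e.g. open("/etc/ceph/ceph.client.admin.keyring").read().
KEYRING = """
[client.admin]
key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
auid = 0
caps mds = "allow"
caps mon = "allow *"
caps osd = "allow *"
"""

def admin_key(keyring_text: str) -> str:
    # Strip indentation first so configparser does not treat indented
    # lines (common in real keyring files) as value continuations.
    flat = "\n".join(line.strip() for line in keyring_text.splitlines())
    parser = configparser.ConfigParser()
    parser.read_string(flat)
    return parser["client.admin"]["key"]

print(admin_key(KEYRING))  # -> AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
```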
From this key value we will create a Secret. The Ceph admin Secret must be created in the namespace the `StorageClass` names via `adminSecretNamespace`; in this example we use `kube-system`.
```
$ kubectl create secret generic ceph-secret-admin --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' --namespace=kube-system
```
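Equivalently, the Secret can be written declaratively. A sketch of such a manifest (the base64 value is a placeholder; note that some Kubernetes versions expect RBD secrets to carry `type: kubernetes.io/rbd`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # base64-encoded admin key, e.g. the output of:
  #   echo -n 'AQBf...==' | base64
  key: <base64-encoded key>
```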
Next, modify `examples/experimental/persistent-volume-provisioning/rbd/rbd-storage-class.yaml` to reflect your environment, particularly the `monitors` field, then create the RBD `StorageClass`:
```
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/rbd-storage-class.yaml
```
The kube-controller-manager is now able to provision storage; however, we still need a key to map the RBD images. Mapping should be done with a non-privileged key. If you have existing users, you can list all keys by running `ceph auth list` on your Ceph cluster with the admin key. For this example we will create a new user and pool:
```
$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
[client.kube]
key = AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
```
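This user key must also be stored as a Secret in each namespace where claims will be created, under the name the `StorageClass`'s `userSecretName` parameter points to. A sketch, assuming the claim namespace is `myns` and the secret name is `ceph-secret-user` (the base64 value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
  namespace: myns
type: "kubernetes.io/rbd"
data:
  # base64-encoded client.kube key
  key: <base64-encoded key>
```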
##### Usage
Once configured, create a PVC in a user's namespace (e.g. myns):
```
$ kubectl create -f examples/experimental/persistent-volume-provisioning/claim1.json --namespace=myns
```
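For reference, a claim requesting this class looks roughly like the following; Kubernetes of this era selected the class via the beta annotation rather than a `storageClassName` field (sketch only — see `claim1.json` in the repo for the actual example):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```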
Eventually the PVC will be bound to a dynamically provisioned PV backed by a matching RBD image:
```
$ kubectl describe pvc --namespace=myns
...
Capacity:	3Gi
Message:
Source:
Type: RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
CephMonitors: [127.0.0.1:6789]
RBDImage: kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
FSType:
RBDPool: kube
...
No events.
```
With our storage provisioned, we can now create a Pod to use the PVC:
```
$ kubectl create -f examples/experimental/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
```
Now our pod has an RBD mount!
```
$ export PODNAME=`kubectl get pod --selector='role=server' --namespace=myns --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
$ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
/dev/rbd1 2.9G 4.5M 2.8G 1% /var/lib/www/html
```

`examples/experimental/persistent-volume-provisioning/rbd/rbd-storage-class.yaml` (excerpt):

```yaml
metadata:
  name: slow
provisioner: kubernetes.io/rbd
parameters:
  monitors: 127.0.0.1:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: "kube-system"
```