Review zookeeper tutorial and fix command error (#31914)

* Misplaced command result

On the ZooKeeper tutorial page https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#surviving-maintenance the command's result is concatenated onto the command itself:

kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned

* Review zookeeper tutorial

https://github.com/kubernetes/website/pull/31873

Review done!
parent cd26a2bb6b
commit b6f0d8ffbc
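The gist of the fix, as a small sketch built from the tutorial's own commands: the inner command substitution expands to a bare node name, so the trailing `"kubernetes-node-ixsl" cordoned` text on the old page was pasted output, not arguments.

```shell
# Expands to the node hosting zk-1, for example "kubernetes-node-ixsl".
kubectl get pod zk-1 --template {{.spec.nodeName}}

# The drain command therefore ends at --delete-emptydir-data;
# `... cordoned` is what kubectl drain prints, not an argument.
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) \
  --ignore-daemonsets --force --delete-emptydir-data
```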
@@ -442,7 +442,7 @@ datadir-zk-2 Bound pvc-bee0817e-bcb1-11e6-994f-42010a800002 20Gi R

The `volumeMounts` section of the `StatefulSet`'s container `template` mounts the PersistentVolumes in the ZooKeeper servers' data directories.

-```shell
+```yaml
volumeMounts:
- name: datadir
  mountPath: /var/lib/zookeeper
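Reviewer note: the `datadir` mount above is backed by the StatefulSet's `volumeClaimTemplates`. A quick way to see the bound claims, assuming the tutorial's `app=zk` label carries over to the claims:

```shell
# Lists the PersistentVolumeClaims backing the ZooKeeper servers
# (datadir-zk-0 through datadir-zk-2, 20Gi each per the hunk header).
kubectl get pvc -l app=zk
```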
@@ -661,6 +661,8 @@ Use the `kubectl rollout history` command to view a history or previous configur

kubectl rollout history sts/zk
```

The output is similar to this:

```
statefulsets "zk"
REVISION
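A specific entry from that history can be expanded as well; a small sketch using kubectl's standard `--revision` flag (the revision number is illustrative):

```shell
# Shows the Pod template recorded for revision 2 of the StatefulSet.
kubectl rollout history sts/zk --revision=2
```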
@@ -674,6 +676,8 @@ Use the `kubectl rollout undo` command to roll back the modification.

kubectl rollout undo sts/zk
```

The output is similar to this:

```
statefulset.apps/zk rolled back
```
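After the undo, the tutorial's `kubectl rollout status` form can confirm the rollback completed:

```shell
# Blocks until the rolled-back revision has been applied to every Pod.
kubectl rollout status sts/zk
```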
@@ -773,7 +777,7 @@ kubectl get pod -w -l app=zk

In another window, use the following command to delete the `zookeeper-ready` script from the file system of Pod `zk-0`.

```shell
-kubectl exec zk-0 -- rm /usr/bin/zookeeper-ready
+kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready
```

When the liveness probe for the ZooKeeper process fails, Kubernetes will
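The restart that follows the failed probe shows up in the Pod's events; a quick, non-authoritative check (exact event text varies by version):

```shell
# The Events section should show the liveness probe failing and the
# kubelet restarting the ZooKeeper container in zk-0.
kubectl describe pod zk-0
```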
@@ -926,6 +930,8 @@ In another terminal, use this command to get the nodes that the Pods are current

for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
```

The output is similar to this:

```
kubernetes-node-pb41
kubernetes-node-ixsl
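If the Go-template syntax is unfamiliar, an equivalent JSONPath sketch produces the same list:

```shell
# Prints the node hosting each ZooKeeper Pod, one per line.
for i in 0 1 2; do
  kubectl get pod "zk-$i" -o jsonpath='{.spec.nodeName}{"\n"}'
done
```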
@@ -939,6 +945,8 @@ drain the node on which the `zk-0` Pod is scheduled.

kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

The output is similar to this:

```
node "kubernetes-node-pb41" cordoned
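A drained node stays cordoned until explicitly uncordoned; that state is easy to confirm before moving on:

```shell
# A cordoned node reports SchedulingDisabled in its STATUS column.
kubectl get nodes
```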
@@ -971,15 +979,19 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.

```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

The output is similar to this:

```
"kubernetes-node-ixsl" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
node "kubernetes-node-ixsl" drained
```

The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `PodAntiAffinity` rule preventing
co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a Pending state.
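The `PodAntiAffinity` rule mentioned here sits in the Pod template of the StatefulSet; assuming the tutorial's manifest, it can be dumped directly:

```shell
# Prints the anti-affinity rule that keeps ZooKeeper Pods on distinct
# nodes (the tutorial uses topologyKey kubernetes.io/hostname).
kubectl get sts zk -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
```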
@@ -987,6 +999,8 @@ co-location of the Pods, and as only two nodes are schedulable, the Pod will rem

kubectl get pods -w -l app=zk
```

The output is similar to this:

```
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   2          1h
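To isolate just the stuck Pod instead of watching the whole set, a field selector works too:

```shell
# Lists only the ZooKeeper Pods that are still Pending (zk-1 here).
kubectl get pods -l app=zk --field-selector=status.phase=Pending
```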
@@ -1017,6 +1031,8 @@ Continue to watch the Pods of the StatefulSet, and drain the node on which

kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

The output is similar to this:

```
node "kubernetes-node-i4c4" cordoned
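In the tutorial this drain cordons the node and then waits, because evicting `zk-2` would violate the ZooKeeper PodDisruptionBudget. Assuming the tutorial's `zk-pdb` name, the budget can be checked while the drain retries:

```shell
# With zk-1 still Pending, ALLOWED DISRUPTIONS is 0, so the eviction
# of zk-2 is refused until the ensemble is healthy again.
kubectl get pdb zk-pdb
```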
@@ -1060,6 +1076,8 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc

kubectl uncordon kubernetes-node-pb41
```

The output is similar to this:

```
node "kubernetes-node-pb41" uncordoned
```
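Once the cordon is lifted, `zk-1` should leave Pending and land on the freed node; the node itself can be checked with:

```shell
# STATUS returns to plain Ready once the node is uncordoned.
kubectl get node kubernetes-node-pb41
```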
@@ -1070,6 +1088,8 @@ node "kubernetes-node-pb41" uncordoned

kubectl get pods -w -l app=zk
```

The output is similar to this:

```
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   2          1h
@@ -1103,7 +1123,7 @@ Attempt to drain the node on which `zk-2` is scheduled.

kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

-The output:
+The output is similar to this:

```
node "kubernetes-node-i4c4" already cordoned
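Even while this drain is being refused, the ensemble keeps serving; a sanity read, assuming the `/hello` entry written earlier in the tutorial, would be:

```shell
# The value stored during the sanity test should still return,
# since the PodDisruptionBudget kept a quorum alive.
kubectl exec zk-0 -- zkCli.sh get /hello
```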
@@ -1121,6 +1141,8 @@ Uncordon the second node to allow `zk-2` to be rescheduled.

kubectl uncordon kubernetes-node-ixsl
```

The output is similar to this:

```
node "kubernetes-node-ixsl" uncordoned
```
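With `zk-2` Running again, the same read against it closes the loop on the maintenance exercise (again assuming the earlier `/hello` entry):

```shell
# ZooKeeper replicated the entry, so the data survived maintenance
# across all three nodes.
kubectl exec zk-2 -- zkCli.sh get /hello
```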