Merge pull request #22816 from McCoyAle/statefulset
Update command output
Commit dcd9374466
@@ -91,7 +91,7 @@ kubectl apply -f https://k8s.io/examples/application/zookeeper/zookeeper.yaml
This creates the `zk-hs` Headless Service, the `zk-cs` Service,
the `zk-pdb` PodDisruptionBudget, and the `zk` StatefulSet.

```shell
```
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created

@@ -107,7 +107,7 @@ kubectl get pods -w -l app=zk

Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl.

```shell
```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      0/1       Pending   0          0s
zk-0      0/1       Pending   0         0s

@@ -143,7 +143,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname; done
The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The hostnames take the form of `<statefulset name>-<ordinal index>`. Because the `replicas` field of the `zk` StatefulSet is set to `3`, the Set's controller creates three Pods with their hostnames set to `zk-0`, `zk-1`, and
`zk-2`.

```shell
```
zk-0
zk-1
zk-2

@@ -159,7 +159,7 @@ for i in 0 1 2; do echo "myid zk-$i";kubectl exec zk-$i -- cat /var/lib/zookeepe

Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you can generate an identifier by adding 1 to the ordinal.

```shell
```
myid zk-0
1
myid zk-1

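The ordinal-to-identifier rule above can be sketched as a standalone loop (an illustration only, runnable locally; it is not the mechanism the tutorial's Pods use to derive their `myid`):

```shell
# Each server's myid is its StatefulSet ordinal plus one.
for i in 0 1 2; do
  echo "zk-$i -> myid $((i + 1))"
done
```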
@@ -177,7 +177,7 @@ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
The `zk-hs` Service creates a domain for all of the Pods,
`zk-hs.default.svc.cluster.local`.

```shell
```
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local

@@ -196,7 +196,7 @@ the file, the `1`, `2`, and `3` correspond to the identifiers in the
ZooKeeper servers' `myid` files. They are set to the FQDNs for the Pods in
the `zk` StatefulSet.

```shell
```
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log

@@ -219,7 +219,9 @@ Consensus protocols require that the identifiers of each participant be unique.

```shell
kubectl get pods -w -l app=zk
```

```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      0/1       Pending   0          0s
zk-0      0/1       Pending   0         0s

@@ -243,7 +245,7 @@ the FQDNs of the ZooKeeper servers will resolve to a single endpoint, and that
endpoint will be the unique ZooKeeper server claiming the identity configured
in its `myid` file.

```shell
```
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local

@@ -252,7 +254,7 @@ zk-2.zk-hs.default.svc.cluster.local

This ensures that the `servers` properties in the ZooKeepers' `zoo.cfg` files
represent a correctly configured ensemble.

```shell
```
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888

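The `server.N` entries combine the two conventions shown earlier (ordinal-based hostnames, ordinal+1 identifiers) with ZooKeeper's quorum port 2888 and leader-election port 3888. A local sketch that reproduces them:

```shell
# Derive the zoo.cfg server entries from the StatefulSet ordinals.
# 2888 is ZooKeeper's quorum port, 3888 its leader-election port.
for i in 0 1 2; do
  echo "server.$((i + 1))=zk-$i.zk-hs.default.svc.cluster.local:2888:3888"
done
```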
@@ -269,7 +271,8 @@ The command below executes the `zkCli.sh` script to write `world` to the path `/

```shell
kubectl exec zk-0 zkCli.sh create /hello world

```
```
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

@@ -285,7 +288,7 @@ kubectl exec zk-1 zkCli.sh get /hello
The data that you created on `zk-0` is available on all the servers in the
ensemble.

```shell
```
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

@@ -316,6 +319,9 @@ Use the [`kubectl delete`](/docs/reference/generated/kubectl/kubectl-commands/#d

```shell
kubectl delete statefulset zk
```

```
statefulset.apps "zk" deleted
```

@@ -327,7 +333,7 @@ kubectl get pods -w -l app=zk

When `zk-0` is fully terminated, use `CTRL-C` to terminate kubectl.

```shell
```
zk-2      1/1       Terminating   0         9m
zk-0      1/1       Terminating   0         11m
zk-1      1/1       Terminating   0         10m

@@ -358,7 +364,7 @@ kubectl get pods -w -l app=zk

Once the `zk-2` Pod is Running and Ready, use `CTRL-C` to terminate kubectl.

```shell
```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      0/1       Pending   0          0s
zk-0      0/1       Pending   0         0s

@@ -386,7 +392,7 @@ kubectl exec zk-2 zkCli.sh get /hello

Even though you terminated and recreated all of the Pods in the `zk` StatefulSet, the ensemble still serves the original value.

```shell
```
WATCHER::

WatchedEvent state:SyncConnected type:None path:null

@@ -430,7 +436,7 @@ kubectl get pvc -l app=zk

When the `StatefulSet` recreated its Pods, it remounted the Pods' PersistentVolumes.

```shell
```
NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
datadir-zk-0   Bound     pvc-bed742cd-bcb1-11e6-994f-42010a800002   20Gi       RWO           1h
datadir-zk-1   Bound     pvc-bedd27d2-bcb1-11e6-994f-42010a800002   20Gi       RWO           1h

@@ -464,6 +470,8 @@ Get the `zk` StatefulSet.

```shell
kubectl get sts zk -o yaml
```
```
…
command:
      - sh

@@ -506,7 +514,7 @@ kubectl exec zk-0 cat /usr/etc/zookeeper/log4j.properties
The logging configuration below will cause the ZooKeeper process to write all
of its logs to the standard output file stream.

```shell
```
zookeeper.root.logger=CONSOLE
zookeeper.console.threshold=INFO
log4j.rootLogger=${zookeeper.root.logger}

@@ -526,7 +534,7 @@ kubectl logs zk-0 --tail 20

You can view application logs written to standard out or standard error using `kubectl logs` and from the Kubernetes Dashboard.

```shell
```
2016-12-06 19:34:16,236 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@827] - Processing ruok command from /127.0.0.1:52740
2016-12-06 19:34:16,237 [myid:1] - INFO  [Thread-1136:NIOServerCnxn@1008] - Closed socket connection for client /127.0.0.1:52740 (no session established for client)
2016-12-06 19:34:26,155 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /127.0.0.1:52749

@@ -583,7 +591,7 @@ kubectl exec zk-0 -- ps -elf
As the `runAsUser` field of the `securityContext` object is set to 1000,
instead of running as root, the ZooKeeper process runs as the zookeeper user.

```shell
```
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
4 S zookeep+     1     0  0  80   0 -  1127 -      20:46 ?        00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
0 S zookeep+    27     1  0  80   0 - 1155556 -    20:46 ?        00:00:19 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg

@@ -599,7 +607,7 @@ kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data

Because the `fsGroup` field of the `securityContext` object is set to 1000, the ownership of the Pods' PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and write its data.

```shell
```
drwxr-sr-x 3 zookeeper zookeeper 4096 Dec  5 20:45 /var/lib/zookeeper/data
```

@@ -621,7 +629,8 @@ You can use `kubectl patch` to update the number of `cpus` allocated to the serv

```shell
kubectl patch sts zk --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/requests/cpu", "value":"0.3"}]'

```
```
statefulset.apps/zk patched
```

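The `-p` argument above is a JSON Patch document; printed as a here-doc purely for readability (a sketch, not an additional command to run):

```shell
# The same JSON Patch payload, expanded for readability.
cat <<'EOF'
[
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/resources/requests/cpu",
    "value": "0.3"
  }
]
EOF
```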
				
			
			@ -629,7 +638,8 @@ Use `kubectl rollout status` to watch the status of the update.
 | 
			
		|||
 | 
			
		||||
```shell
 | 
			
		||||
kubectl rollout status sts/zk
 | 
			
		||||
 | 
			
		||||
```
 | 
			
		||||
```
 | 
			
		||||
waiting for statefulset rolling update to complete 0 pods at revision zk-5db4499664...
 | 
			
		||||
Waiting for 1 pods to be ready...
 | 
			
		||||
Waiting for 1 pods to be ready...
 | 
			
		||||
| 
						 | 
				
			
@@ -648,7 +658,9 @@ Use the `kubectl rollout history` command to view a history of previous configur

```shell
kubectl rollout history sts/zk
```

```
statefulsets "zk"
REVISION
1

@@ -659,7 +671,9 @@ Use the `kubectl rollout undo` command to roll back the modification.

```shell
kubectl rollout undo sts/zk
```

```
statefulset.apps/zk rolled back
```

@@ -680,7 +694,7 @@ kubectl exec zk-0 -- ps -ef
The command used as the container's entry point has PID 1, and
the ZooKeeper process, a child of the entry point, has PID 27.

```shell
```
UID        PID  PPID  C STIME TTY          TIME CMD
zookeep+     1     0  0 15:03 ?        00:00:00 sh -c zkGenConfig.sh && zkServer.sh start-foreground
zookeep+    27     1  0 15:03 ?        00:00:03 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.9.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper: -Xmx2G -Xms2G -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/bin/../etc/zookeeper/zoo.cfg

@@ -700,7 +714,7 @@ kubectl exec zk-0 -- pkill java

The termination of the ZooKeeper process caused its parent process to terminate. Because the `RestartPolicy` of the container is Always, it restarted the parent process.

```shell
```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      1/1       Running   0          21m
zk-1      1/1       Running   0          20m

@@ -740,7 +754,7 @@ The Pod `template` for the `zk` `StatefulSet` specifies a liveness probe.
The probe calls a bash script that uses the ZooKeeper `ruok` four-letter
word to test the server's health.

```bash
```
OK=$(echo ruok | nc 127.0.0.1 $1)
if [ "$OK" == "imok" ]; then
    exit 0

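The probe's decision logic can be exercised without a live server by substituting a canned response for the `nc` call (a hypothetical stub; the real probe reads the reply over TCP from the server):

```shell
# Stub of the liveness check: assume a healthy server answers ruok with "imok".
OK="imok"   # stands in for: OK=$(echo ruok | nc 127.0.0.1 $1)
if [ "$OK" = "imok" ]; then
  echo "healthy"
else
  echo "unhealthy"
fi
```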
@@ -767,7 +781,9 @@ the ensemble are restarted.

```shell
kubectl get pod -w -l app=zk
```

```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      1/1       Running   0          1h
zk-1      1/1       Running   0          1h

@@ -832,7 +848,7 @@ for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo "";

All of the Pods in the `zk` `StatefulSet` are deployed on different nodes.

```shell
```
kubernetes-node-cxpk
kubernetes-node-a5aq
kubernetes-node-2g2d

@@ -891,7 +907,7 @@ kubectl get pdb zk-pdb
The `max-unavailable` field indicates to Kubernetes that at most one Pod from
the `zk` `StatefulSet` can be unavailable at any time.

```shell
```
NAME      MIN-AVAILABLE   MAX-UNAVAILABLE   ALLOWED-DISRUPTIONS   AGE
zk-pdb    N/A             1                 1
```

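A max-unavailable of 1 lines up with ZooKeeper's majority-quorum rule: an ensemble of N servers tolerates floor((N-1)/2) failures, so a three-server ensemble stays available with at most one server down. A quick check of the arithmetic:

```shell
# Majority quorum: an N-server ensemble tolerates floor((N-1)/2) failures.
N=3
echo "tolerated failures: $(( (N - 1) / 2 ))"
```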
@@ -906,7 +922,9 @@ In another terminal, use this command to get the nodes that the Pods are current

```shell
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
```

```
kubernetes-node-pb41
kubernetes-node-ixsl
kubernetes-node-i4c4

@@ -917,6 +935,9 @@ drain the node on which the `zk-0` Pod is scheduled.

```shell
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```

```
node "kubernetes-node-pb41" cordoned

WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-pb41, kube-proxy-kubernetes-node-pb41; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-o5elz

@@ -927,7 +948,7 @@ node "kubernetes-node-pb41" drained

As there are four nodes in your cluster, `kubectl drain` succeeds and the
`zk-0` Pod is rescheduled to another node.

```shell
```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      1/1       Running   2          1h
zk-1      1/1       Running   0          1h

@@ -949,7 +970,9 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node 

```shell
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```

```
"kubernetes-node-ixsl" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-ixsl, kube-proxy-kubernetes-node-ixsl; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-voc74
pod "zk-1" deleted
node "kubernetes-node-ixsl" drained

@@ -959,7 +982,9 @@ The `zk-1` Pod cannot be scheduled because the `zk` `StatefulSet` contains a `Po

```shell
kubectl get pods -w -l app=zk
```

```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      1/1       Running   2          1h
zk-1      1/1       Running   0          1h

@@ -987,6 +1012,8 @@ Continue to watch the Pods of the stateful set, and drain the node on which

```shell
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
```
```
node "kubernetes-node-i4c4" cordoned

WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-cloud-logging-kubernetes-node-i4c4, kube-proxy-kubernetes-node-i4c4; Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog

@@ -1007,7 +1034,7 @@ kubectl exec zk-0 zkCli.sh get /hello

The service is still available because its `PodDisruptionBudget` is respected.

```shell
```
WatchedEvent state:SyncConnected type:None path:null
world
cZxid = 0x200000002

@@ -1027,7 +1054,8 @@ Use [`kubectl uncordon`](/docs/reference/generated/kubectl/kubectl-commands/#unc

```shell
kubectl uncordon kubernetes-node-pb41

```
```
node "kubernetes-node-pb41" uncordoned
```

@@ -1035,7 +1063,8 @@ node "kubernetes-node-pb41" uncordoned

```shell
kubectl get pods -w -l app=zk

```
```
NAME      READY     STATUS    RESTARTS   AGE
zk-0      1/1       Running   2          1h
zk-1      1/1       Running   0          1h

@@ -1102,6 +1131,3 @@ You can use `kubectl drain` in conjunction with `PodDisruptionBudgets` to ensure
  used in this tutorial. Follow the necessary steps, based on your environment,
  storage configuration, and provisioning method, to ensure that all storage is
  reclaimed.