Merge pull request #5568 from jianglingxia/jlx-92115

some error in statefulset
Tim(Xiaoyu) Zhang 2017-09-22 15:07:23 +08:00 committed by GitHub
commit c4180cb5c4
1 changed file with 6 additions and 12 deletions

@@ -262,8 +262,7 @@ www-web-0 Bound pvc-15c268c7-b507-11e6-932f-42010a800002 1Gi RWO
 www-web-1 Bound pvc-15c79307-b507-11e6-932f-42010a800002 1Gi RWO 48s
 ```
 The StatefulSet controller created two PersistentVolumeClaims that are
-bound to two [PersistentVolumes](/docs/concepts/storage/volumes/). As the
-cluster used in this tutorial is configured to dynamically provision
+bound to two [PersistentVolumes](/docs/concepts/storage/persistent-volumes/). As the cluster used in this tutorial is configured to dynamically provision
 PersistentVolumes, the PersistentVolumes were created and bound automatically.
 The NGINX webservers, by default, will serve an index file at
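
A quick way to re-verify this hunk locally is sketched below: the claim names `www-web-0` and `www-web-1` come straight from the output quoted above, while the second command only assumes the cluster's default StorageClass provisioned the backing volumes.

```
# List the two PersistentVolumeClaims created from the StatefulSet's volumeClaimTemplates
kubectl get pvc www-web-0 www-web-1

# List the dynamically provisioned PersistentVolumes they are bound to
kubectl get pv
```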
@@ -330,7 +329,7 @@ web-1
 Even though `web-0` and `web-1` were rescheduled, they continue to serve their
 hostnames because the PersistentVolumes associated with their
-PersistentVolumeClaims are remounted to their `volumeMount`s. No matter what
+PersistentVolumeClaims are remounted to their `volumeMounts`. No matter what
 node `web-0`and `web-1` are scheduled on, their PersistentVolumes will be
 mounted to the appropriate mount points.
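
Re-reading the served hostnames after rescheduling might look like the sketch below; it assumes the Pods are still named `web-0` and `web-1` and that the container image used elsewhere in this tutorial provides `curl`.

```
# Each Pod should still serve the hostname stored on its PersistentVolume
for i in 0 1; do kubectl exec web-$i -- curl -s localhost; done
```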
@@ -338,8 +337,7 @@ mounted to the appropriate mount points.
 Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
 This is accomplished by updating the `replicas` field. You can use either
 [`kubectl scale`](/docs/user-guide/kubectl/{{page.version}}/#scale) or
-[`kubectl patch`](/docs/user-guide/kubectl/{{page.version}}/#patch) to scale a Stateful
-Set.
+[`kubectl patch`](/docs/user-guide/kubectl/{{page.version}}/#patch) to scale a StatefulSet.
 ### Scaling Up
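
For the paragraph fixed above, either command below updates `replicas`; the StatefulSet name `web` is inferred from the `web-0`/`web-1` Pod names used throughout the tutorial, and the replica counts are only examples.

```
# Scale up with kubectl scale
kubectl scale statefulset web --replicas=5

# ...or scale back down with kubectl patch
kubectl patch statefulset web -p '{"spec":{"replicas":3}}'
```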
@@ -440,10 +438,7 @@ www-web-4 Bound pvc-e11bb5f8-b508-11e6-932f-42010a800002 1Gi RWO
 ```
 There are still five PersistentVolumeClaims and five PersistentVolumes.
-When exploring a Pod's [stable storage](#stable-storage), we saw that the
-PersistentVolumes mounted to the Pods of a StatefulSet are not deleted when
-the StatefulSet's Pods are deleted. This is still true when Pod deletion is
-caused by scaling the StatefulSet down.
+When exploring a Pod's [stable storage](#writing-to-stable-storage), we saw that the PersistentVolumes mounted to the Pods of a StatefulSet are not deleted when the StatefulSet's Pods are deleted. This is still true when Pod deletion is caused by scaling the StatefulSet down.
 ## Updating StatefulSets
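
The rejoined sentence can be double-checked after a scale-down; the claim names follow the `www-web-<ordinal>` pattern visible in the hunk header, with `www-web-2` and `www-web-3` assumed to fill out the five.

```
# All five claims should still exist even though the Pods they served were deleted
kubectl get pvc www-web-0 www-web-1 www-web-2 www-web-3 www-web-4
```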
@@ -800,8 +795,7 @@ continue the update process.
 ## Deleting StatefulSets
 StatefulSet supports both Non-Cascading and Cascading deletion. In a
-Non-Cascading Delete, the StatefulSet's Pods are not deleted when the Stateful
-Set is deleted. In a Cascading Delete, both the StatefulSet and its Pods are
+Non-Cascading Delete, the StatefulSet's Pods are not deleted when the StatefulSet is deleted. In a Cascading Delete, both the StatefulSet and its Pods are
 deleted.
 ### Non-Cascading Delete
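
A minimal sketch of the two deletion modes described in the corrected paragraph, again using the assumed StatefulSet name `web` (newer kubectl releases spell the first flag `--cascade=orphan`):

```
# Non-Cascading Delete: remove the StatefulSet but leave its Pods running
kubectl delete statefulset web --cascade=false

# Cascading Delete: remove the StatefulSet and its Pods
kubectl delete statefulset web
```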
@@ -945,7 +939,7 @@ web-1 0/1 Terminating 0 29m
 ```
-As you saw in the [Scaling Down](#ordered-pod-termination) section, the Pods
+As you saw in the [Scaling Down](#scaling-down) section, the Pods
 are terminated one at a time, with respect to the reverse order of their ordinal
 indices. Before terminating a Pod, the StatefulSet controller waits for
 the Pod's successor to be completely terminated.
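
Watching the reverse-ordinal termination this hunk describes could be done with something like the following; the `app=nginx` label selector is an assumption carried over from the rest of the tutorial rather than something shown in this diff.

```
# Watch web-1 terminate before web-0 during a cascading delete or scale-down
kubectl get pods -w -l app=nginx
```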