adds scaling and deployment section. fixes spelling errors. changes all
StatefulSet references to Stateful Set and all PetSet references to Pet
Set
Kenneth Owens 2016-11-17 13:00:33 -08:00 committed by Devin Donnelly
parent f073d532d3
commit 6c26a5d50a
2 changed files with 52 additions and 43 deletions

@@ -5,8 +5,9 @@ The Concepts section of the Kubernetes documentation is a work in progress.
#### Object Metadata
-* [StatefulSets](/docs/concepts/object-metadata/statefulsets/)
* [Annotations](/docs/concepts/object-metadata/annotations/)
+* [Stateful Sets](/docs/concepts/object-metadata/statefulsets/)
### What's next

@@ -2,28 +2,28 @@
---
{% capture overview %}
-**StatefulSets are a beta feature in 1.5. This feature replaces the deprecated
-PetSets feature from 1.4. Users of PetSets are referred to the 1.5
-[Upgrade Guide](/docs/task/upgrade-to-statefulset)
-for further information on how to upgrade existing PetSets to StatefulSets.**
+**Stateful Sets are a beta feature in 1.5. This feature replaces the deprecated
+Pet Sets feature from 1.4. Users of Pet Sets are referred to the 1.5
+[Upgrade Guide](/docs/tasks/stateful-set/upgrade-from-petsets-to-stateful-sets/)
+for further information on how to upgrade existing Pet Sets to Stateful Sets.**
-A StatefulSet is a Controller that ensures that, at most, a given number of
+A Stateful Set is a Controller that ensures that, at most, a given number of
replicas of a Pod are running at a time. Pods in a Stateful Set have an ordinal
-(a unique integer index in the StatefulSet), a stable, unique network id that is
-avialable in DNS, and stable, persistent storage.
+(a unique integer index in the Stateful Set), a stable, unique network id that is
+available in DNS, and stable, persistent storage.
-For a StatefulSet with N replicas, when Pods are being deployed, they are
+For a Stateful Set with N replicas, when Pods are being deployed, they are
created sequentially, in order from {0..N-1}. Before a new Pod is deployed, all
of its predecessors must be [Running and Ready](/docs/user-guide/pod-states).
When Pods are being deleted, they are terminated in reverse order, from {N-1..0},
and no pod is terminated until its successors have been terminated and are
-completely shutdown or its [Termination Grace Period](/docs/user-guide/pods/index#termination-of-pods))
+completely shut down or their [Termination Grace Periods](/docs/user-guide/pods/index#termination-of-pods)
has elapsed.
-The exmpale below demonstrates the components of a StatefulSet.
+The example below demonstrates the components of a Stateful Set.
* A [Headless Service](/docs/user-guide/services/#headless-services), named nginx, is used to control the network domain.
-* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
+* The Stateful Set, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods.
* The volumeClaimTemplates, will provide stable storage using [Persistent Volumes](/docs/user-guide/volumes/) provisioned by a
[Persistent Volume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md).
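The manifest these bullets describe is not shown in the hunks here. A minimal sketch of what it plausibly looked like against a 1.5 cluster follows; the `apps/v1beta1` apiVersion, the `nginx-slim` image, and the `anything` storage-class annotation are assumptions drawn from the surrounding prose, not lines from this commit.

```yaml
# Sketch of the web/nginx example the bullets above describe (assumed, not
# taken verbatim from this commit).
apiVersion: v1
kind: Service
metadata:
  name: nginx            # the Headless Service controlling the network domain
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None        # "None" is what makes the Service headless
  selector:
    app: nginx
---
apiVersion: apps/v1beta1  # StatefulSet was beta in Kubernetes 1.5
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"   # must name the governing Headless Service
  replicas: 3            # three Pods: web-0, web-1, web-2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:  # one PersistentVolumeClaim per Pod, named www-web-<ordinal>
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi   # the "1 GiB of provisioned storage" from the prose
```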
@@ -83,7 +83,7 @@ spec:
{% capture body %}
### When to Use a Stateful Set
-StatefulSets are valuable for applications that require one or more of the
+Stateful Sets are valuable for applications that require one or more of the
following.
* Stable, unique network identifiers.
@@ -92,46 +92,46 @@ following.
* Ordered, graceful deletion and termination.
As it is generally easier to manage, if an application doesn't require any of
-the above garuantees, and if it is feasible to do so, it should be deployed as
+the above guarantees, and if it is feasible to do so, it should be deployed as
a set of stateless replicas.
### Limitations
-* StatefulSet is a beta resource, not available in any Kubernetes release prior to 1.5.
+* Stateful Set is a beta resource, not available in any Kubernetes release prior to 1.5.
* As with all alpha/beta resources, it can be disabled through the `--runtime-config` option passed to the apiserver.
-* The only updatable field on a StatefulSet is `replicas`
-* The storage for a given pet must either be provisioned by a [Persistent Volume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
-* Deleting and/or scaling a StatefulSet down will *not* delete the volumes associated with the StatefulSet. This is done to ensure safety first, your data is more valuable than an auto purge of all related PetSet resources. **Deleting the Persistent Volume Claims will result in a deletion of the associated volumes**.
-* All StatefulSets currently require a [Headless Service](/docs/user-guide/services/#headless-services) to be responsible for the network identity of the pets. The user is responsible for this Service.
-* Updating an existing StatefulSet is currently a manual process, meaning you either need to deploy a new StatefulSet with the new image version, or orphan Pets one by one, update their image, and join them back to the cluster.
+* The only mutable field on a Stateful Set is `replicas`.
+* The storage for a given Pod must either be provisioned by a [Persistent Volume Provisioner](http://releases.k8s.io/{{page.githubbranch}}/examples/experimental/persistent-volume-provisioning/README.md) based on the requested `storage class`, or pre-provisioned by an admin.
+* Deleting and/or scaling a Stateful Set down will *not* delete the volumes associated with the Stateful Set. This is done to ensure safety first; your data is more valuable than an automatic purge of all related Stateful Set resources. **Deleting the Persistent Volume Claims will result in a deletion of the associated volumes**.
+* Stateful Sets currently require a [Headless Service](/docs/user-guide/services/#headless-services) to be responsible for the network identity of the Pods. The user is responsible for this Service.
+* Updating an existing Stateful Set is currently a manual process, meaning you either need to deploy a new Stateful Set with the new image version, or orphan Pods one by one, update their image, and join them back to the cluster.
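The `--runtime-config` mentioned in the bullets above is the apiserver flag that gates alpha/beta API groups. A sketch of how an admin might disable the beta resource follows; the exact group/version string for Stateful Set in 1.5 is an assumption, not part of this commit.

```shell
# Sketch only: Stateful Set is served by the apps/v1beta1 API group in 1.5,
# so a cluster admin could turn the beta resource off at apiserver startup.
# (Other apiserver flags elided.)
kube-apiserver --runtime-config=apps/v1beta1=false ...
```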
### Pod Identity
-StatefulSet Pods have a unique identity that is comprised of an ordinal, a
+Stateful Set Pods have a unique identity that is composed of an ordinal, a
stable network identity, and stable storage. The identity sticks to the Pod,
regardless of which node it's (re) scheduled on.
__Ordinal Index__
-For a StatefulSet with N replicas, each Pod in the StatefulSet will be
-assinged a integer ordinal, in the range [0,N), that is unique over the Set.
+For a Stateful Set with N replicas, each Pod in the Stateful Set will be
+assigned an integer ordinal, in the range [0,N), that is unique over the Set.
__Stable Network Id__
-The hostname of a Pod in a StatefulSet is derived from the name of the
-StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname
+The hostname of a Pod in a Stateful Set is derived from the name of the
+Stateful Set and the ordinal of the Pod. The pattern for the constructed hostname
is `$(statefulset name)-$(ordinal)`. The example above will create three Pods
named `web-0,web-1,web-2`.
-A StatelefulSet can use a [Headless Service](/docs/user-guide/services/#headless-services)
+A Stateful Set can use a [Headless Service](/docs/user-guide/services/#headless-services)
to control the domain of its Pods. The domain managed by this Service takes the form:
`$(service name).$(namespace).svc.cluster.local`, where "cluster.local"
is the [cluster domain](http://releases.k8s.io/{{page.githubbranch}}/build/kube-dns/README.md#how-do-i-configure-it).
As each Pod is created, it gets a matching DNS subdomain, taking the form:
`$(podname).$(governing service domain)`, where the governing service is defined
-by the `serviceName` field on the StatefulSet.
+by the `serviceName` field on the Stateful Set.
Here are some examples of choices for Cluster Domain, Service name,
-StatefulSet name, and how that affects the DNS names for the StatefulSet's Pods.
+Stateful Set name, and how that affects the DNS names for the Stateful Set's Pods.
-Cluster Domain | Service (ns/name) | PetSet (ns/name) | PetSet Domain | Pet DNS | Pet Hostname |
+Cluster Domain | Service (ns/name) | Stateful Set (ns/name) | Stateful Set Domain | Pod DNS | Pod Hostname |
-------------- | ----------------- | ----------------- | -------------- | ------- | ------------ |
cluster.local | default/nginx | default/web | nginx.default.svc.cluster.local | web-{0..N-1}.nginx.default.svc.cluster.local | web-{0..N-1} |
cluster.local | foo/nginx | foo/web | nginx.foo.svc.cluster.local | web-{0..N-1}.nginx.foo.svc.cluster.local | web-{0..N-1} |
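One way to watch these names resolve is to run a lookup from a short-lived Pod, assuming the default namespace from the first table row. A sketch follows; the `dns-test` pod name and the busybox image are illustrative, not from this commit.

```shell
# Resolve web-0's stable DNS subdomain from inside the cluster.
kubectl run -i --tty dns-test --image=busybox --restart=Never --rm -- \
  nslookup web-0.nginx.default.svc.cluster.local
```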
@@ -141,24 +141,32 @@ Note that Cluster Domain will be set to `cluster.local` unless [otherwise config
__Stable Storage__
-[Persistent Volumes](/docs/user-guide/volumes/), one for each VolumeClaimTemplate,
-are created based on the `volumeClaimTemplates` field of the StatefulSet. In the
-example above, each Pod will recieve a single persistent volume with a storage
-class of anything and 1 Gib of provisioned storage. When a Pod is (re)scheculed,
-its volume(s) are avialable on the node on which it is launched. Note that, the
+[Persistent Volumes](/docs/user-guide/volumes/), one for each Volume Claim Template,
+are created based on the `volumeClaimTemplates` field of the Stateful Set. In the
+example above, each Pod will receive a single persistent volume with a storage
+class of anything and 1 GiB of provisioned storage. When a Pod is (re)scheduled,
+its volume(s) are available on the node on which it is launched. Note that the
volumes associated with the Pods' Persistent Volume Claims are not deleted when
the Pods, or Stateful Set are deleted. This must be done manually.
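The retention behavior just described can be verified directly. A sketch follows, assuming the claims were created from the example's `www` template; names like `www-web-0` follow the `$(template name)-$(pod name)` pattern.

```shell
# Deleting (or scaling down) a Stateful Set leaves its Persistent Volume
# Claims, and therefore the data, in place.
kubectl delete statefulset web
kubectl get pvc -l app=nginx   # claims such as www-web-0 through www-web-2 remain

# Only deleting a claim releases its associated volume:
kubectl delete pvc www-web-0
```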
-### Deployment and Scaling Garuantees
-{% endcapture %}
-When the exmample above is created, three Pods will be deployed in the order
+### Deployment and Scaling Guarantees
+* For a Stateful Set with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
+* When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
+* Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
+* Before a Pod is terminated, all of its successors must be completely shut down.
+When the web example above is created, three Pods will be deployed in the order
web-0, web-1, web-2. web-1 will not be deployed before web-0 is
-[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be until
-web-1 is Running and Ready.
+[Running and Ready](/docs/user-guide/pod-states), and web-2 will not be deployed until
+web-1 is Running and Ready. If web-0 should fail, after web-1 is Running and Ready, but before
+web-2 is launched, web-2 will not be launched until web-0 is successfully relaunched and
+becomes Running and Ready.
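The ordered rollout described above can be observed with a watch. A sketch follows; `web.yaml` is a hypothetical filename for the manifest sketched earlier.

```shell
# Create the Service and Stateful Set, then watch Pods start strictly in
# ordinal order: web-0 first, web-1 only once web-0 is Running and Ready, etc.
kubectl create -f web.yaml
kubectl get pods -w -l app=nginx
```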
-{% capture whatsnext %}
-* Learn more about [this](...).
-* See this [related task](...).
+If a user were to scale the deployed example by patching the Stateful Set such that
+`replicas=1`, web-2 would be terminated first. web-1 would not be terminated until web-2
+is fully shut down and deleted. If web-0 were to fail after web-2 has been terminated and
+is completely shut down, but prior to web-1's termination, web-1 would not be terminated
+until web-0 is Running and Ready.
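The patching the paragraph above refers to can be applied with kubectl. A sketch follows; a JSON merge patch is one way to set `replicas`, and is an assumption rather than a command from this commit.

```shell
# Scale down to one replica; web-2 terminates first, and web-1 follows only
# after web-2 is fully shut down and deleted.
kubectl patch statefulset web -p '{"spec":{"replicas":1}}'
```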
{% endcapture %}
{% include templates/concept.md %}