lovejoy 2018-05-22 22:30:55 +08:00 committed by k8s-ci-robot
parent 0c5c9a5fdd
commit efbcfeee7c
3 changed files with 8 additions and 8 deletions


@@ -374,7 +374,7 @@ nginx-deployment-3066724191-eocby 0/1 ImagePullBackOff 0 6s
{{< note >}}
**Note:** The Deployment controller will stop the bad rollout automatically, and will stop scaling up the new
ReplicaSet. This depends on the rollingUpdate parameters (`maxUnavailable` specifically) that you have specified.
-Kubernetes by default sets the value to 1 and `spec.replicas` to 1 so if you haven't cared about setting those
+Kubernetes by default sets the value to 1 and `.spec.replicas` to 1 so if you haven't cared about setting those
parameters, your Deployment can have 100% unavailability by default! This will be fixed in Kubernetes in a future
version.
{{< /note >}}
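As a hedged sketch of the note above, a Deployment that sets both `.spec.replicas` and `maxUnavailable` explicitly (so one unavailable Pod is never 100% of the Deployment) might look like this; the name, labels, and image are illustrative, not taken from this commit:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # illustrative name
spec:
  replicas: 3                   # explicit .spec.replicas; one unavailable Pod is 1/3, not 100%
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # at most one Pod may be unavailable during a rollout
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
```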
@@ -702,7 +702,7 @@ due to some of the following factors:
* Application runtime misconfiguration
One way you can detect this condition is to specify a deadline parameter in your Deployment spec:
-([`spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `spec.progressDeadlineSeconds` denotes the
+([`.spec.progressDeadlineSeconds`](#progress-deadline-seconds)). `.spec.progressDeadlineSeconds` denotes the
number of seconds the Deployment controller waits before indicating (in the Deployment status) that the
Deployment progress has stalled.
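As a minimal sketch, the deadline might be set like this (600 seconds is only an illustrative value, not a recommendation from this page):

```yaml
spec:
  progressDeadlineSeconds: 600  # report the Deployment as stalled after 10 minutes without progress
```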


@@ -198,7 +198,7 @@ status check.
{{< note >}}
**Note:** Due to a known issue [#54870](https://github.com/kubernetes/kubernetes/issues/54870),
-when the `spec.template.spec.restartPolicy` field is set to "`OnFailure`", the
+when the `.spec.template.spec.restartPolicy` field is set to "`OnFailure`", the
back-off limit may be ineffective. As a short-term workaround, set the restart
policy for the embedded template to "`Never`".
{{< /note >}}
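The workaround in the note above can be sketched as follows; the Job name, image, and command are illustrative assumptions, not from this commit:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job             # illustrative name
spec:
  backoffLimit: 4               # the back-off limit this workaround keeps effective
  template:
    spec:
      restartPolicy: Never      # workaround: avoid "OnFailure" while issue #54870 is open
      containers:
      - name: main
        image: busybox          # illustrative image
        command: ["sh", "-c", "exit 0"]
```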
@@ -299,12 +299,12 @@ Here, `W` is the number of work items.
### Specifying your own pod selector
-Normally, when you create a job object, you do not specify `spec.selector`.
+Normally, when you create a job object, you do not specify `.spec.selector`.
The system defaulting logic adds this field when the job is created.
It picks a selector value that will not overlap with any other jobs.
However, in some cases, you might need to override this automatically set selector.
-To do this, you can specify the `spec.selector` of the job.
+To do this, you can specify the `.spec.selector` of the job.
Be very careful when doing this. If you specify a label selector which is not
unique to the pods of that job, and which matches unrelated pods, then pods of the unrelated
@@ -312,7 +312,7 @@ job may be deleted, or this job may count other pods as completing it, or one or
of the jobs may refuse to create pods or run to completion. If a non-unique selector is
chosen, then other controllers (e.g. ReplicationController) and their pods may behave
in unpredictable ways too. Kubernetes will not stop you from making a mistake when
-specifying `spec.selector`.
+specifying `.spec.selector`.
Here is an example of a case when you might want to use this feature.
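As a hedged sketch of overriding the selector: when you set `.spec.selector` yourself, you also need to set `spec.manualSelector: true` so the API server accepts it. The label key and value below are illustrative; the selector must be unique to this Job's Pods, and the template labels must match it:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job                          # illustrative name
spec:
  manualSelector: true                  # required when setting .spec.selector yourself
  selector:
    matchLabels:
      my-unique-label: my-job-instance  # illustrative; must not match any other job's pods
  template:
    metadata:
      labels:
        my-unique-label: my-job-instance  # must match .spec.selector
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox                  # illustrative image
        command: ["sh", "-c", "exit 0"]
```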


@@ -108,7 +108,7 @@ spec:
```
## Pod Selector
-You must set the `spec.selector` field of a StatefulSet to match the labels of its `.spec.template.metadata.labels`. Prior to Kubernetes 1.8, the `spec.selector` field was defaulted when omitted. In 1.8 and later versions, failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation.
+You must set the `.spec.selector` field of a StatefulSet to match the labels of its `.spec.template.metadata.labels`. Prior to Kubernetes 1.8, the `.spec.selector` field was defaulted when omitted. In 1.8 and later versions, failing to specify a matching Pod Selector will result in a validation error during StatefulSet creation.
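A minimal sketch of a StatefulSet whose selector matches its template labels, as the paragraph above requires (the name, label, and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                   # illustrative name
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx              # must match .spec.template.metadata.labels below
  template:
    metadata:
      labels:
        app: nginx            # matches .spec.selector.matchLabels above
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1    # illustrative image
```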
## Pod Identity
StatefulSet Pods have a unique identity that is comprised of an ordinal, a
@@ -217,7 +217,7 @@ create new Pods that reflect modifications made to a StatefulSet's `.spec.templa
### Rolling Updates
The `RollingUpdate` update strategy implements automated, rolling update for the Pods in a
-StatefulSet. It is the default strategy when `spec.updateStrategy` is left unspecified. When a StatefulSet's `.spec.updateStrategy.type` is set to `RollingUpdate`, the
+StatefulSet. It is the default strategy when `.spec.updateStrategy` is left unspecified. When a StatefulSet's `.spec.updateStrategy.type` is set to `RollingUpdate`, the
StatefulSet controller will delete and recreate each Pod in the StatefulSet. It will proceed
in the same order as Pod termination (from the largest ordinal to the smallest), updating
each Pod one at a time. It will wait until an updated Pod is Running and Ready prior to