update misspelled words and code block

Xingcai Zhang 2017-10-21 21:32:06 +08:00
parent e4c1caeaad
commit 9f0e4610a3
4 changed files with 4 additions and 4 deletions

View File

@@ -13,7 +13,7 @@ prevent future challenges in upgrading.
1. Ensure ThirdPartyResource APIs operate consistently with first party
Kubernetes APIs.
2. Enable ThirdPartyResources to specify how they will appear in API
-discovery to be consistent with other resources and avoid naming confilcts
+discovery to be consistent with other resources and avoid naming conflicts
3. Move TPR into their own API group to allow the extensions group to be
[removed](https://github.com/kubernetes/kubernetes/issues/43214)
4. Support cluster scoped TPR resources

View File

@@ -427,7 +427,7 @@ its feasibility, we construct such a scheme here. However, this proposal does
not mandate its use.
Given a hash function with output size `HashSize` defined
-as `func H(s srtring) [HashSize] byte`, in order to resolve collisions we
+as `func H(s string) [HashSize] byte`, in order to resolve collisions we
define a new function `func H'(s string, n int) [HashSize]byte` where `H'`
returns the result of invoking `H` on the concatenation of `s` with the string
value of `n`. We define a third function
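As an illustration of the collision-resolution scheme quoted in this hunk, here is a minimal Go sketch. The choice of SHA-256 for `H` (and therefore `HashSize`), the exported name `HPrime` standing in for `H'`, and the sample input are assumptions made for the example; the proposal deliberately does not mandate a particular hash function.

```go
// Illustrative sketch only; the proposal does not prescribe a concrete hash.
// SHA-256 (and thus HashSize = 32) is an assumed choice for this example.
package main

import (
	"crypto/sha256"
	"fmt"
	"strconv"
)

const HashSize = sha256.Size

// H stands in for the proposal's `func H(s string) [HashSize]byte`.
func H(s string) [HashSize]byte {
	return sha256.Sum256([]byte(s))
}

// HPrime resolves a collision by hashing s concatenated with the string
// value of n, matching the proposal's `func H'(s string, n int) [HashSize]byte`.
func HPrime(s string, n int) [HashSize]byte {
	return H(s + strconv.Itoa(n))
}

func main() {
	// Hypothetical resource name; successive n values yield distinct hashes.
	fmt.Printf("%x\n", HPrime("example.com/widgets", 0))
	fmt.Printf("%x\n", HPrime("example.com/widgets", 1))
}
```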

View File

@@ -207,7 +207,7 @@ but old ones still satisfy the schedule and are not re-run just because the temp
If you delete and replace a CronJob with one of the same name, it will:
- not use any old Status.Active, and not consider any existing running or terminated jobs from the previous
-CronJob (with a different UID) at all when determining coflicts, what needs to be started, etc.
+CronJob (with a different UID) at all when determining conflicts, what needs to be started, etc.
- If there is an existing Job with the same time-based hash in its name (see below), then
new instances of that job will not be able to be created. So, delete it if you want to re-run.
with the same name as conflicts.
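As a rough illustration of the "time-based hash in its name" idea mentioned in this hunk, here is a hedged Go sketch. The function `jobNameForSchedule`, the FNV-1a hash, and the `<cronjob-name>-<hash>` format are hypothetical choices for the example, not the exact scheme the proposal specifies.

```go
// Hypothetical sketch of deriving a deterministic Job name from a CronJob
// name and a scheduled time, so that a re-created CronJob colliding with a
// leftover Job of the same scheduled run is detectable by name.
package main

import (
	"fmt"
	"hash/fnv"
	"time"
)

func jobNameForSchedule(cronJobName string, scheduledTime time.Time) string {
	h := fnv.New32a()
	fmt.Fprint(h, scheduledTime.Unix()) // hash only the scheduled time
	return fmt.Sprintf("%s-%d", cronJobName, h.Sum32())
}

func main() {
	t := time.Date(2017, 10, 21, 21, 0, 0, 0, time.UTC)
	fmt.Println(jobNameForSchedule("nightly-report", t))
}
```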

View File

@@ -181,7 +181,7 @@ status, back-off (like a scheduler or replication controller), and try again lat
by a StatefulSet controller must have a set of labels that match the selector, support orphaning, and have a
controller back reference annotation identifying the owning StatefulSet by name and UID.
-When a StatefulSet is scaled down, the pod for the removed indentity should be deleted. It is less clear what the
+When a StatefulSet is scaled down, the pod for the removed identity should be deleted. It is less clear what the
controller should do to supporting resources. If every pod requires a PV, and a user accidentally scales
up to N=200 and then back down to N=3, leaving 197 PVs lying around may be undesirable (potential for
abuse). On the other hand, a cluster of 5 that is accidentally scaled down to 3 might irreparably destroy