diff --git a/contributors/design-proposals/api-machinery/thirdpartyresources.md b/contributors/design-proposals/api-machinery/thirdpartyresources.md
index 8f651d0d2..05dfff76e 100644
--- a/contributors/design-proposals/api-machinery/thirdpartyresources.md
+++ b/contributors/design-proposals/api-machinery/thirdpartyresources.md
@@ -13,7 +13,7 @@ prevent future challenges in upgrading.
 1. Ensure ThirdPartyResource APIs operate consistently with first party
 Kubernetes APIs.
 2. Enable ThirdPartyResources to specify how they will appear in API
-discovery to be consistent with other resources and avoid naming confilcts
+discovery to be consistent with other resources and avoid naming conflicts
 3. Move TPR into their own API group to allow the extensions group to be
 [removed](https://github.com/kubernetes/kubernetes/issues/43214)
 4. Support cluster scoped TPR resources
diff --git a/contributors/design-proposals/apps/controller_history.md b/contributors/design-proposals/apps/controller_history.md
index fbf89b702..5c650510f 100644
--- a/contributors/design-proposals/apps/controller_history.md
+++ b/contributors/design-proposals/apps/controller_history.md
@@ -427,7 +427,7 @@ its feasibility, we construct such a scheme here. However, this proposal does
 not mandate its use.
 Given a hash function with output size `HashSize` defined
-as `func H(s srtring) [HashSize] byte`, in order to resolve collisions we
+as `func H(s string) [HashSize] byte`, in order to resolve collisions we
 define a new function `func H'(s string, n int) [HashSize]byte` where `H'`
 returns the result of invoking `H` on the concatenation of `s` with the string
 value of `n`.
 We define a third function
diff --git a/contributors/design-proposals/apps/cronjob.md b/contributors/design-proposals/apps/cronjob.md
index 03ea6bb02..1ce926ead 100644
--- a/contributors/design-proposals/apps/cronjob.md
+++ b/contributors/design-proposals/apps/cronjob.md
@@ -207,7 +207,7 @@ but old ones still satisfy the schedule and are not re-run just because the temp
 If you delete and replace a CronJob with one of the same name, it will:
 - not use any old Status.Active, and not consider any existing running or terminated jobs from the previous
-  CronJob (with a different UID) at all when determining coflicts, what needs to be started, etc.
+  CronJob (with a different UID) at all when determining conflicts, what needs to be started, etc.
 - If there is an existing Job with the same time-based hash in its name (see below), then new instances of that
 job will not be able to be created. So, delete it if you want to re-run.
 with the same name as conflicts.
diff --git a/contributors/design-proposals/apps/stateful-apps.md b/contributors/design-proposals/apps/stateful-apps.md
index ee9f9cd5e..5f965b77d 100644
--- a/contributors/design-proposals/apps/stateful-apps.md
+++ b/contributors/design-proposals/apps/stateful-apps.md
@@ -181,7 +181,7 @@ status, back-off (like a scheduler or replication controller), and try again lat
 by a StatefulSet controller must have a set of labels that match the selector, support orphaning, and have a
 controller back reference annotation identifying the owning StatefulSet by name and UID.
-When a StatefulSet is scaled down, the pod for the removed indentity should be deleted. It is less clear what the
+When a StatefulSet is scaled down, the pod for the removed identity should be deleted. It is less clear what the
 controller should do to supporting resources. If every pod requires a PV, and a user accidentally scales up to
 N=200 and then back down to N=3, leaving 197 PVs lying around may be undesirable (potential for abuse).
On the other hand, a cluster of 5 that is accidentally scaled down to 3 might irreparably destroy
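The collision-resolution scheme touched by the controller_history.md hunk above can be sketched in Go. The proposal does not fix a concrete hash function, so SHA-256 and the names `H`/`HPrime` here are illustrative assumptions, not part of the design; only the shape `H'(s, n) = H(s + string(n))` comes from the text.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"strconv"
)

// HashSize is the proposal's output-size parameter; SHA-256 is an
// assumed choice, since the proposal leaves H unspecified.
const HashSize = sha256.Size

// H is the base hash function `func H(s string) [HashSize]byte`.
func H(s string) [HashSize]byte {
	return sha256.Sum256([]byte(s))
}

// HPrime is the proposal's H': it invokes H on the concatenation of s
// with the string value of n, so incrementing n on a collision yields
// a fresh hash without changing the serialized object s itself.
func HPrime(s string, n int) [HashSize]byte {
	return H(s + strconv.Itoa(n))
}

func main() {
	// Distinct values of n give (with overwhelming probability)
	// distinct hashes for the same input string.
	fmt.Printf("%x\n%x\n", HPrime("revision-spec", 0), HPrime("revision-spec", 1))
}
```

A controller using this scheme would persist `n` alongside the revision so the same `(s, n)` pair always reproduces the same name.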