diff --git a/contributors/design-proposals/cluster-lifecycle/cluster-deployment.md b/contributors/design-proposals/cluster-lifecycle/cluster-deployment.md
index a0402ebc2..46af0c5c9 100644
--- a/contributors/design-proposals/cluster-lifecycle/cluster-deployment.md
+++ b/contributors/design-proposals/cluster-lifecycle/cluster-deployment.md
@@ -47,7 +47,7 @@ ship with all of the requirements for the node specification by default.
 8. Create node machines
 9. Install docker on all machines

-**Exit critera**:
+**Exit criteria**:

 1. Can ```ssh``` to all machines and run a test docker image
 2. Can ```ssh``` to master and nodes and ping other machines
@@ -104,7 +104,7 @@ Each node can be deployed separately and the implementation should make it ~impo
 1. kubelet config file - we will read kubelet configuration file from disk instead of apiserver; it will be generated locally and copied to all nodes.

-**Exit critera**:
+**Exit criteria**:

 1. All nodes are registered, but not ready due to lack of kubernetes networking.
diff --git a/contributors/design-proposals/network/nodeport-ip-range.md b/contributors/design-proposals/network/nodeport-ip-range.md
index c9a763929..908222dec 100644
--- a/contributors/design-proposals/network/nodeport-ip-range.md
+++ b/contributors/design-proposals/network/nodeport-ip-range.md
@@ -58,7 +58,7 @@ However, if IP address of `eth0` changes from `172.10.1.2` to `192.168.3.4`
 When refer to DHCP user case, network administrator usually reserves a RANGE of IP addresses for the DHCP server. So, IP address change will always fall in an IP range in DHCP scenario. That's to say an IP address of a interface will not change from `172.10.1.2` to `192.168.3.4` in our example.

-## Kube-proxy implementation suport
+## Kube-proxy implementation support

 The implementation is simple.
@@ -74,7 +74,7 @@ Same as iptables.
 ### ipvs

-Create IPVS virutal services one by one according to provided node IPs, which is almost same as current behaviour(fetch all IPs from host).
+Create IPVS virtual services one by one according to provided node IPs, which is almost same as current behaviour(fetch all IPs from host).

 ### Window userspace
diff --git a/contributors/design-proposals/node/dynamic-kubelet-configuration.md b/contributors/design-proposals/node/dynamic-kubelet-configuration.md
index 3dc8870f3..fdbce1b25 100644
--- a/contributors/design-proposals/node/dynamic-kubelet-configuration.md
+++ b/contributors/design-proposals/node/dynamic-kubelet-configuration.md
@@ -180,7 +180,7 @@ Regarding (1), the Kubelet will check the "bad configs" file on startup. It will
 Regarding (3), the Kubelet should report via the `Node`'s status:
 - That it is using LKG.
-- The configuration LKG referrs to.
+- The configuration LKG refers to.
 - The supposedly bad configuration that the Kubelet decided to avoid.
 - The reason it thinks the configuration is bad.
diff --git a/contributors/design-proposals/node/optional-configmap.md b/contributors/design-proposals/node/optional-configmap.md
index 715ac6a7b..1b11bb124 100644
--- a/contributors/design-proposals/node/optional-configmap.md
+++ b/contributors/design-proposals/node/optional-configmap.md
@@ -28,12 +28,12 @@ files if the ConfigMap exists.
 A container can specify an entire ConfigMap to be populated as environment variables via `EnvFrom`. When required, the container fails to start if the ConfigMap does not exist. If the ConfigMap is optional, the container will
-skip the non-existant ConfigMap and proceed as normal.
+skip the non-existent ConfigMap and proceed as normal.

 A container may also specify a single environment variable to retrieve its value from a ConfigMap via `Env`. If the key does not exist in the ConfigMap during container start, the container will fail to start. If however, the
-ConfigMap is marked optional, during container start, a non-existant ConfigMap
+ConfigMap is marked optional, during container start, a non-existent ConfigMap
 or a missing key in the ConfigMap will not prevent the container from starting. Any previous value for the given key will be used.
diff --git a/contributors/design-proposals/node/sysctl.md b/contributors/design-proposals/node/sysctl.md
index 4d6f505fe..7e397960f 100644
--- a/contributors/design-proposals/node/sysctl.md
+++ b/contributors/design-proposals/node/sysctl.md
@@ -313,7 +313,7 @@ Issues:
 uses [Resizable virtual memory filesystem](https://github.com/torvalds/linux/blob/master/mm/shmem.c)
 * [x] hence **safe to customize**
 * [x] **no application influence** with high values
- * **defaults to** [unlimited pages, unlimited size, 4096 segments on today's kernels](https://github.com/torvalds/linux/blob/0e06f5c0deeef0332a5da2ecb8f1fcf3e024d958/include/uapi/linux/shm.h#L20). This makes **customization practically unneccessary**, at least for the segment sizes. IBM's DB2 suggests `256*GB of RAM` for `kernel.shmmni` (compare http://www.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0057140.html), exceeding the kernel defaults for machines with >16GB of RAM.
+ * **defaults to** [unlimited pages, unlimited size, 4096 segments on today's kernels](https://github.com/torvalds/linux/blob/0e06f5c0deeef0332a5da2ecb8f1fcf3e024d958/include/uapi/linux/shm.h#L20). This makes **customization practically unnecessary**, at least for the segment sizes. IBM's DB2 suggests `256*GB of RAM` for `kernel.shmmni` (compare http://www.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0057140.html), exceeding the kernel defaults for machines with >16GB of RAM.
 - `kernel.shm_rmid_forced`: enforce removal of shared memory segments on process shutdown
 * [x] **namespaced** in ipc ns
 - `kernel.msgmax`, `kernel.msgmnb`, `kernel.msgmni`: configure System V messages
@@ -634,7 +634,7 @@ Alternative 1 or 2 has to be chosen for the external API once the feature is pro
 Finally, the container runtime will interpret `pod.spec.securityPolicy.sysctls`, e.g. in the case of Docker the `DockerManager` will apply the given sysctls to the infra container in `createPodInfraContainer`.

-In a later implementation of a container runtime interface (compare https://github.com/kubernetes/kubernetes/pull/25899), sysctls will be part of `LinuxPodSandboxConfig` (compare https://github.com/kubernetes/kubernetes/pull/25899#discussion_r64867763) and to be applied by the runtime implementaiton to the `PodSandbox` by the `PodSandboxManager` implementation.
+In a later implementation of a container runtime interface (compare https://github.com/kubernetes/kubernetes/pull/25899), sysctls will be part of `LinuxPodSandboxConfig` (compare https://github.com/kubernetes/kubernetes/pull/25899#discussion_r64867763) and to be applied by the runtime implementation to the `PodSandbox` by the `PodSandboxManager` implementation.

 ## Examples
diff --git a/contributors/design-proposals/scheduling/scheduler-equivalence-class.md b/contributors/design-proposals/scheduling/scheduler-equivalence-class.md
index c9ef5fbdd..fdc2e8d34 100644
--- a/contributors/design-proposals/scheduling/scheduler-equivalence-class.md
+++ b/contributors/design-proposals/scheduling/scheduler-equivalence-class.md
@@ -144,7 +144,7 @@ func (ec *EquivalenceCache) PredicateWithECache(
 }
 ```

-One thing to note is, if the `hostPredicate` is not present in the logic above, it will be considered as `invalid`. That means although this pod has equivalence class, it does not have cached predicate result yet, or the cached data is not valid. It needs to go through normal predicate process and write the result into equivalence clas cache.
+One thing to note is, if the `hostPredicate` is not present in the logic above, it will be considered as `invalid`. That means although this pod has equivalence class, it does not have cached predicate result yet, or the cached data is not valid. It needs to go through normal predicate process and write the result into equivalence class cache.

 ### 2.3 What if no equivalence class is found for pod?
diff --git a/events/2017/12-contributor-summit/breaking-up-the-monolith.md b/events/2017/12-contributor-summit/breaking-up-the-monolith.md
index 3616aecab..baf957273 100644
--- a/events/2017/12-contributor-summit/breaking-up-the-monolith.md
+++ b/events/2017/12-contributor-summit/breaking-up-the-monolith.md
@@ -45,10 +45,10 @@ Assumption: "big tangled ball of pasta" is hard to contribute to
 - what about storage providers, if we break them up there's a clear interface for storage providers
 - second thing: here are api/components that are required, required but swappable, optional... how can you do that swapping around of stuff in the monorepo
 - robertbailey: when I proposed this session I thought we had all agreed that breaking the repo was the way to go, but maybe we don't have that conensus could we try and figure out who is the decision maker / set of decision makers there and try to come to consensus
-- dchen1107: I heard a lot of benefits of the split. So there's a huge cost for release management, as the release manager, I don't know what's in the other repositiory. Clear APIs and interfaces could be built without needing separate repositories. Gave example of Docker and CRI, which got API without moving repos.
+- dchen1107: I heard a lot of benefits of the split. So there's a huge cost for release management, as the release manager, I don't know what's in the other repository. Clear APIs and interfaces could be built without needing separate repositories. Gave example of Docker and CRI, which got API without moving repos.
 - Example of Cloud Foundry, build process which took two weeks. How do we ensure that we're delivering security updates quickly?
 - thockin: We can put many things in APIs and cloud providers are a good example of that. Splitting stuff out into multiple repos comes down to the question of: are we a piece of software or a distribution? You'll have to come to my talk to see the rest.
-- spiffxp: Increasing our dependency on integration tools will add overhead and process we don't have now. But I understand that most OSS people expect multiple repos and it's easier for them. Github notifications are awful, having multiple repos would make this better. The bots have improved the automation situation, but not really triaging notifications. How many issues do we have that are based on Github. Maybe we should improve the routing of GH notificaitons instead?
+- spiffxp: Increasing our dependency on integration tools will add overhead and process we don't have now. But I understand that most OSS people expect multiple repos and it's easier for them. Github notifications are awful, having multiple repos would make this better. The bots have improved the automation situation, but not really triaging notifications. How many issues do we have that are based on Github. Maybe we should improve the routing of GH notifications instead?
 - solly: if we split, we really need to make it easier for tiny repos to plug into our automation and other tools. I shouldn't have to figure out who to ask about getting plugged in, we should have a doc. We need to make sure it's not possible for repos to fall through the cracks like Heapster, which I work on.
 - bgrant: We're already in the land of 90 repos. We don't need to debate splitting, we're alread split. We have incubator, and kubernates-client. I think client has helped a lot, we have 7 languages and that'll grow.
 - bgrant: the velocity of things in the monorepo are static
diff --git a/events/2017/12-contributor-summit/enabling-kubernetes-ecosystem.md b/events/2017/12-contributor-summit/enabling-kubernetes-ecosystem.md
index 50be4b663..23ef10826 100644
--- a/events/2017/12-contributor-summit/enabling-kubernetes-ecosystem.md
+++ b/events/2017/12-contributor-summit/enabling-kubernetes-ecosystem.md
@@ -72,7 +72,7 @@ Notes by @directxman12
 * Need to look at human-consumable media, not necessarily machine-consumable

-* Question: do we currate
+* Question: do we curate

 * Do we require "discoverable" things to be maintained/use a CLA/etc?
diff --git a/events/2017/12-contributor-summit/onboarding-new-developers-through-better-docs.md b/events/2017/12-contributor-summit/onboarding-new-developers-through-better-docs.md
index ddb6d6886..7afa5b16a 100644
--- a/events/2017/12-contributor-summit/onboarding-new-developers-through-better-docs.md
+++ b/events/2017/12-contributor-summit/onboarding-new-developers-through-better-docs.md
@@ -148,7 +148,7 @@ Note: the focus is on documentation, so we won’t be able to act on suggestions
 * Templates/guidelines mentioned above may be helpful

- * Talk to SIG docs (SIG docs wants to try and send delagates to SIG, but for now, come to meetings)
+ * Talk to SIG docs (SIG docs wants to try and send delegates to SIG, but for now, come to meetings)

 * Question: should there even *be* a split between user and contributor docs?
diff --git a/events/2017/12-contributor-summit/role-of-sig-lead.md b/events/2017/12-contributor-summit/role-of-sig-lead.md
index 54e3d26f4..6b68e9c34 100644
--- a/events/2017/12-contributor-summit/role-of-sig-lead.md
+++ b/events/2017/12-contributor-summit/role-of-sig-lead.md
@@ -8,7 +8,7 @@ Joe wants to redefine the session
 Power and role of sig leads is defined in how we pick these people. Voting? Membership? What do we do for governance in general

-Paul: Roles that are valuable that are not sig lead. Svc cat, moderator of discussions and took notes. Secratary of committee.
+Paul: Roles that are valuable that are not sig lead. Svc cat, moderator of discussions and took notes. Secretary of committee.

 Back to joe; what does the sig think. Each leader doesn’t know. Who shows up to a meeting isn’t helpful as not everyone can attend. What does the lead thing? Allows too much personal power.
@@ -31,7 +31,7 @@ Joe -need someone who cares

 Eric chang - every sig needs a person doing X,Y, Z. docs, tests, etc. fundamental goals. Sig lead has too much to do. Formalize roles and fill them with people, especially different people for each role. Large sigs have more people to allocate to alt roles.

-Jessie - Can’t have the person who facilitates be unbiased techincally. Best solution vs alternative monetary gain.
+Jessie - Can’t have the person who facilitates be unbiased technically. Best solution vs alternative monetary gain.

 Joe - people as sig leads in name only to game the system as being important
diff --git a/keps/README.md b/keps/README.md
index f1fbf39f5..c514dc178 100644
--- a/keps/README.md
+++ b/keps/README.md
@@ -1,6 +1,6 @@
 # Kubernetes Enhancement Proposals (KEPs)

-A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordiante on new efforts for the Kubernetes project.
+A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project.

 You can read the full details of the project in [KEP-1](0001-kubernetes-enhancement-proposal-process.md).
 This process is still in a _beta_ state and is opt-in for those that want to provide feedback for the process.
diff --git a/sig-scalability/blogs/scalability-regressions-case-studies.md b/sig-scalability/blogs/scalability-regressions-case-studies.md
index 729c816dc..686a2bf84 100644
--- a/sig-scalability/blogs/scalability-regressions-case-studies.md
+++ b/sig-scalability/blogs/scalability-regressions-case-studies.md
@@ -24,7 +24,7 @@ This document is a compilation of some interesting scalability/performance regre
 | [#51903](https://github.com/kubernetes/kubernetes/issues/51903) | Few nodes failing to start in kubemark due to reduced PIDs limit for docker in newer COS image | When COS m60 image was introduced, we started seeing that some of the kubemark hollow-node pods were failing to start due to docker on the host-node crossing the PID limit. This is a risky regression in terms of the damage it could've caused if rolled out to production, and our scalability tests caught it. Besides the low PID threshold issue, it helped also catch another issue on containerd-shim starting too many threads. |