Merge pull request #1859 from cheyang/master

fix typo around the project
k8s-ci-robot 2018-02-27 18:44:46 -08:00 committed by GitHub
commit 17736667f1
12 changed files with 18 additions and 18 deletions


@@ -47,7 +47,7 @@ ship with all of the requirements for the node specification by default.
8. Create node machines
9. Install docker on all machines
-**Exit critera**:
+**Exit criteria**:
1. Can ```ssh``` to all machines and run a test docker image
2. Can ```ssh``` to master and nodes and ping other machines
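
Purely as an illustration of checking these two exit criteria by hand, a throwaway check along the following lines could be used; the machine names and the `hello-world` test image are placeholders, not part of the proposal:

```go
// Illustrative check of the two exit criteria; machine names and the
// hello-world test image are placeholders, not part of the proposal.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	machines := []string{"master", "node-1", "node-2"} // placeholder inventory

	for _, m := range machines {
		// Criterion 1: ssh to the machine and run a test docker image.
		if err := exec.Command("ssh", m, "docker", "run", "--rm", "hello-world").Run(); err != nil {
			fmt.Printf("%s: cannot run test image over ssh: %v\n", m, err)
			continue
		}
		// Criterion 2: from this machine, ping the other machines.
		for _, peer := range machines {
			if peer == m {
				continue
			}
			if err := exec.Command("ssh", m, "ping", "-c", "1", peer).Run(); err != nil {
				fmt.Printf("%s: cannot ping %s: %v\n", m, peer, err)
			}
		}
	}
}
```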
@@ -104,7 +104,7 @@ Each node can be deployed separately and the implementation should make it ~impo
1. kubelet config file - we will read kubelet configuration file from disk instead of apiserver; it will
be generated locally and copied to all nodes.
-**Exit critera**:
+**Exit criteria**:
1. All nodes are registered, but not ready due to lack of kubernetes networking.


@@ -58,7 +58,7 @@ However, if IP address of `eth0` changes from `172.10.1.2` to `192.168.3.4`
In the DHCP use case, the network administrator usually reserves a RANGE of IP addresses for the DHCP server, so an IP address change will always fall within that reserved range. That is to say, the IP address of an interface will not change from `172.10.1.2` to `192.168.3.4` in our example.
-## Kube-proxy implementation suport
+## Kube-proxy implementation support
The implementation is simple.
@@ -74,7 +74,7 @@ Same as iptables.
### ipvs
-Create IPVS virutal services one by one according to provided node IPs, which is almost same as current behaviour(fetch all IPs from host).
+Create IPVS virtual services one by one according to provided node IPs, which is almost same as current behaviour(fetch all IPs from host).
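
As a rough sketch of that per-node-IP loop (the node IPs, port, and scheduler below are placeholders, and kube-proxy itself drives IPVS through a netlink-based library rather than shelling out to `ipvsadm`), the idea is simply:

```go
// Toy version of "create one IPVS virtual service per provided node IP".
// The IPs, port, and scheduler are placeholders, and kube-proxy itself uses a
// netlink-based IPVS library rather than shelling out to ipvsadm.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	nodeIPs := []string{"172.10.1.2", "172.10.1.3"} // placeholder node IPs
	port := 30080                                   // placeholder service port

	for _, ip := range nodeIPs {
		vs := fmt.Sprintf("%s:%d", ip, port)
		// ipvsadm -A -t <ip>:<port> -s rr : add a TCP virtual service with round-robin scheduling.
		if out, err := exec.Command("ipvsadm", "-A", "-t", vs, "-s", "rr").CombinedOutput(); err != nil {
			fmt.Printf("failed to add virtual service %s: %v (%s)\n", vs, err, out)
		}
	}
}
```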
### Windows userspace


@@ -180,7 +180,7 @@ Regarding (1), the Kubelet will check the "bad configs" file on startup. It will
Regarding (3), the Kubelet should report via the `Node`'s status:
- That it is using LKG.
-- The configuration LKG referrs to.
+- The configuration LKG refers to.
- The supposedly bad configuration that the Kubelet decided to avoid.
- The reason it thinks the configuration is bad.
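
For illustration only, the report could be modelled as a small structure like the one below; the type and field names are invented for this sketch and are not part of the Kubelet or Node API:

```go
// Hypothetical shape for the report above; the type and field names are
// invented for illustration and are not the actual Node status API.
package kubeletconfig

type lkgReport struct {
	UsingLKG  bool   // the Kubelet has fallen back to its last-known-good config
	LKGConfig string // which configuration LKG refers to
	BadConfig string // the supposedly bad configuration the Kubelet decided to avoid
	Reason    string // why the Kubelet thinks that configuration is bad
}
```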


@@ -28,12 +28,12 @@ files if the ConfigMap exists.
A container can specify an entire ConfigMap to be populated as environment
variables via `EnvFrom`. When required, the container fails to start if the
ConfigMap does not exist. If the ConfigMap is optional, the container will
-skip the non-existant ConfigMap and proceed as normal.
+skip the non-existent ConfigMap and proceed as normal.
A container may also specify a single environment variable to retrieve its
value from a ConfigMap via `Env`. If the key does not exist in the ConfigMap
during container start, the container will fail to start. If however, the
-ConfigMap is marked optional, during container start, a non-existant ConfigMap
+ConfigMap is marked optional, during container start, a non-existent ConfigMap
or a missing key in the ConfigMap will not prevent the container from
starting. Any previous value for the given key will be used.
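
For concreteness, here is a hedged sketch of both cases written against the present-day `k8s.io/api/core/v1` Go types; the ConfigMap name, key, and image are placeholders:

```go
// Sketch of a container that treats a whole ConfigMap (EnvFrom) and a single
// key (Env) as optional, so a missing ConfigMap or key does not block startup.
package envexample

import corev1 "k8s.io/api/core/v1"

func optionalConfigMapContainer() corev1.Container {
	optional := true // mark both references optional
	return corev1.Container{
		Name:  "app",
		Image: "example/app", // placeholder image
		// Entire ConfigMap exposed as environment variables.
		EnvFrom: []corev1.EnvFromSource{{
			ConfigMapRef: &corev1.ConfigMapEnvSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "app-config"},
				Optional:             &optional,
			},
		}},
		// A single variable sourced from one key of the ConfigMap.
		Env: []corev1.EnvVar{{
			Name: "LOG_LEVEL",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "app-config"},
					Key:                  "log-level",
					Optional:             &optional,
				},
			},
		}},
	}
}
```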


@@ -313,7 +313,7 @@ Issues:
uses [Resizable virtual memory filesystem](https://github.com/torvalds/linux/blob/master/mm/shmem.c)
* [x] hence **safe to customize**
* [x] **no application influence** with high values
-* **defaults to** [unlimited pages, unlimited size, 4096 segments on today's kernels](https://github.com/torvalds/linux/blob/0e06f5c0deeef0332a5da2ecb8f1fcf3e024d958/include/uapi/linux/shm.h#L20). This makes **customization practically unneccessary**, at least for the segment sizes. IBM's DB2 suggests `256*GB of RAM` for `kernel.shmmni` (compare http://www.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0057140.html), exceeding the kernel defaults for machines with >16GB of RAM.
+* **defaults to** [unlimited pages, unlimited size, 4096 segments on today's kernels](https://github.com/torvalds/linux/blob/0e06f5c0deeef0332a5da2ecb8f1fcf3e024d958/include/uapi/linux/shm.h#L20). This makes **customization practically unnecessary**, at least for the segment sizes. IBM's DB2 suggests `256*GB of RAM` for `kernel.shmmni` (compare http://www.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0057140.html), exceeding the kernel defaults for machines with >16GB of RAM.
- `kernel.shm_rmid_forced`: enforce removal of shared memory segments on process shutdown
* [x] **namespaced** in ipc ns
- `kernel.msgmax`, `kernel.msgmnb`, `kernel.msgmni`: configure System V messages
@@ -634,7 +634,7 @@ Alternative 1 or 2 has to be chosen for the external API once the feature is pro
Finally, the container runtime will interpret `pod.spec.securityPolicy.sysctls`,
e.g. in the case of Docker the `DockerManager` will apply the given sysctls to the infra container in `createPodInfraContainer`.
-In a later implementation of a container runtime interface (compare https://github.com/kubernetes/kubernetes/pull/25899), sysctls will be part of `LinuxPodSandboxConfig` (compare https://github.com/kubernetes/kubernetes/pull/25899#discussion_r64867763) and to be applied by the runtime implementaiton to the `PodSandbox` by the `PodSandboxManager` implementation.
+In a later implementation of a container runtime interface (compare https://github.com/kubernetes/kubernetes/pull/25899), sysctls will be part of `LinuxPodSandboxConfig` (compare https://github.com/kubernetes/kubernetes/pull/25899#discussion_r64867763) and to be applied by the runtime implementation to the `PodSandbox` by the `PodSandboxManager` implementation.
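
A simplified sketch of that hand-off, with the shape of the sandbox config assumed for illustration rather than taken from the final CRI definition:

```go
// Simplified sketch only: the real field layout lives in the CRI definition
// referenced above; here a sandbox config just carries a map of sysctls.
package sandbox

type linuxPodSandboxConfig struct {
	// Sysctls to apply to the PodSandbox, e.g.
	// {"kernel.shm_rmid_forced": "1", "kernel.msgmax": "65536"}.
	Sysctls map[string]string
}

// applyPodSysctls copies the pod-level sysctls into the sandbox config that
// the runtime (e.g. the PodSandboxManager implementation) will act on.
func applyPodSysctls(cfg *linuxPodSandboxConfig, podSysctls map[string]string) {
	if cfg.Sysctls == nil {
		cfg.Sysctls = make(map[string]string, len(podSysctls))
	}
	for name, value := range podSysctls {
		cfg.Sysctls[name] = value
	}
}
```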
## Examples


@@ -144,7 +144,7 @@ func (ec *EquivalenceCache) PredicateWithECache(
}
```
-One thing to note is, if the `hostPredicate` is not present in the logic above, it will be considered as `invalid`. That means although this pod has equivalence class, it does not have cached predicate result yet, or the cached data is not valid. It needs to go through normal predicate process and write the result into equivalence clas cache.
+One thing to note is, if the `hostPredicate` is not present in the logic above, it will be considered as `invalid`. That means although this pod has equivalence class, it does not have cached predicate result yet, or the cached data is not valid. It needs to go through normal predicate process and write the result into equivalence class cache.
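
The cache-miss path described above can be illustrated with a self-contained toy model; the types and names below are simplified stand-ins for the real scheduler code:

```go
// Toy model of the fallback path: on a missing/invalid cached result, run the
// normal predicate and write the outcome into the equivalence class cache.
package main

import "fmt"

type predicateResult struct {
	fit     bool
	reasons []string
}

// cacheKey identifies a cached result per host, per predicate, per equivalence class.
type cacheKey struct {
	node      string
	predicate string
	equivHash uint64
}

type equivCache map[cacheKey]predicateResult

func main() {
	cache := equivCache{}
	key := cacheKey{node: "node-1", predicate: "PodFitsResources", equivHash: 42}

	res, ok := cache[key]
	if !ok {
		// No usable cached result (treated as invalid): evaluate the predicate normally...
		res = predicateResult{fit: true} // stand-in for the real predicate evaluation
		// ...and record it so later pods in the same equivalence class can reuse it.
		cache[key] = res
	}
	fmt.Println("fit:", res.fit, "reasons:", res.reasons)
}
```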
### 2.3 What if no equivalence class is found for pod?


@@ -45,10 +45,10 @@ Assumption: "big tangled ball of pasta" is hard to contribute to
- what about storage providers, if we break them up there's a clear interface for storage providers
- second thing: here are api/components that are required, required but swappable, optional... how can you do that swapping around of stuff in the monorepo
- robertbailey: when I proposed this session I thought we had all agreed that breaking the repo was the way to go, but maybe we don't have that consensus. Could we try and figure out who is the decision maker / set of decision makers there and try to come to consensus?
-- dchen1107: I heard a lot of benefits of the split. So there's a huge cost for release management, as the release manager, I don't know what's in the other repositiory. Clear APIs and interfaces could be built without needing separate repositories. Gave example of Docker and CRI, which got API without moving repos.
+- dchen1107: I heard a lot of benefits of the split. So there's a huge cost for release management, as the release manager, I don't know what's in the other repository. Clear APIs and interfaces could be built without needing separate repositories. Gave example of Docker and CRI, which got API without moving repos.
- Example of Cloud Foundry, build process which took two weeks. How do we ensure that we're delivering security updates quickly?
- thockin: We can put many things in APIs and cloud providers are a good example of that. Splitting stuff out into multiple repos comes down to the question of: are we a piece of software or a distribution? You'll have to come to my talk to see the rest.
-- spiffxp: Increasing our dependency on integration tools will add overhead and process we don't have now. But I understand that most OSS people expect multiple repos and it's easier for them. Github notifications are awful, having multiple repos would make this better. The bots have improved the automation situation, but not really triaging notifications. How many issues do we have that are based on Github. Maybe we should improve the routing of GH notificaitons instead?
+- spiffxp: Increasing our dependency on integration tools will add overhead and process we don't have now. But I understand that most OSS people expect multiple repos and it's easier for them. Github notifications are awful, having multiple repos would make this better. The bots have improved the automation situation, but not really triaging notifications. How many issues do we have that are based on Github. Maybe we should improve the routing of GH notifications instead?
- solly: if we split, we really need to make it easier for tiny repos to plug into our automation and other tools. I shouldn't have to figure out who to ask about getting plugged in, we should have a doc. We need to make sure it's not possible for repos to fall through the cracks like Heapster, which I work on.
- bgrant: We're already in the land of 90 repos. We don't need to debate splitting, we're already split. We have incubator, and kubernetes-client. I think client has helped a lot, we have 7 languages and that'll grow.
- bgrant: the velocity of things in the monorepo is static


@@ -72,7 +72,7 @@ Notes by @directxman12
* Need to look at human-consumable media, not necessarily machine-consumable
-* Question: do we currate
+* Question: do we curate
* Do we require "discoverable" things to be maintained/use a CLA/etc?


@@ -148,7 +148,7 @@ Note: the focus is on documentation, so we won't be able to act on suggestions
* Templates/guidelines mentioned above may be helpful
-* Talk to SIG docs (SIG docs wants to try and send delagates to SIG, but for now, come to meetings)
+* Talk to SIG docs (SIG docs wants to try and send delegates to SIG, but for now, come to meetings)
* Question: should there even *be* a split between user and contributor docs?


@@ -8,7 +8,7 @@ Joe wants to redefine the session
Power and role of sig leads are defined in how we pick these people. Voting? Membership? What do we do for governance in general?
-Paul: Roles that are valuable that are not sig lead. Svc cat, moderator of discussions and took notes. Secratary of committee.
+Paul: Roles that are valuable that are not sig lead. Svc cat, moderator of discussions and took notes. Secretary of committee.
Back to joe; what does the sig think? Each leader doesn't know. Who shows up to a meeting isn't helpful as not everyone can attend. What does the lead think? Allows too much personal power. Decision maker or tech lead?
@@ -31,7 +31,7 @@ Joe -need someone who cares
Eric chang - every sig needs a person doing X, Y, Z: docs, tests, etc. Fundamental goals. Sig lead has too much to do. Formalize roles and fill them with people, especially different people for each role.
Large sigs have more people to allocate to alt roles.
-Jessie - Cant have the person who facilitates be unbiased techincally. Best solution vs alternative monetary gain.
+Jessie - Cant have the person who facilitates be unbiased technically. Best solution vs alternative monetary gain.
Joe - people as sig leads in name only to game the system as being important


@@ -1,6 +1,6 @@
# Kubernetes Enhancement Proposals (KEPs)
-A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordiante on new efforts for the Kubernetes project.
+A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project.
You can read the full details of the project in [KEP-1](0001-kubernetes-enhancement-proposal-process.md).
This process is still in a _beta_ state and is opt-in for those that want to provide feedback for the process.


@@ -24,7 +24,7 @@ This document is a compilation of some interesting scalability/performance regre
| [#51903](https://github.com/kubernetes/kubernetes/issues/51903) | Few nodes failing to start in kubemark due to reduced PIDs limit for docker in newer COS image | When COS m60 image was introduced, we started seeing that some of the kubemark hollow-node pods were failing to start due to docker on the host-node crossing the PID limit. This is a risky regression in terms of the damage it could've caused if rolled out to production, and our scalability tests caught it. Besides the low PID threshold issue, it helped also catch another issue on containerd-shim starting too many threads. | <ul><li>Kubelet</li><li>Docker</li><li>Containerd-shim</li></ul> | sig-node | - | 500
| [#51899 (part)](https://github.com/kubernetes/kubernetes/issues/51899#issuecomment-331924016) | "PATCH node-status" calls seeing high latency due to blocking on audit-logging | Those calls are made by kubelets once every X seconds - which adds up to be quite some qps for large clusters. Part of handling those calls is audit-logging them. When a change moving the default audit-log format to JSON was made, a performance issue with the design was exposed. The update handler for those calls was doing the audit-writing synchronously (instead of buffering + asynchronous writing), which slowed down those calls by an order of magnitude. | <ul><li>Audit-logging (feature)</li><li>Apiserver</li></ul> | sig-auth <br> sig-instrumentation <br> sig-api-machinery | 2000 | -
| [#51899 (part)](https://github.com/kubernetes/kubernetes/issues/51899) | "DELETE pods" API call latencies shot up on large cluster tests due to kubelet thundering herd | A change to kubelet pod deletion resulted in delete pod api calls from kubelets being concentrated immediately after container garbage collection. When performing deletion of large numbers (O(10k)) of pods across large numbers (O(1k)) of nodes, the resulting concentrated delete calls from the kubelets cause increased latency of "DELETE pods" API calls (above our target SLO of 1s). | <ul><li>Container GC (feature)</li><li>Kubelet</li></ul> | sig-node | 2000 | -
-| [#51099](https://github.com/kubernetes/kubernetes/issues/51099) | gRPC update causing failure of API calls with large responses | When gRPC vendor library was updated to v1.5.1, the default MTU for response size b/w apiserver <-> etcd changed to 4MB. This could only be caught by scalability tests, as our regular tests run at a much smaller scale - so they don't actually encounter such large reponse sizes. | <ul><li>gRPC framework (feature)</li><li>Etcd</li><li>Apiserver</li></ul> | sig-api-machinery | 100 | 100
+| [#51099](https://github.com/kubernetes/kubernetes/issues/51099) | gRPC update causing failure of API calls with large responses | When gRPC vendor library was updated to v1.5.1, the default MTU for response size b/w apiserver <-> etcd changed to 4MB. This could only be caught by scalability tests, as our regular tests run at a much smaller scale - so they don't actually encounter such large response sizes. | <ul><li>gRPC framework (feature)</li><li>Etcd</li><li>Apiserver</li></ul> | sig-api-machinery | 100 | 100
| [#50854](https://github.com/kubernetes/kubernetes/issues/50854) | Route-controller timing out while listing routes from cloud-provider | Route-controller was failing to list routes from the cloud-provider API and in turn failed to create routes for the nodes. The reason was that the project in which the cluster was being created, started to have another huge cluster running there (with O(5k) routes) which was interfering with the list routes call for this cluster, due to cloud-provider side issues. | <ul><li>Controller-manager (route-controller)</li><li>Cloud-provider API (GCE)</li></ul> | sig-network <br> sig-gcp | - | 5000 (running besides a real 5000 cluster)
| [#50366](https://github.com/kubernetes/kubernetes/issues/50366) | Failing to fit some pods on cluster due to accidentally increased fluentd resource request | Some change around setting fluentd resource requests accidentally doubled its CPU request. This was caught by our kubemark scalability test where we tightly fit our hollow-node pods onto a small set of nodes. With the fluentd increase, some of those pods couldn't be scheduled due to CPU shortage and we caught it. This bug was risky for production, as it could've preempted some of the users' pods for fluentd (a critical pod). | <ul><li>Resource requests (feature)</li><li>Fluentd</li></ul> | sig-instrumentation | - | 500
| [#48700](https://github.com/kubernetes/kubernetes/issues/48700) | Apiserver panic while logging a request in TooManyRequests handler | A change in the ordering of apiserver request handlers (where one of them is the TooManyRequests handler) caused a panic while instrumenting the request. Though this is not a scalability regression per se, this is a scenario which was exposed only by our scale tests where we actually see 429s (TooManyRequests) due to the scale at which we run the clusters (unlike normal scale tests). | <ul><li>Apiserver</li></ul> | sig-api-machinery | 100 | 500