a lot of typos

This commit is contained in:
Xingcai Zhang 2017-11-10 22:03:13 +08:00
parent 3907591a7d
commit 20d2f6f549
4 changed files with 5 additions and 5 deletions


@@ -32,7 +32,7 @@ how careful we need to be.
### Huge number of handshakes slows down API server
It was a long standing issue for performance and is/was an important bottleneck for scalability (https://github.com/kubernetes/kubernetes/issues/13671). The bug directly
-causing this problem was incorrect (from the golangs standpoint) handling of TCP connections. Secondary issue was that elliptic curve encryption (only one available in go 1.4)
+causing this problem was incorrect (from the golang's standpoint) handling of TCP connections. Secondary issue was that elliptic curve encryption (only one available in go 1.4)
is unbelievably slow.
## Proposed metrics/statistics to gather/compute to avoid problems
@@ -42,7 +42,7 @@ is unbelievably slow.
Basic ideas:
- number of Pods/ReplicationControllers/Services in the cluster
- number of running replicas of master components (if they are replicated)
-- current elected master of ectd cluster (if running distributed version)
+- current elected master of etcd cluster (if running distributed version)
- number of master component restarts
- number of lost Nodes


@@ -162,7 +162,7 @@ For the 1.4 release, this feature will be implemented for the GCE cloud provider
- Node: On the node, we expect to see the real source IP of the client. Destination IP will be the Service Virtual External IP.
-- Pod: For processes running inside the Pod network namepsace, the source IP will be the real client source IP. The destination address will the be Pod IP.
+- Pod: For processes running inside the Pod network namespace, the source IP will be the real client source IP. The destination address will the be Pod IP.
#### GCE Expected Packet Destination IP (HealthCheck path)


@@ -199,7 +199,7 @@ clients attaching as well.
There are ad-hoc solutions/discussions that addresses one or two of the
requirements, but no comprehensive solution for CRI specifically has been
-proposed so far (with the excpetion of @tmrtfs's proposal
+proposed so far (with the exception of @tmrtfs's proposal
[#33111](https://github.com/kubernetes/kubernetes/pull/33111), which has a much
wider scope). It has come up in discussions that kubelet can delegate all the
logging management to the runtime to allow maximum flexibility. However, it is


@@ -74,7 +74,7 @@ In addition, the rkt cli has historically been the primary interface to the rkt
The initial integration will execute the rkt binary directly for app creation/start/stop/removal, as well as image pulling/removal.
-The creation of pod sanbox is also done via rkt command line, but it will run under `systemd-run` so it's monitored by the init process.
+The creation of pod sandbox is also done via rkt command line, but it will run under `systemd-run` so it's monitored by the init process.
In the future, some of these decisions are expected to be changed such that rkt is vendored as a library dependency for all operations, and other init systems will be supported as well.