From 20d2f6f5498a5668bae2aea9dcaf4875b9c06ccb Mon Sep 17 00:00:00 2001
From: Xingcai Zhang
Date: Fri, 10 Nov 2017 22:03:13 +0800
Subject: [PATCH] Fix a lot of typos

---
 .../instrumentation/performance-related-monitoring.md     | 4 ++--
 .../network/external-lb-source-ip-preservation.md         | 2 +-
 contributors/design-proposals/node/kubelet-cri-logging.md | 2 +-
 contributors/design-proposals/node/kubelet-rkt-runtime.md | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/contributors/design-proposals/instrumentation/performance-related-monitoring.md b/contributors/design-proposals/instrumentation/performance-related-monitoring.md
index 1279cd293..91b799f15 100644
--- a/contributors/design-proposals/instrumentation/performance-related-monitoring.md
+++ b/contributors/design-proposals/instrumentation/performance-related-monitoring.md
@@ -32,7 +32,7 @@ how careful we need to be.

 ### Huge number of handshakes slows down API server

 It was a long standing issue for performance and is/was an important bottleneck for scalability (https://github.com/kubernetes/kubernetes/issues/13671). The bug directly
-causing this problem was incorrect (from the golangs standpoint) handling of TCP connections. Secondary issue was that elliptic curve encryption (only one available in go 1.4)
+causing this problem was incorrect (from the golang's standpoint) handling of TCP connections. A secondary issue was that elliptic curve encryption (the only one available in go 1.4)
 is unbelievably slow.

 ## Proposed metrics/statistics to gather/compute to avoid problems

@@ -42,7 +42,7 @@ is unbelievably slow.
 Basic ideas:
 - number of Pods/ReplicationControllers/Services in the cluster
 - number of running replicas of master components (if they are replicated)
-- current elected master of ectd cluster (if running distributed version)
+- current elected master of etcd cluster (if running distributed version)
 - number of master component restarts
 - number of lost Nodes

diff --git a/contributors/design-proposals/network/external-lb-source-ip-preservation.md b/contributors/design-proposals/network/external-lb-source-ip-preservation.md
index ac48ed5de..50140a0ee 100644
--- a/contributors/design-proposals/network/external-lb-source-ip-preservation.md
+++ b/contributors/design-proposals/network/external-lb-source-ip-preservation.md
@@ -162,7 +162,7 @@ For the 1.4 release, this feature will be implemented for the GCE cloud provider

 - Node: On the node, we expect to see the real source IP of the client. Destination IP will be the Service Virtual External IP.

-- Pod: For processes running inside the Pod network namepsace, the source IP will be the real client source IP. The destination address will the be Pod IP.
+- Pod: For processes running inside the Pod network namespace, the source IP will be the real client source IP. The destination address will be the Pod IP.

 #### GCE Expected Packet Destination IP (HealthCheck path)

diff --git a/contributors/design-proposals/node/kubelet-cri-logging.md b/contributors/design-proposals/node/kubelet-cri-logging.md
index 37f3e3da9..a19ff3f5b 100644
--- a/contributors/design-proposals/node/kubelet-cri-logging.md
+++ b/contributors/design-proposals/node/kubelet-cri-logging.md
@@ -199,7 +199,7 @@ clients attaching as well.

 There are ad-hoc solutions/discussions that addresses one or two of the
 requirements, but no comprehensive solution for CRI specifically has been
-proposed so far (with the excpetion of @tmrtfs's proposal
+proposed so far (with the exception of @tmrtfs's proposal
 [#33111](https://github.com/kubernetes/kubernetes/pull/33111), which has a
 much wider scope). It has come up in discussions that kubelet can delegate all
 the logging management to the runtime to allow maximum flexibility. However, it is

diff --git a/contributors/design-proposals/node/kubelet-rkt-runtime.md b/contributors/design-proposals/node/kubelet-rkt-runtime.md
index 98a06187a..1bc6435bc 100644
--- a/contributors/design-proposals/node/kubelet-rkt-runtime.md
+++ b/contributors/design-proposals/node/kubelet-rkt-runtime.md
@@ -74,7 +74,7 @@ In addition, the rkt cli has historically been the primary interface to the rkt

 The initial integration will execute the rkt binary directly for app creation/start/stop/removal, as well as image pulling/removal.

-The creation of pod sanbox is also done via rkt command line, but it will run under `systemd-run` so it's monitored by the init process.
+The creation of the pod sandbox is also done via the rkt command line, but it will run under `systemd-run` so it's monitored by the init process.

 In the future, some of these decisions are expected to be changed such that rkt is vendored as a library dependency for all operations, and other init systems will be supported as well.