Remove excess Spaces (#10706)

LiuDui 2018-10-23 19:32:44 +08:00 committed by k8s-ci-robot
parent 712f463ba1
commit d565f7de6a
4 changed files with 4 additions and 4 deletions


@@ -37,7 +37,7 @@ why you might want multiple clusters are:
* Low latency: Having clusters in multiple regions minimises latency by serving
users from the cluster that is closest to them.
* Fault isolation: It might be better to have multiple small clusters rather
than a single large cluster for fault isolation (for example: multiple
clusters in different availability zones of a cloud provider).
* Scalability: There are scalability limits to a single Kubernetes cluster (this
should not be the case for most users. For more details:


@@ -460,7 +460,7 @@ root filesystem (i.e. no writable layer).
This specifies a whitelist of Flexvolume drivers that are allowed to be used
by flexvolume. An empty list or nil means there is no restriction on the drivers.
Please make sure the [`volumes`](#volumes-and-file-systems) field contains the
`flexVolume` volume type; no Flexvolume driver is allowed otherwise.
For example:
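The example itself is truncated by this hunk. As a rough sketch only (assuming the `policy/v1beta1` API that was current at the time; the policy name and driver names below are hypothetical), a policy that allows the `flexVolume` type and whitelists specific drivers could be applied like this:

```shell
# Sketch only: "allow-flex-volumes", "example/lvm" and "example/cifs" are illustrative.
cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-flex-volumes
spec:
  # Required baseline fields for any PodSecurityPolicy.
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  # The flexVolume type must be allowed here...
  volumes:
    - flexVolume
  # ...for this whitelist of drivers to take effect.
  allowedFlexVolumes:
    - driver: example/lvm
    - driver: example/cifs
EOF
```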


@@ -208,7 +208,7 @@ Till now we have only accessed the nginx server from within the cluster. Before
* An nginx server configured to use the certificates
* A [secret](/docs/concepts/configuration/secret/) that makes the certificates accessible to pods

You can acquire all these from the [nginx https example](https://github.com/kubernetes/examples/tree/{{< param "githubbranch" >}}/staging/https-nginx/). This requires having the go and make tools installed. If you don't want to install those, then follow the manual steps later. In short:

```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
```
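If you would rather skip go and make, a minimal sketch of the manual route (assuming `openssl` and `kubectl` are available; the secret name and `/tmp` paths below are illustrative) is:

```shell
# Generate a self-signed key and certificate for the nginx server.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"

# Store them in a TLS secret so pods can mount them.
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
```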


@@ -196,7 +196,7 @@ having working [readiness probes](/docs/tasks/configure-pod-container/configure-
In this mode, kube-proxy watches Kubernetes Services and Endpoints,
calls the `netlink` interface to create IPVS rules accordingly, and periodically syncs
IPVS rules with Kubernetes Services and Endpoints to make sure the IPVS state is
consistent with the expectation. When a Service is accessed, traffic will
be redirected to one of the backend Pods.
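As a rough sketch of what this looks like in practice (assuming the IPVS kernel modules are loaded and the `ipvsadm` tool is installed on the node), IPVS mode is selected through the proxy mode setting, and the rules kube-proxy programs can be inspected directly:

```shell
# Run kube-proxy in IPVS mode (equivalently, set `mode: ipvs` in the
# kube-proxy configuration file).
kube-proxy --proxy-mode=ipvs

# List the virtual servers and backend Pod addresses that kube-proxy has programmed.
ipvsadm -Ln
```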