diff --git a/docs/getting-started-guides/rackspace.md b/docs/getting-started-guides/rackspace.md
index 5ed73f495f..5087344b83 100644
--- a/docs/getting-started-guides/rackspace.md
+++ b/docs/getting-started-guides/rackspace.md
@@ -62,7 +62,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
 ## Network Design

 - eth0 - Public Interface used for servers/containers to reach the internet
-- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
+- eth1 - ServiceNet - Intra-cluster communication (k8s, etcd, etc.) communicate via this interface. The `cloud-config` files use the special CoreOS identifier `$private_ipv4` to configure the services.
 - eth2 - Cloud Network - Used for k8s pods to communicate with one another. The proxy service will pass traffic via this interface.

 ## Support Level
diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md
index 7fffe7145e..970939465d 100644
--- a/docs/getting-started-guides/scratch.md
+++ b/docs/getting-started-guides/scratch.md
@@ -72,7 +72,7 @@ accomplished in two ways:
     pod network through traffic encapsulation (e.g vxlan).
   - Encapsulation reduces performance, though exactly how much depends on your solution.
 - **Without an overlay network**
-  - Configure the underlying network fabric (switches, routers, etc) to be aware of pod IP addresses.
+  - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
   - This does not require the encapsulation provided by an overlay, and so can achieve better performance.
diff --git a/docs/user-guide/jobs.md b/docs/user-guide/jobs.md
index 59e09e7bcd..ae5e16b3d3 100644
--- a/docs/user-guide/jobs.md
+++ b/docs/user-guide/jobs.md
@@ -157,7 +157,7 @@ parallelism, for a variety or reasons:
   remaining completions. Higher values of `.spec.parallelism` are effectively ignored.
 - For work queue jobs, no new pods are started after any pod has succeeded -- remaining pods are allowed to complete, however.
 - If the controller has not had time to react.
-- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc),
+- If the controller failed to create pods for any reason (lack of ResourceQuota, lack of permission, etc.),
   then there may be fewer pods than requested.
 - The controller may throttle new pod creation due to excessive previous pod failures in the same Job.
 - When a pod is gracefully shutdown, it make take time to stop.
diff --git a/docs/user-guide/volumes.md b/docs/user-guide/volumes.md
index e9f1f75a82..e33bd3e018 100644
--- a/docs/user-guide/volumes.md
+++ b/docs/user-guide/volumes.md
@@ -502,7 +502,7 @@ See the [Quobyte example](https://github.com/kubernetes/kubernetes/tree/{{page.g

 ## Resources

-The storage media (Disk, SSD, etc) of an `emptyDir` volume is determined by the
+The storage media (Disk, SSD, etc.) of an `emptyDir` volume is determined by the
 medium of the filesystem holding the kubelet root dir (typically
 `/var/lib/kubelet`). There is no limit on how much space an `emptyDir` or
 `hostPath` volume can consume, and no isolation between containers or between
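
For context on the rackspace.md passage touched above: the CoreOS `$private_ipv4` substitution is expanded by cloud-init at boot, so services configured in `cloud-config` bind to the ServiceNet (eth1) address. The fragment below is a minimal, hypothetical sketch of that pattern; it is not taken from the actual `cluster/rackspace` templates.

```yaml
#cloud-config
# Hypothetical fragment: advertise etcd on the ServiceNet address that
# CoreOS substitutes for $private_ipv4, while listening on all interfaces
# for client traffic.
coreos:
  etcd2:
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
```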
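
The jobs.md hunk sits in the discussion of why a Job may run fewer pods than `.spec.parallelism` requests. As an illustration only (this manifest is not part of the patch; the name and image are placeholders), a fixed-completion-count Job with `completions` greater than `parallelism` caps concurrent pods at the `parallelism` value:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-example        # hypothetical name
spec:
  completions: 8                # eight successful pods are required in total
  parallelism: 2                # at most two pods run at the same time
  template:
    metadata:
      name: parallel-example
    spec:
      containers:
      - name: worker
        image: busybox          # placeholder image
        command: ["sh", "-c", "echo processing one item && sleep 10"]
      restartPolicy: Never      # Jobs require Never or OnFailure
```

When fewer than two completions remain, or when the controller is throttling after repeated pod failures, the number of running pods can drop below the requested parallelism, as the list in jobs.md describes.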
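
Likewise for the volumes.md paragraph: an `emptyDir` is backed by whatever medium holds the kubelet root dir unless `medium: Memory` is requested, in which case a RAM-backed tmpfs is used instead. A minimal sketch under those assumptions (pod name and image are placeholders, not from the documentation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example        # hypothetical name
spec:
  containers:
  - name: scratch-user
    image: busybox              # placeholder image
    command: ["sh", "-c", "dd if=/dev/zero of=/scratch/fill bs=1M count=50 && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}                # backed by the filesystem under /var/lib/kubelet (disk, SSD, ...)
    # emptyDir:
    #   medium: Memory          # alternative: RAM-backed tmpfs rather than disk/SSD
```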