Fix typos and linted_packages sorting

This commit is contained in:
Christian Koep 2016-10-18 10:12:26 +02:00
parent 2fddf46ea4
commit f8e0608e9e
11 changed files with 21 additions and 21 deletions

View File

@@ -123,7 +123,7 @@ log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFil
## Access the service
-*Don't forget* that services in Kubernetes are only acessible from containers in the cluster. For different behavior you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example service descriptor, its usage is out of scope of this document, for now.
+*Don't forget* that services in Kubernetes are only accessible from containers in the cluster. For different behavior you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example service descriptor, its usage is out of scope of this document, for now.
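For reference, a minimal sketch of a Service descriptor that requests an external load-balancer (the selector label here is an illustrative assumption, not the example's actual descriptor; port 9200 is the standard Elasticsearch HTTP port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  # Requires a cloud provider that supports external load balancers.
  type: LoadBalancer
  selector:
    component: elasticsearch   # illustrative label, adjust to match your pods
  ports:
  - port: 9200                 # Elasticsearch HTTP port
    targetPort: 9200
```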
```
$ kubectl get service elasticsearch

View File

@@ -2,7 +2,7 @@
This example shows how to use experimental persistent volume provisioning.
-### Pre-requisites
+### Prerequisites
This example assumes that you have an understanding of Kubernetes administration and can modify the
scripts that launch kube-controller-manager.
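Purely as an illustrative sketch, and assuming the alpha-style annotation used by the experimental feature (the exact annotation key, the claim name, and the provisioner configuration depend on your Kubernetes version and cluster setup), a claim requesting a dynamically provisioned volume might look like:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1                                   # illustrative name
  annotations:
    # Assumption: alpha-style storage-class annotation used by the
    # experimental feature; newer clusters use StorageClass objects instead.
    volume.alpha.kubernetes.io/storage-class: "foo"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```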

View File

@@ -1,11 +1,11 @@
### explorer
-Explorer is a little container for examining the runtime environment kubernetes produces for your pods.
+Explorer is a little container for examining the runtime environment Kubernetes produces for your pods.
The intended use is to substitute gcr.io/google_containers/explorer for your intended container, and then visit it via the proxy.
Currently, you can look at:
-* The environment variables to make sure kubernetes is doing what you expect.
+* The environment variables to make sure Kubernetes is doing what you expect.
* The filesystem to make sure the mounted volumes and files are also what you expect.
* Perform DNS lookups, to see how DNS works.
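A minimal pod sketch that runs the explorer image (the pod name and the container port are illustrative assumptions; check the image's documentation for the port it actually serves on):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: explorer           # illustrative name
spec:
  containers:
  - name: explorer
    image: gcr.io/google_containers/explorer
    ports:
    - containerPort: 8080  # assumed port for the HTTP interface
```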

View File

@@ -98,7 +98,7 @@ $ kubectl delete -f examples/guestbook/
### Step One: Start up the redis master
-Before continuing to the gory details, we also recommend you to read [Quick walkthrough](../../docs/user-guide/#quick-walkthrough), [Thorough walkthough](../../docs/user-guide/#thorough-walkthrough) and [Concept guide](../../docs/user-guide/#concept-guide).
+Before continuing to the gory details, we also recommend you to read [Quick walkthrough](../../docs/user-guide/#quick-walkthrough), [Thorough walkthrough](../../docs/user-guide/#thorough-walkthrough) and [Concept guide](../../docs/user-guide/#concept-guide).
**Note**: The redis master in this example is *not* highly available. Making it highly available would be an interesting, but intricate exercise — redis doesn't actually support multi-master Deployments at this point in time, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on.
#### Define a Deployment
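As an illustration of what such a Deployment might look like (the image, labels, and API version are assumptions based on that era's conventions, not the guestbook's exact manifest):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1              # a single, non-HA master, as the note above explains
  template:
    metadata:
      labels:
        app: redis         # illustrative labels
        role: master
    spec:
      containers:
      - name: master
        image: redis       # illustrative image
        ports:
        - containerPort: 6379
```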

View File

@@ -68,7 +68,7 @@ Examples are not:
in the example config.
* Only use the code highlighting types
[supported by Rouge](https://github.com/jneen/rouge/wiki/list-of-supported-languages-and-lexers),
-as this is what Github Pages uses.
+as this is what GitHub Pages uses.
* Commands to be copied use the `shell` syntax highlighting type, and
do not include any kind of prompt.
* Example output is in a separate block quote to distinguish it from

View File

@@ -1,4 +1,4 @@
-# Mysql installation with cinder volume plugin
+# MySQL installation with cinder volume plugin
Cinder is a Block Storage service for OpenStack. This example shows how it can be used as an attachment mounted to a pod in Kubernetes.
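A rough sketch of how a pre-created Cinder volume can be attached to a pod (the volume ID is a placeholder and the pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-cinder       # illustrative name
spec:
  containers:
  - name: mysql
    image: mysql           # illustrative image
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    cinder:
      volumeID: "<your-cinder-volume-id>"   # placeholder, not a real ID
      fsType: ext4
```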

View File

@@ -23,7 +23,7 @@ Demonstrated Kubernetes Concepts:
## Quickstart
-Put your desired mysql password in a file called `password.txt` with
+Put your desired MySQL password in a file called `password.txt` with
no trailing newline. The first `tr` command will remove the newline if
your editor added one.
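The password typically ends up in the cluster as a Secret. A hedged sketch only (the Secret name and key are assumptions, and the example itself may create the Secret from `password.txt` with kubectl rather than from a manifest):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass          # illustrative name
type: Opaque
stringData:
  password: YOUR_PASSWORD   # placeholder; store it without a trailing newline
```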

View File

@@ -6,7 +6,7 @@ This example will create a DaemonSet which places the New Relic monitoring agent
### Step 0: Prerequisites
-This process will create priviliged containers which have full access to the host system for logging. Beware of the security implications of this.
+This process will create privileged containers which have full access to the host system for logging. Beware of the security implications of this.
If you are using a Salt based KUBERNETES\_PROVIDER (**gce**, **vagrant**, **aws**), you should make sure the creation of privileged containers via the API is enabled. Check `cluster/saltbase/pillar/privilege.sls`.
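For context, "privileged" is set through the container's security context; a minimal illustrative fragment (the container name and image are assumptions, not the manifest this example ships):

```yaml
# Fragment of a pod template; names and image are illustrative.
containers:
- name: newrelic-agent
  image: newrelic/nrsysmond   # illustrative image reference
  securityContext:
    privileged: true          # full access to the host, as warned above
```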

View File

@@ -168,7 +168,7 @@ You now have 10 Firefox and 10 Chrome nodes, happy Seleniuming!
### Debugging
-Sometimes it is neccessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.
+Sometimes it is necessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.
```console
kubectl port-forward --pod=POD_NAME 5900:5900

View File

@@ -101,7 +101,7 @@ kubectl scale rc cassandra --replicas=4
kubectl delete rc cassandra
#
-# Create a daemonset to place a cassandra node on each kubernetes node
+# Create a DaemonSet to place a cassandra node on each kubernetes node
#
kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
@@ -659,7 +659,7 @@ cluster can react by re-replicating the data to other running nodes.
`DaemonSet` is designed to place a single pod on each node in the Kubernetes
cluster. That will give us data redundancy. Let's create a
-daemonset to start our storage cluster:
+DaemonSet to start our storage cluster:
<!-- BEGIN MUNGE: EXAMPLE cassandra-daemonset.yaml -->
@@ -725,16 +725,16 @@ spec:
[Download example](cassandra-daemonset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-daemonset.yaml -->
-Most of this Daemonset definition is identical to the ReplicationController
+Most of this DaemonSet definition is identical to the ReplicationController
definition above; it simply gives the daemon set a recipe to use when it creates
new Cassandra pods, and targets all Cassandra nodes in the cluster.
Differentiating aspects are the `nodeSelector` attribute, which allows the
-Daemonset to target a specific subset of nodes (you can label nodes just like
+DaemonSet to target a specific subset of nodes (you can label nodes just like
other resources), and the lack of a `replicas` attribute due to the 1-to-1 node-
pod relationship.
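Schematically, that `nodeSelector` lives in the DaemonSet's pod template; a small illustrative fragment (the node label key and value are assumptions, not taken from the example's actual manifest):

```yaml
# Fragment of a DaemonSet spec; the node label is an illustrative assumption.
spec:
  template:
    metadata:
      labels:
        app: cassandra            # same label the service selects on
    spec:
      nodeSelector:
        app: cassandra-node       # only nodes carrying this label get a pod
```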
-Create this daemonset:
+Create this DaemonSet:
```console
@@ -750,7 +750,7 @@ $ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --valida
```
-You can see the daemonset running:
+You can see the DaemonSet running:
```console
@@ -793,8 +793,8 @@ UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e
```
**Note**: This example had you delete the cassandra Replication Controller before
-you created the Daemonset. This is because to keep this example simple the
-RC and the Daemonset are using the same `app=cassandra` label (so that their pods map to the
+you created the DaemonSet. This is because to keep this example simple the
+RC and the DaemonSet are using the same `app=cassandra` label (so that their pods map to the
service we created, and so that the SeedProvider can identify them).
If we didn't delete the RC first, the two resources would conflict with
@@ -821,7 +821,7 @@ In Cassandra, a `SeedProvider` bootstraps the gossip protocol that Cassandra use
Cassandra nodes. Seed addresses are hosts deemed as contact points. Cassandra
instances use the seed list to find each other and learn the topology of the
ring. The [`KubernetesSeedProvider`](java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
-discovers Cassandra seeds IP addresses vis the Kubernetes API, those Cassandra
+discovers Cassandra seeds IP addresses via the Kubernetes API, those Cassandra
instances are defined within the Cassandra Service.
Refer to the custom seed provider [README](java/README.md) for further

View File

@@ -16,7 +16,7 @@ The basic idea is this: three replication controllers with a single pod, corresp
By default, there are only three pods (and hence replication controllers) for this cluster. This number can be increased using the variable NUM_NODES, specified in the replication controller configuration file. It's important to know that the number of nodes must always be odd.
-When the replication controller is created, it results in the corresponding container to start, run an entrypoint script that installs the mysql system tables, set up users, and build up a list of servers that is used with the galera parameter ```wsrep_cluster_address```. This is a list of running nodes that galera uses for election of a node to obtain SST (Single State Transfer) from.
+When the replication controller is created, it results in the corresponding container to start, run an entrypoint script that installs the MySQL system tables, set up users, and build up a list of servers that is used with the galera parameter ```wsrep_cluster_address```. This is a list of running nodes that galera uses for election of a node to obtain SST (Single State Transfer) from.
Note: Kubernetes best practice is to pre-create the services for each controller; the configuration files contain the service and replication controller for each node, and when created they will result in both a service and a replication controller running for the given node. It's important that pxc-node1.yaml be processed first and that no pxc-nodeN services exist without corresponding replication controllers. The reason is that if there is a node in ```wsrep_cluster_address``` without a backing galera node, there will be nothing to obtain SST from, which will cause the node to shut itself down and the container in question to exit (and another to be relaunched soon after, repeatedly).
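As a hedged illustration of pre-creating such a per-node service (the selector label is an assumption; the ports are the conventional MySQL and Galera ports, not necessarily the example's exact manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pxc-node1
spec:
  selector:
    node: pxc-node1          # illustrative label matching this node's pod
  ports:
  - name: mysql
    port: 3306               # MySQL client port
  - name: galera-replication
    port: 4567               # Galera group communication
  - name: ist
    port: 4568               # incremental state transfer
  - name: sst
    port: 4444               # state snapshot transfer
```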
@@ -32,7 +32,7 @@ Create the service and replication controller for the first node:
Repeat the same previous steps for ```pxc-node2``` and ```pxc-node3```
-When complete, you should be able connect with a mysql client to the IP address
+When complete, you should be able connect with a MySQL client to the IP address
service ```pxc-cluster``` to find a working cluster
### An example of creating a cluster