Merge pull request #11424 from lavalamp/mungePreformatted
Munge preformatted
Commit b9c54f26f1
@ -36,14 +36,19 @@ volume.
Create a volume in the same region as your node, add your volume
information to the pod description file aws-ebs-web.yaml, then create
the pod:

```shell
$ kubectl create -f examples/aws_ebs/aws-ebs-web.yaml
```

Add some data to the volume if it is empty:

```shell
$ echo "Hello World" >& /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/{Region}/{Volume ID}/index.html
```
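
Note that this path lives on the node where the volume is mounted, so the command above has to be run on that node. A minimal sketch of finding that node (the pod name `aws-web` is an assumption, not taken from the diff):

```shell
# Hypothetical lookup: find which node the pod landed on, then ssh to that
# node and run the echo command above. The pod name "aws-web" is assumed.
$ kubectl get pods
$ kubectl describe pod aws-web | grep -i node
```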

You should now be able to query your web server:

```shell
$ curl <Pod IP address>
Hello World
@ -96,6 +96,7 @@ In theory could create a single Cassandra pod right now but since `KubernetesSee
In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of Pods in a Cassandra cluster can be a Kubernetes Service, or even just the single Pod we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set of Pods. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is the way that we initially use Services with Cassandra.

Here is the service description:

```yaml
apiVersion: v1
kind: Service
@ -113,6 +114,7 @@ spec:
The important thing to note here is the ```selector```. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
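
As a quick sketch of what that label query matches (assuming the Pod from the specification above is running), you can run the same selector by hand:

```sh
# Hypothetical check: list the pods matched by the Service's selector.
$ kubectl get pods -l name=cassandra
```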

Create this service as follows:

```sh
$ kubectl create -f examples/cassandra/cassandra-service.yaml
```
@ -224,6 +226,7 @@ $ kubectl create -f examples/cassandra/cassandra-controller.yaml
Now this is actually not that interesting, since we haven't done anything new. Now it will get interesting.

Let's scale our cluster to 2:

```sh
$ kubectl scale rc cassandra --replicas=2
```
@ -253,11 +256,13 @@ UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e
```

Now let's scale our cluster to 4 nodes:

```sh
$ kubectl scale rc cassandra --replicas=4
```

In a few moments, you can examine the status again:

```sh
$ kubectl exec -ti cassandra -- nodetool status
Datacenter: datacenter1
@ -228,6 +228,7 @@ On GCE this can be done with:
```
$ gcloud compute firewall-rules create --allow=tcp:5555 --target-tags=kubernetes-minion kubernetes-minion-5555
```

Please remember to delete the rule after you are done with the example (on GCE: `$ gcloud compute firewall-rules delete kubernetes-minion-5555`).

To bring up the pods, run this command `$ kubectl create -f examples/celery-rabbitmq/flower-controller.yaml`. This controller is defined as follows:
@ -47,6 +47,7 @@ with the basic authentication username and password.

Here is an example replication controller specification that creates 4 instances of Elasticsearch, which is in the file
[music-rc.yaml](music-rc.yaml).

```
apiVersion: v1
kind: ReplicationController
@ -88,6 +89,7 @@ spec:
secret:
  secretName: apiserver-secret
```

The `CLUSTER_NAME` variable gives a name to the cluster and allows multiple separate clusters to
exist in the same namespace.
The `SELECTOR` variable should be set to a label query that identifies the Elasticsearch
@ -99,6 +101,7 @@ for the replication controller (in this case `mytunes`).

Before creating pods with the replication controller, a secret containing the bearer authentication token
should be set up. A template is provided in the file [apiserver-secret.yaml](apiserver-secret.yaml):

```
apiVersion: v1
kind: Secret
@ -109,8 +112,10 @@ data:
  token: "TOKEN"

```

Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the base64-encoded
version of the bearer token reported by `kubectl config view` e.g.

```
$ kubectl config view
...
@ -122,7 +127,9 @@ $ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64
eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=

```

resulting in the file:

```
apiVersion: v1
kind: Secret
@ -133,20 +140,26 @@ data:
  token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK="

```

which can be used to create the secret in your namespace:

```
kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes
secrets/apiserver-secret

```

Now you are ready to create the replication controller which will then create the pods:

```
$ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes
replicationcontrollers/music-db

```

It's also useful to have a [service](../../docs/user-guide/services.md) with a load balancer for accessing the Elasticsearch
cluster, which can be found in the file [music-service.yaml](music-service.yaml).

```
apiVersion: v1
kind: Service
@ -164,13 +177,17 @@ spec:
    targetPort: es
  type: LoadBalancer
```

Let's create the service with an external load balancer:

```
$ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytunes
services/music-server

```

Let's see what we've got:

```
$ kubectl get pods,rc,services,secrets --namespace=mytunes
@ -187,7 +204,9 @@ music-server name=music-db name=music-db 10.0.45.177 9200/TCP
NAME               TYPE      DATA
apiserver-secret   Opaque    1
```

This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for Google Compute Engine) we can make queries via the service which will be fielded by the matching Elasticsearch pods.
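
On Google Compute Engine, a hedged sketch of such a firewall rule (the rule name and target tag are assumptions; adjust them to your cluster):

```
# Hypothetical GCE firewall rule opening port 9200 to the cluster nodes.
$ gcloud compute firewall-rules create music-db-9200 --allow=tcp:9200 --target-tags=kubernetes-minion
```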

```
$ curl 104.197.12.157:9200
{
@ -218,7 +237,9 @@ $ curl 104.197.12.157:9200
  "tagline" : "You Know, for Search"
}
```

We can query the nodes to confirm that an Elasticsearch cluster has been formed.

```
$ curl 104.197.12.157:9200/_nodes?pretty=true
{
@ -261,7 +282,9 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true
"hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ]
...
```

Let's ramp up the number of Elasticsearch nodes from 4 to 10:

```
$ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytunes
scaled
@ -279,7 +302,9 @@ music-db-x7j2w 1/1 Running 0 1m
music-db-zjqyv   1/1       Running   0          1m

```

Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:

```
$ curl 104.197.12.157:9200/_nodes?pretty=true | grep name
"cluster_name" : "mytunes-db",
@ -44,6 +44,7 @@ Currently, you can look at:
`pod.json` is supplied as an example. You can control the port it serves on with the -port flag.

Example from the command line (the DNS lookup looks better from a web browser):

```
$ kubectl create -f examples/explorer/pod.json
$ kubectl proxy &
@ -56,14 +56,17 @@ Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
]

```

The "IP" field should be filled with the address of a node in the Glusterfs server cluster. In this example, it is fine to give any valid value (from 1 to 65535) to the "port" field.

Create the endpoints:

```shell
$ kubectl create -f examples/glusterfs/glusterfs-endpoints.json
```

You can verify that the endpoints are successfully created by running

```shell
$ kubectl get endpoints
NAME                ENDPOINTS
@ -92,9 +95,11 @@ The parameters are explained as the followings.
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.

Create a pod that has a container using the Glusterfs volume:

```shell
$ kubectl create -f examples/glusterfs/glusterfs-pod.json
```

You can verify that the pod is running:

```shell
@ -107,6 +112,7 @@ $ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}'
```

You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted:

```shell
$ mount | grep kube_vol
10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
@ -58,30 +58,36 @@ This example assumes that you have a working cluster. See the [Getting Started G
Use the `examples/guestbook-go/redis-master-controller.json` file to create a [replication controller](../../docs/user-guide/replication-controller.md) and Redis master [pod](../../docs/user-guide/pods.md). The pod runs a Redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so that the pod benefits from the self-healing mechanism in Kubernetes (keeps the pods alive).

1. Use the [redis-master-controller.json](redis-master-controller.json) file to create the Redis master replication controller in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:

```shell
$ kubectl create -f examples/guestbook-go/redis-master-controller.json
replicationcontrollers/redis-master
```

2. To verify that the redis-master-controller is up, list all the replication controllers in the cluster with the `kubectl get rc` command:

```shell
$ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)          SELECTOR                REPLICAS
redis-master   redis-master   gurpartap/redis   app=redis,role=master   1
...
```

Result: The replication controller then creates the single Redis master pod.

3. To verify that the redis-master pod is running, list all the pods in the cluster with the `kubectl get pods` command:

```shell
$ kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-xx4uv   1/1       Running   0          1m
...
```

Result: You'll see a single Redis master pod and the machine where the pod is running after the pod gets placed (may take up to thirty seconds).

4. To verify what containers are running in the redis-master pod, you can SSH to that machine with `gcloud compute ssh --zone` *`zone_name`* *`host_name`* and then run `docker ps`:

```shell
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-bz1p
@ -89,6 +95,7 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
CONTAINER ID     IMAGE                      COMMAND                 CREATED         STATUS
d5c458dabe50     gurpartap/redis:latest     "/usr/local/bin/redi    5 minutes ago   Up 5 minutes
```

Note: The initial `docker pull` can take a few minutes, depending on network conditions.

### Step Two: Create the Redis master service <a id="step-two"></a>
@ -97,18 +104,21 @@ A Kubernetes '[service](../../docs/user-guide/services.md)' is a named load bala
Services find the containers to load balance based on pod labels. The pod that you created in Step One has the labels `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service.
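
As a hedged sketch, you can run that same label query by hand to see which pods the service will select:

```shell
# Hypothetical check: list the pods matched by the service's selector.
$ kubectl get pods -l app=redis,role=master
```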

1. Use the [redis-master-service.json](redis-master-service.json) file to create the service in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:

```shell
$ kubectl create -f examples/guestbook-go/redis-master-service.json
services/redis-master
```

2. To verify that the redis-master service is up, list all the services in the cluster with the `kubectl get services` command:

```shell
$ kubectl get services
NAME           LABELS                  SELECTOR                IP(S)        PORT(S)
redis-master   app=redis,role=master   app=redis,role=master   10.0.136.3   6379/TCP
...
```

Result: All new pods will see the `redis-master` service running on the host (`$REDIS_MASTER_SERVICE_HOST` environment variable) at port 6379, or running on `redis-master:6379`. After the service is created, the service proxy on each node is configured to set up a proxy on the specified port (in our example, that's port 6379).
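
A hedged way to see this from inside any pod created after the service exists (the pod name is a placeholder):

```shell
# Hypothetical check: the service address is exposed to pods through
# environment variables.
$ kubectl exec <some-pod-name> -- env | grep REDIS_MASTER_SERVICE
```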
@ -116,12 +126,14 @@ Services find the containers to load balance based on pod labels. The pod that y
The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read slaves we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing the multiple instances of a replicated pod.

1. Use the file [redis-slave-controller.json](redis-slave-controller.json) to create the replication controller by running the `kubectl create -f` *`filename`* command:

```shell
$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
replicationcontrollers/redis-slave
```

2. To verify that the redis-slave replication controller is running, run the `kubectl get rc` command:

```shell
$ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)          SELECTOR                REPLICAS
@ -129,15 +141,18 @@ The Redis master we created earlier is a single pod (REPLICAS = 1), while the Re
redis-slave    redis-slave    gurpartap/redis   app=redis,role=slave    2
...
```

Result: The replication controller creates and configures the Redis slave pods through the redis-master service (name:port pair, in our example that's `redis-master:6379`).

Example:
The Redis slaves get started by the replication controller with the following command:

```shell
redis-server --slaveof redis-master 6379
```

3. To verify that the Redis master and slave pods are running, run the `kubectl get pods` command:

```shell
$ kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
@ -146,6 +161,7 @@ The Redis master we created earlier is a single pod (REPLICAS = 1), while the Re
redis-slave-iai40    1/1       Running   0          1m
...
```

Result: You see the single Redis master and two Redis slave pods.

### Step Four: Create the Redis slave service <a id="step-four"></a>
@ -153,12 +169,14 @@ The Redis master we created earlier is a single pod (REPLICAS = 1), while the Re
Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the Redis slave service provides transparent load balancing to clients.

1. Use the [redis-slave-service.json](redis-slave-service.json) file to create the Redis slave service by running the `kubectl create -f` *`filename`* command:

```shell
$ kubectl create -f examples/guestbook-go/redis-slave-service.json
services/redis-slave
```

2. To verify that the redis-slave service is up, list all the services in the cluster with the `kubectl get services` command:

```shell
$ kubectl get services
NAME           LABELS                  SELECTOR                IP(S)        PORT(S)
@ -166,6 +184,7 @@ Just like the master, we want to have a service to proxy connections to the read
redis-slave    app=redis,role=slave    app=redis,role=slave    10.0.21.92   6379/TCP
...
```

Result: The service is created with labels `app=redis` and `role=slave` to identify that the pods are running the Redis slaves.

Tip: It is helpful to set labels on your services themselves--as we've done here--to make it easy to locate them later.
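
For example, a hedged sketch of locating these services later by their labels:

```shell
# Hypothetical lookup of the Redis services by the labels set on them.
$ kubectl get services -l app=redis
```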
@ -175,12 +194,14 @@ Tip: It is helpful to set labels on your services themselves--as we've done here
This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. The pods we are creating expose a simple JSON interface and serve a jQuery-Ajax based UI. Like the Redis read slaves, these pods are also managed by a replication controller.

1. Use the [guestbook-controller.json](guestbook-controller.json) file to create the guestbook replication controller by running the `kubectl create -f` *`filename`* command:

```shell
$ kubectl create -f examples/guestbook-go/guestbook-controller.json
replicationcontrollers/guestbook
```

2. To verify that the guestbook replication controller is running, run the `kubectl get rc` command:

```
$ kubectl get rc
CONTROLLER     CONTAINER(S)   IMAGE(S)          SELECTOR                REPLICAS
@ -191,6 +212,7 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
```

3. To verify that the guestbook pods are running (it might take up to thirty seconds to create the pods), list all the pods in the cluster with the `kubectl get pods` command:

```shell
$ kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
@ -202,6 +224,7 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
redis-slave-iai40    1/1       Running   0          6m
...
```

Result: You see a single Redis master, two Redis slaves, and three guestbook pods.

### Step Six: Create the guestbook service <a id="step-six"></a>
@ -209,12 +232,14 @@ This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni
Just like the others, we create a service to group the guestbook pods but this time, to make the guestbook front-end externally visible, we specify `"type": "LoadBalancer"`.

1. Use the [guestbook-service.json](guestbook-service.json) file to create the guestbook service by running the `kubectl create -f` *`filename`* command:

```shell
$ kubectl create -f examples/guestbook-go/guestbook-service.json
```

2. To verify that the guestbook service is up, list all the services in the cluster with the `kubectl get services` command:

```
$ kubectl get services
NAME           LABELS                  SELECTOR                IP(S)        PORT(S)
@ -224,6 +249,7 @@ Just like the others, we create a service to group the guestbook pods but this t
redis-slave    app=redis,role=slave    app=redis,role=slave    10.0.21.92   6379/TCP
...
```

Result: The service is created with the label `app=guestbook`.

### Step Seven: View the guestbook <a id="step-seven"></a>
@ -253,6 +279,7 @@ You can now play with the guestbook that you just created by opening it in a bro
After you're done playing with the guestbook, you can clean up by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.

Delete all the resources by running the following `kubectl delete -f` *`filename`* command:

```shell
$ kubectl delete -f examples/guestbook-go
guestbook-controller
@ -130,6 +130,7 @@ NAME READY STATUS RESTARTS AG
...
redis-master-dz33o   1/1       Running   0          2h
```

(Note that an initial `docker pull` to grab a container image may take a few minutes, depending on network conditions. A pod will be reported as `Pending` while its image is being downloaded.)

#### Optional Interlude
@ -221,6 +222,7 @@ Create the service by running:
$ kubectl create -f examples/guestbook/redis-master-service.yaml
services/redis-master
```

Then check the list of services, which should include the redis-master:

```shell
@ -61,6 +61,7 @@ In this case, we shall not run a single Hazelcast pod, because the discovery mec
In Kubernetes a _[Service](../../docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.

Here is the service description:

```yaml
apiVersion: v1
kind: Service
@ -78,6 +79,7 @@ spec:
The important thing to note here is the `selector`. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.

Create this service as follows:

```sh
$ kubectl create -f examples/hazelcast/hazelcast-service.yaml
```
@ -138,6 +140,7 @@ $ kubectl create -f examples/hazelcast/hazelcast-controller.yaml
```

After the controller successfully provisions the pod, you can query the service endpoints:

```sh
$ kubectl get endpoints hazelcast -o json
{
@ -184,6 +187,7 @@ You can see that the _Service_ has found the pod created by the replication cont
Now it gets even more interesting.

Let's scale our cluster to 2 pods:

```sh
$ kubectl scale rc hazelcast --replicas=2
```
@ -229,8 +233,11 @@ Members [2] {
2015-07-10 13:26:47.723 INFO 5 --- [ main] com.github.pires.hazelcast.Application : Started Application in 13.792 seconds (JVM running for 14.542)
```

Now let's scale our cluster to 4 nodes:

```sh

$ kubectl scale rc hazelcast --replicas=4

```

Examine the status again by checking the logs and you should see the 4 members connected.
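
A hedged sketch of that check (pick any one of the hazelcast pods; the pod name is a placeholder):

```sh
# Hypothetical: list the hazelcast pods, then read one pod's log and look
# for the members list growing to four entries.
$ kubectl get pods -l name=hazelcast
$ kubectl logs <hazelcast-pod-name>
```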
@ -239,6 +246,7 @@ Examine the status again by checking the logs and you should see the 4 members c
For those of you who are impatient, here is the summary of the commands we ran in this tutorial.

```sh

# create a service to track all hazelcast nodes
kubectl create -f examples/hazelcast/hazelcast-service.yaml
@ -250,6 +258,7 @@ kubectl scale rc hazelcast --replicas=2

# scale up to 4 nodes
kubectl scale rc hazelcast --replicas=4

```

### Hazelcast Discovery Source
@ -85,6 +85,7 @@ On the Kubernetes node, I got these in mount output
```

If you ssh to that machine, you can run `docker ps` to see the actual pod.

```console
# docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS               NAMES
@ -93,6 +94,7 @@ cc051196e7af kubernetes/pause:latest "/pause
```

Run *docker inspect* and I found the containers mounted the host directory into their */mnt/iscsipd* directory.

```console
# docker inspect --format '{{index .Volumes "/mnt/iscsipd"}}' cc051196e7af
/var/lib/kubelet/pods/75e0af2b-f8e8-11e4-9ae7-42010af01964/volumes/kubernetes.io~iscsi/iscsipd-rw
|||
|
|
@ -62,6 +62,7 @@ gcloud config set project <project-name>
```

Next, start up a Kubernetes cluster:

```shell
wget -q -O - https://get.k8s.io | bash
```
@ -81,6 +82,7 @@ files to your existing Meteor project `Dockerfile` and

`Dockerfile` should contain the below lines. You should replace the
`ROOT_URL` with the actual hostname of your app.

```
FROM chees/meteor-kubernetes
ENV ROOT_URL http://myawesomeapp.com
@ -89,6 +91,7 @@ ENV ROOT_URL http://myawesomeapp.com
The `.dockerignore` file should contain the below lines. This tells
Docker to ignore the files in those directories when it's building
your container.

```
.meteor/local
packages/*/.build*
@ -103,6 +106,7 @@ free to use this app for this example.

Now you can build your container by running this in
your Meteor project directory:

```
docker build -t my-meteor .
```
@ -113,6 +117,7 @@ Pushing to a registry
For the [Docker Hub](https://hub.docker.com/), tag your app image with
your username and push to the Hub with the below commands. Replace
`<username>` with your Hub username.

```
docker tag my-meteor <username>/my-meteor
docker push <username>/my-meteor
@ -122,6 +127,7 @@ For [Google Container
Registry](https://cloud.google.com/tools/container-registry/), tag
your app image with your project ID, and push to GCR. Replace
`<project>` with your project ID.

```
docker tag my-meteor gcr.io/<project>/my-meteor
gcloud docker push gcr.io/<project>/my-meteor
@ -139,17 +145,20 @@ We will need to provide MongoDB a persistent Kuberetes volume to
store its data. See the [volumes documentation](../../docs/user-guide/volumes.md) for
options. We're going to use Google Compute Engine persistent
disks. Create the MongoDB disk by running:

```
gcloud compute disks create --size=200GB mongo-disk
```

Now you can start Mongo using that disk:

```
kubectl create -f examples/meteor/mongo-pod.json
kubectl create -f examples/meteor/mongo-service.json
```
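
One hedged way to tell when Mongo has come up before moving on (the pod name `mongo` is an assumption based on mongo-pod.json):

```
# Hypothetical readiness check; the pod name "mongo" is an assumption.
kubectl get pods
kubectl logs mongo
```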

Wait until Mongo is started completely and then start up your Meteor app:

```
kubectl create -f examples/meteor/meteor-service.json
kubectl create -f examples/meteor/meteor-controller.json
@ -161,12 +170,14 @@ the Meteor pods are started. We also created the service before creating the rc
aid the scheduler in placing pods, as the scheduler ranks pod placement according to
service anti-affinity (among other things). You can find the IP of your load balancer
by running:

```
kubectl get service meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
```

You will have to open up port 80 if it's not open yet in your
environment. On Google Compute Engine, you may run the below command.

```
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
```
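
After that, a quick hedged smoke test against the load balancer address you found above (substitute the IP printed by the earlier `kubectl get service` command):

```
# Hypothetical smoke test; <load-balancer-ip> is a placeholder.
curl -I http://<load-balancer-ip>
```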
@ -181,6 +192,7 @@ to get an insight of what happens during the `docker build` step. The
image is based on the Node.js official image. It then installs Meteor
and copies in your app's code. The last line specifies what happens
when your app container is run.

```
ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
```
@ -203,6 +215,7 @@ more information.
As mentioned above, the mongo container uses a volume which is mapped
to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
section specifies the volume:

```
"volumeMounts": [
  {
@ -213,6 +226,7 @@ section specifies the volume:

The name `mongo-disk` refers to the volume specified outside the
container section:

```
"volumes": [
  {
@ -58,6 +58,7 @@ gcloud config set project <project-name>
```

Next, start up a Kubernetes cluster:

```shell
wget -q -O - https://get.k8s.io | bash
```
@ -280,11 +281,13 @@ $ kubectl get services
```

Then, find the external IP for your WordPress service by running:

```
$ kubectl get services/wpfrontend --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
```

or by listing the forwarding rules for your project:

```shell
$ gcloud compute forwarding-rules list
```
@ -49,6 +49,7 @@ Here is the config for the initial master and sentinel pod: [redis-master.yaml](

Create this master as follows:

```sh
kubectl create -f examples/redis/redis-master.yaml
```
@ -61,6 +62,7 @@ In Redis, we will use a Kubernetes Service to provide a discoverable endpoints f
Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml)

Create this service:

```sh
kubectl create -f examples/redis/redis-sentinel-service.yaml
```
@ -83,6 +85,7 @@ kubectl create -f examples/redis/redis-controller.yaml
We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)

We create it as follows:

```sh
kubectl create -f examples/redis/redis-sentinel-controller.yaml
```
@ -106,6 +109,7 @@ Unlike our original redis-master pod, these pods exist independently, and they u
The final step in the cluster turn-up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.

Delete the master as follows:

```sh
kubectl delete pods redis-master
```
@ -133,6 +133,7 @@ type: LoadBalancer
The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case.

Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:

```
$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
```
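
With the rule in place, a hedged check that the web admin UI answers on the external IP shown above:

```
# Hypothetical check of the RethinkDB web admin UI via the external IP.
$ curl -I 104.197.19.120:8080
```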
@ -154,7 +155,7 @@ since the ui is not stateless when playing with Web Admin UI will cause `Connect
* `gen_pod.sh` is used to generate pod templates for my local cluster;
the generated pods use `nodeSelector` to force k8s to schedule containers to my designated nodes, since I need to access persistent data on my host dirs. Note that one needs to label the node before `nodeSelector` can work, see this [tutorial](../../docs/user-guide/node-selection/) and the sketch after this list.

-* see [/antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for detail
+* see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for detail
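
A hedged sketch of that labeling step (the node name and the label key/value are placeholders):

```
# Hypothetical: label a node so that a pod's nodeSelector can match it.
$ kubectl label nodes <node-name> storage=ssd
```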

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
@ -47,16 +47,19 @@ kubectl run my-nginx --image=nginx --replicas=2 --port=80
```

Once the pods are created, you can list them to see what is up and running:

```bash
kubectl get pods
```

You can also see the replication controller that was created:

```bash
kubectl get rc
```

To stop the two replicated containers, stop the replication controller:

```bash
kubectl stop rc my-nginx
```
@ -142,6 +142,7 @@ $ kubectl logs spark-master
15/06/26 14:15:55 INFO Master: Registering worker 10.244.1.15:44839 with 1 cores, 2.6 GB RAM
15/06/26 14:15:55 INFO Master: Registering worker 10.244.0.19:60970 with 1 cores, 2.6 GB RAM
```

## Step Three: Do something with the cluster

Get the address and port of the Master service.
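
A hedged way to look those up (the service name `spark-master` is assumed from this example):

```
# Hypothetical lookup of the Spark master service's address and port.
$ kubectl get service spark-master
```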
@ -196,6 +197,7 @@ SparkContext available as sc, HiveContext available as sqlContext.
>>> sc.parallelize(range(1000)).map(lambda x:socket.gethostname()).distinct().collect()
['spark-worker-controller-u40r2', 'spark-worker-controller-hifwi', 'spark-worker-controller-vpgyg']
```

## Result

You now have services, replication controllers, and pods for the Spark master and Spark workers.