rename resize to scale
parent 866b0d173f
commit a138cc3bf4
@@ -125,7 +125,7 @@ subsets:
You can see that the _Service_ has found the pod we created in step one.

### Adding replicated nodes
-Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, resizable Cassandra cluster.
+Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, scalable Cassandra cluster.

In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
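If you want to see the desired-versus-actual state described above, you can inspect the replication controller directly. A minimal sketch, assuming the controller is named `cassandra` as in the commands later in this example:

```sh
# Show the controller's selector, desired replica count, and current replica count
$ kubectl get rc cassandra

# More detail, including recent events as pods are created or deleted
$ kubectl describe rc cassandra
```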
@@ -185,9 +185,9 @@ $ kubectl create -f cassandra-controller.yaml

Now this is actually not that interesting, since we haven't actually done anything new. Now it will get interesting.

-Let's resize our cluster to 2:
+Let's scale our cluster to 2:
```sh
-$ kubectl resize rc cassandra --replicas=2
+$ kubectl scale rc cassandra --replicas=2
```

Now if you list the pods in your cluster, you should see two cassandra pods:
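One way to list just those pods is to filter on the controller's label selector. A short sketch — the `name=cassandra` label is an assumption based on the controller name:

```sh
# List only the pods managed by the cassandra replication controller
$ kubectl get pods -l name=cassandra
```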
@@ -218,9 +218,9 @@ UN 10.244.0.5 74.09 KB 256 100.0% 86feda0f-f070-4a5b-bda1-2ee
UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e7e80dce2b rack1
```

-Now let's resize our cluster to 4 nodes:
+Now let's scale our cluster to 4 nodes:
```sh
-$ kubectl resize rc cassandra --replicas=4
+$ kubectl scale rc cassandra --replicas=4
```

Examining the status again:
@@ -251,13 +251,13 @@ kubectl create -f cassandra-service.yaml
kubectl create -f cassandra-controller.yaml

# scale up to 2 nodes
-kubectl resize rc cassandra --replicas=2
+kubectl scale rc cassandra --replicas=2

# validate the cluster
docker exec <container-id> nodetool status

# scale up to 4 nodes
-kubectl resize rc cassandra --replicas=4
+kubectl scale rc cassandra --replicas=4
```

### Seed Provider Source
@@ -235,8 +235,8 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true
```
Let's ramp up the number of Elasticsearch nodes from 4 to 10:
```
-$ kubectl resize --replicas=10 replicationcontrollers music-db --namespace=mytunes
-resized
+$ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytunes
+scaled
$ kubectl get pods --namespace=mytunes
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
music-db-0fwsu 10.244.2.48 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 33 minutes
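To confirm that Elasticsearch itself sees all ten members once the new pods are running, you can ask the cluster health endpoint and check `number_of_nodes`. A sketch, reusing the external IP from the `curl` call earlier in this example:

```sh
# number_of_nodes should eventually report 10
$ curl 104.197.12.157:9200/_cluster/health?pretty=true
```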
@@ -52,7 +52,7 @@ $ kubectl create -f hazelcast-service.yaml
```

### Adding replicated nodes
-The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.
+The real power of Kubernetes and Hazelcast lies in easily building a replicated, scalable Hazelcast cluster.

In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
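A quick way to see that reconciliation in action is to delete one of the pods and watch the controller replace it. A sketch — the pod name is a placeholder for whatever `kubectl get pods` shows on your cluster:

```sh
# Delete one hazelcast pod; the replication controller notices the shortfall
# and creates a replacement to return to the desired replica count
$ kubectl delete pod <hazelcast-pod-name>
$ kubectl get pods
```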
@@ -129,9 +129,9 @@ You can see that the _Service_ has found the pod created by the replication cont

Now it gets even more interesting.

-Let's resize our cluster to 2 pods:
+Let's scale our cluster to 2 pods:
```sh
-$ kubectl resize rc hazelcast --replicas=2
+$ kubectl scale rc hazelcast --replicas=2
```

Now if you list the pods in your cluster, you should see two hazelcast pods:
@@ -175,9 +175,9 @@ Members [2] {
2015-05-09 22:06:31.177 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTED
```

-Now let's resize our cluster to 4 nodes:
+Now let's scale our cluster to 4 nodes:
```sh
-$ kubectl resize rc hazelcast --replicas=4
+$ kubectl scale rc hazelcast --replicas=4
```

Examine the status again by checking a node’s log and you should see the 4 members connected.
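To pull that log, fetch it from one of the hazelcast pods. A minimal sketch — substitute a pod name from `kubectl get pods`; you should see a `Members [4]` list mirroring the `Members [2]` block above:

```sh
# Print the pod's log and look for the Members [4] membership block
$ kubectl logs <hazelcast-pod-name>
```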
@@ -193,10 +193,10 @@ kubectl create -f hazelcast-service.yaml
kubectl create -f hazelcast-controller.yaml

# scale up to 2 nodes
-kubectl resize rc hazelcast --replicas=2
+kubectl scale rc hazelcast --replicas=2

# scale up to 4 nodes
-kubectl resize rc hazelcast --replicas=4
+kubectl scale rc hazelcast --replicas=4
```

### Hazelcast Discovery Source
@@ -56,15 +56,15 @@ We create it as follows:
kubectl create -f examples/redis/redis-sentinel-controller.yaml
```

-### Resize our replicated pods
+### Scale our replicated pods
Initially creating those pods didn't actually do anything, since we only asked for one sentinel and one redis server, and they already existed; nothing changed. Now we will add more replicas:

```sh
-kubectl resize rc redis --replicas=3
+kubectl scale rc redis --replicas=3
```

```sh
-kubectl resize rc redis-sentinel --replicas=3
+kubectl scale rc redis-sentinel --replicas=3
```

This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.
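A quick way to verify is to ask for both replication controllers by name and check their replica counts — a minimal sketch, assuming the controller names used in the commands above:

```sh
# Both controllers should now show a desired replica count of 3
$ kubectl get rc redis redis-sentinel
```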
@@ -86,7 +86,7 @@ Now let's take a close look at what happens after this pod is deleted. There ar
3. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.
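If you want to see which replica the sentinels promoted, you can ask any sentinel directly. A sketch — port `26379` is the Redis Sentinel default, and the `mymaster` master name is an assumption about this example's sentinel configuration:

```sh
# Ask a sentinel (via its container) which server it currently considers master
$ docker exec <sentinel-container-id> redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```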
### Conclusion
-At this point we now have a reliable, scalable Redis installation. By resizing the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
+At this point we now have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.

### tl; dr
For those of you who are impatient, here is the summary of commands we ran in this tutorial:
@@ -104,9 +104,9 @@ kubectl create -f examples/redis/redis-controller.yaml
# Create a replication controller for redis sentinels
kubectl create -f examples/redis/redis-sentinel-controller.yaml

-# Resize both replication controllers
-kubectl resize rc redis --replicas=3
-kubectl resize rc redis-sentinel --replicas=3
+# Scale both replication controllers
+kubectl scale rc redis --replicas=3
+kubectl scale rc redis-sentinel --replicas=3

# Delete the original master pod
kubectl delete pods redis-master
@@ -61,12 +61,12 @@ rethinkdb-rc-1.16.0-6odi0
Scale
-----

-You can scale up you cluster using `kubectl resize`, and new pod will join to exsits cluster automatically, for example
+You can scale up your cluster using `kubectl scale`, and new pods will join the existing cluster automatically, for example:


```shell
-$kubectl resize rc rethinkdb-rc-1.16.0 --replicas=3
-resized
+$kubectl scale rc rethinkdb-rc-1.16.0 --replicas=3
+scaled
$kubectl get po
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
rethinkdb-rc-1.16.0-6odi0 10.244.3.3 kubernetes-minion-s59e/104.197.79.42 db=rethinkdb,role=replicas Running About a minute
@@ -47,12 +47,12 @@ $ ./cluster/kubectl.sh create -f examples/update-demo/nautilus-rc.yaml

After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.

-### Step Three: Try resizing the controller
+### Step Three: Try scaling the controller

Now we will increase the number of replicas from two to four:

```bash
-$ ./cluster/kubectl.sh resize rc update-demo-nautilus --replicas=4
+$ ./cluster/kubectl.sh scale rc update-demo-nautilus --replicas=4
```

If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod.
@@ -66,7 +66,7 @@ $ ./cluster/kubectl.sh rolling-update update-demo-nautilus --update-period=10s -
The rolling-update command in kubectl will do 2 things:

1. Create a new replication controller with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
-2. Resize the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinnning up new ones to replace them.
+2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them.
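If you want to watch the hand-off described in steps 1 and 2, you can list both replication controllers while the update runs. A sketch — the `update-demo-kitten` controller name is an assumption based on the new image tag:

```sh
# The desired replica count shifts from the old controller to the new one,
# one pod at a time, until the nautilus controller is drained and removed
$ ./cluster/kubectl.sh get rc update-demo-nautilus update-demo-kitten
```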
Watch the [demo website](http://localhost:8001/static/index.html); it will update one pod every 10 seconds until all of the pods have the new image.