Merge pull request #8596 from andronat/fix_8319

Kubectl command renaming (run-container to run and resize to scale)
Tim Hockin 2015-05-27 15:37:54 -07:00
commit a40837a542
8 changed files with 108 additions and 108 deletions

View File

@ -76,15 +76,15 @@ Here is the service description:
```yaml
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  ports:
    - port: 9042
      targetPort: 9042
  selector:
    name: cassandra
```
@ -125,7 +125,7 @@ subsets:
You can see that the _Service_ has found the pod we created in step one.
### Adding replicated nodes
Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, resizable Cassandra cluster.
Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, scalable Cassandra cluster.
In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches its desired state.
@ -134,26 +134,26 @@ Replication Controllers will "adopt" existing pods that match their selector que
```yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
        - command:
            - /run.sh
          resources:
            limits:
              cpu: 1
          env:
            - name: MAX_HEAP_SIZE
              key: MAX_HEAP_SIZE
              value: 512M
@ -162,15 +162,15 @@ spec:
              value: 100M
          image: "kubernetes/cassandra:v2"
          name: cassandra
          ports:
            - containerPort: 9042
              name: cql
            - containerPort: 9160
              name: thrift
          volumeMounts:
            - mountPath: /cassandra_data
              name: data
      volumes:
        - name: data
          emptyDir: {}
```
@ -185,9 +185,9 @@ $ kubectl create -f cassandra-controller.yaml
So far this is actually not that interesting, since we haven't done anything new. Now it will get interesting.
Let's resize our cluster to 2:
Let's scale our cluster to 2:
```sh
$ kubectl resize rc cassandra --replicas=2
$ kubectl scale rc cassandra --replicas=2
```
Now if you list the pods in your cluster, you should see two cassandra pods:
@ -195,10 +195,10 @@ Now if you list the pods in your cluster, you should see two cassandra pods:
```sh
$ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
cassandra 10.244.3.3 kubernetes-minion-sft2/104.197.42.181 name=cassandra Running 7 minutes
cassandra kubernetes/cassandra:v2 Running 7 minutes
cassandra-gnhk8 10.244.0.5 kubernetes-minion-dqz3/104.197.2.71 name=cassandra Running About a minute
cassandra kubernetes/cassandra:v2 Running 51 seconds
```
@ -218,9 +218,9 @@ UN 10.244.0.5 74.09 KB 256 100.0% 86feda0f-f070-4a5b-bda1-2ee
UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e7e80dce2b rack1
```
Now let's resize our cluster to 4 nodes:
Now let's scale our cluster to 4 nodes:
```sh
$ kubectl resize rc cassandra --replicas=4
$ kubectl scale rc cassandra --replicas=4
```
Examining the status again:
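The status check itself is elided from this hunk, but as in the earlier step it can be run with `nodetool` inside one of the cassandra containers (a sketch; substitute the container ID reported by `docker ps` on the node):
```sh
# Ask nodetool for the ring status from inside a cassandra container;
# all 4 members should eventually show up as UN (Up/Normal).
$ docker exec <container-id> nodetool status
```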
@ -251,13 +251,13 @@ kubectl create -f cassandra-service.yaml
kubectl create -f cassandra-controller.yaml
# scale up to 2 nodes
kubectl resize rc cassandra --replicas=2
kubectl scale rc cassandra --replicas=2
# validate the cluster
docker exec <container-id> nodetool status
# scale up to 4 nodes
kubectl resize rc cassandra --replicas=4
kubectl scale rc cassandra --replicas=4
```
### Seed Provider Source

View File

@ -143,19 +143,19 @@ Let's see what we've got:
$ kubectl get pods,rc,services,secrets --namespace=mytunes
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
music-db-0fwsu 10.244.2.48 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 6 minutes
es kubernetes/elasticsearch:1.0 Running 29 seconds
music-db-5pc2e 10.244.0.24 kubernetes-minion-3c8c/146.148.41.184 name=music-db Running 6 minutes
es kubernetes/elasticsearch:1.0 Running 6 minutes
music-db-bjqmv 10.244.3.31 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 6 minutes
es kubernetes/elasticsearch:1.0 Running 19 seconds
music-db-swtrs 10.244.1.37 kubernetes-minion-f9dw/130.211.159.230 name=music-db Running 6 minutes
es kubernetes/elasticsearch:1.0 Running 6 minutes
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
music-db es kubernetes/elasticsearch:1.0 name=music-db 4
NAME LABELS SELECTOR IP(S) PORT(S)
music-server name=music-db name=music-db 10.0.138.61 9200/TCP
104.197.12.157
NAME TYPE DATA
apiserver-secret Opaque 2
```
@ -235,30 +235,30 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true
```
Let's ramp up the number of Elasticsearch nodes from 4 to 10:
```
$ kubectl resize --replicas=10 replicationcontrollers music-db --namespace=mytunes
resized
$ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytunes
scaled
$ kubectl get pods --namespace=mytunes
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
music-db-0fwsu 10.244.2.48 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 33 minutes
es kubernetes/elasticsearch:1.0 Running 26 minutes
music-db-2erje 10.244.2.50 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 48 seconds
es kubernetes/elasticsearch:1.0 Running 46 seconds
music-db-5pc2e 10.244.0.24 kubernetes-minion-3c8c/146.148.41.184 name=music-db Running 33 minutes
es kubernetes/elasticsearch:1.0 Running 32 minutes
music-db-8rkvp 10.244.3.33 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 48 seconds
es kubernetes/elasticsearch:1.0 Running 46 seconds
music-db-bjqmv 10.244.3.31 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 33 minutes
es kubernetes/elasticsearch:1.0 Running 26 minutes
music-db-efc46 10.244.2.49 kubernetes-minion-m49b/104.197.35.221 name=music-db Running 48 seconds
es kubernetes/elasticsearch:1.0 Running 46 seconds
music-db-fhqyg 10.244.0.25 kubernetes-minion-3c8c/146.148.41.184 name=music-db Running 48 seconds
es kubernetes/elasticsearch:1.0 Running 47 seconds
music-db-guxe4 10.244.3.32 kubernetes-minion-zey5/104.154.59.10 name=music-db Running 48 seconds
es kubernetes/elasticsearch:1.0 Running 46 seconds
music-db-pbiq1 10.244.1.38 kubernetes-minion-f9dw/130.211.159.230 name=music-db Running 48 seconds
es kubernetes/elasticsearch:1.0 Running 47 seconds
music-db-swtrs 10.244.1.37 kubernetes-minion-f9dw/130.211.159.230 name=music-db Running 33 minutes
es kubernetes/elasticsearch:1.0 Running 32 minutes
```
Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:
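The verification itself is elided here; one way to check (a sketch, reusing the external IP shown above) is to ask the cluster health endpoint and look at `number_of_nodes`, which should report 10 once every replica has joined:
```
# Query cluster health through the service's external IP; number_of_nodes should be 10.
$ curl 104.197.12.157:9200/_cluster/health?pretty=true
```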

View File

@ -52,7 +52,7 @@ $ kubectl create -f hazelcast-service.yaml
```
### Adding replicated nodes
The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.
The real power of Kubernetes and Hazelcast lies in easily building a replicated, scalable Hazelcast cluster.
In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches its desired state.
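The replication controller spec itself falls outside this hunk; as with the service, it is created from its YAML file (the same command appears in the tl;dr section below):
```sh
# Create the Hazelcast replication controller from its spec file.
$ kubectl create -f hazelcast-controller.yaml
```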
@ -129,9 +129,9 @@ You can see that the _Service_ has found the pod created by the replication cont
Now it gets even more interesting.
Let's resize our cluster to 2 pods:
Let's scale our cluster to 2 pods:
```sh
$ kubectl resize rc hazelcast --replicas=2
$ kubectl scale rc hazelcast --replicas=2
```
Now if you list the pods in your cluster, you should see two hazelcast pods:
@ -141,7 +141,7 @@ $ kubectl get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
hazelcast-pkyzd 10.244.90.3 e2e-test-minion-vj7k/104.197.8.214 name=hazelcast Running 14 seconds
hazelcast pires/hazelcast-k8s:0.2 Running 2 seconds
hazelcast-ulkws 10.244.66.2 e2e-test-minion-2x1f/146.148.62.37 name=hazelcast Running 7 seconds
hazelcast pires/hazelcast-k8s:0.2 Running 6 seconds
```
@ -175,9 +175,9 @@ Members [2] {
2015-05-09 22:06:31.177 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTED
```
Now let's resize our cluster to 4 nodes:
Now let's scale our cluster to 4 nodes:
```sh
$ kubectl resize rc hazelcast --replicas=4
$ kubectl scale rc hazelcast --replicas=4
```
Examine the status again by checking a node's log, and you should see the 4 members connected.
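A sketch of one way to do that, using the pod name reported by `kubectl get pods` (older clients spell this subcommand `kubectl log`):
```sh
# Tail the log of one hazelcast pod; the printed Members list should contain all 4 nodes.
$ kubectl logs <hazelcast-pod-name>
```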
@ -193,10 +193,10 @@ kubectl create -f hazelcast-service.yaml
kubectl create -f hazelcast-controller.yaml
# scale up to 2 nodes
kubectl resize rc hazelcast --replicas=2
kubectl scale rc hazelcast --replicas=2
# scale up to 4 nodes
kubectl resize rc hazelcast --replicas=4
kubectl scale rc hazelcast --replicas=4
```
### Hazelcast Discovery Source

View File

@ -184,7 +184,7 @@ At this point, all requests we make to the Kubernetes cluster from the command l
Let's create some content.
```shell
$ cluster/kubectl.sh run-container snowflake --image=kubernetes/serve_hostname --replicas=2
$ cluster/kubectl.sh run snowflake --image=kubernetes/serve_hostname --replicas=2
```
We have just created a replication controller with a replica count of 2 that runs a pod called snowflake, with a basic container that simply serves the hostname.
@ -192,14 +192,14 @@ We have just created a replication controller whose replica size is 2 that is ru
```shell
cluster/kubectl.sh get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
snowflake snowflake kubernetes/serve_hostname run-container=snowflake 2
snowflake snowflake kubernetes/serve_hostname run=snowflake 2
$ cluster/kubectl.sh get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
snowflake-mbrfi 10.244.2.4 kubernetes-minion-ilqx/104.197.8.214 run-container=snowflake Running About an hour
snowflake kubernetes/serve_hostname Running About an hour
snowflake-p78ev 10.244.2.5 kubernetes-minion-ilqx/104.197.8.214 run-container=snowflake Running About an hour
snowflake kubernetes/serve_hostname Running About an hour
snowflake-mbrfi 10.244.2.4 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour
snowflake kubernetes/serve_hostname Running About an hour
snowflake-p78ev 10.244.2.5 kubernetes-minion-ilqx/104.197.8.214 run=snowflake Running About an hour
snowflake kubernetes/serve_hostname Running About an hour
```
And this is great: developers are able to do what they want, and they do not have to worry about affecting content in the production namespace.
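Switching back to the production context is elided from this hunk; the exact context name depends on how the contexts were defined earlier in the walkthrough, so the name below is only illustrative:
```shell
# Hypothetical context name -- substitute whatever `kubectl config view` lists
# for the production namespace.
$ cluster/kubectl.sh config use-context prod
```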
@ -223,23 +223,23 @@ POD IP CONTAINER(S) IMAGE(S)
Production likes to run cattle, so let's create some cattle pods.
```shell
$ cluster/kubectl.sh run-container cattle --image=kubernetes/serve_hostname --replicas=5
$ cluster/kubectl.sh run cattle --image=kubernetes/serve_hostname --replicas=5
$ cluster/kubectl.sh get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
cattle cattle kubernetes/serve_hostname run-container=cattle 5
cattle cattle kubernetes/serve_hostname run=cattle 5
$ cluster/kubectl.sh get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
cattle-1kyvj 10.244.0.4 kubernetes-minion-7s1y/23.236.54.97 run-container=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-kobrk 10.244.1.4 kubernetes-minion-cfs6/104.154.61.231 run-container=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-l1v9t 10.244.0.5 kubernetes-minion-7s1y/23.236.54.97 run-container=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-ne2sj 10.244.3.7 kubernetes-minion-x8gx/104.154.47.83 run-container=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-qrk4x 10.244.0.6 kubernetes-minion-7s1y/23.236.54.97 run-container=cattle Running About an hour
cattle-1kyvj 10.244.0.4 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-kobrk 10.244.1.4 kubernetes-minion-cfs6/104.154.61.231 run=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-l1v9t 10.244.0.5 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-ne2sj 10.244.3.7 kubernetes-minion-x8gx/104.154.47.83 run=cattle Running About an hour
cattle kubernetes/serve_hostname Running About an hour
cattle-qrk4x 10.244.0.6 kubernetes-minion-7s1y/23.236.54.97 run=cattle Running About an hour
cattle kubernetes/serve_hostname
```

View File

@ -56,15 +56,15 @@ We create it as follows:
kubectl create -f examples/redis/redis-sentinel-controller.yaml
```
### Resize our replicated pods
### Scale our replicated pods
Initially, creating those pods didn't actually do anything: we only asked for one sentinel and one redis server, and since they already existed, nothing changed. Now we will add more replicas:
```sh
kubectl resize rc redis --replicas=3
kubectl scale rc redis --replicas=3
```
```sh
kubectl resize rc redis-sentinel --replicas=3
kubectl scale rc redis-sentinel --replicas=3
```
This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.
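You can confirm the new replicas with a plain pod listing (a sketch; the generated pod names will differ):
```sh
# List all pods; three redis server pods and three redis sentinel pods should appear.
kubectl get pods
```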
@ -86,7 +86,7 @@ Now let's take a close look at what happens after this pod is deleted. There ar
3. The redis sentinels themselves realize that the master has disappeared from the cluster and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.
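A sketch of one way to see which replica was promoted, using `docker exec` as in the other examples; port 26379 is the Redis Sentinel default, and the master name here is only an assumed placeholder for whatever the example's sentinel configuration uses:
```sh
# Ask a sentinel container for the address of the current master.
# <sentinel-container-id> and mymaster are placeholders for this sketch.
docker exec <sentinel-container-id> redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```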
### Conclusion
At this point we now have a reliable, scalable Redis installation. By resizing the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
At this point we now have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
### tl; dr
For those of you who are impatient, here is the summary of commands we ran in this tutorial
@ -104,9 +104,9 @@ kubectl create -f examples/redis/redis-controller.yaml
# Create a replication controller for redis sentinels
kubectl create -f examples/redis/redis-sentinel-controller.yaml
# Resize both replication controllers
kubectl resize rc redis --replicas=3
kubectl resize rc redis-sentinel --replicas=3
# Scale both replication controllers
kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3
# Delete the original master pod
kubectl delete pods redis-master

View File

@ -49,8 +49,8 @@ check out again:
```shell
$kubectl get po
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
rethinkdb-rc-1.16.0-6odi0 kubernetes-minion-s59e/ db=rethinkdb,role=replicas Pending 11 seconds
rethinkdb antmanler/rethinkdb:1.16.0
```
**Done!**
@ -61,20 +61,20 @@ rethinkdb-rc-1.16.0-6odi0
Scale
-----
You can scale up your cluster using `kubectl resize`; new pods will join the existing cluster automatically. For example:
You can scale up your cluster using `kubectl scale`; new pods will join the existing cluster automatically. For example:
```shell
$kubectl resize rc rethinkdb-rc-1.16.0 --replicas=3
resized
$kubectl scale rc rethinkdb-rc-1.16.0 --replicas=3
scaled
$kubectl get po
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS CREATED MESSAGE
rethinkdb-rc-1.16.0-6odi0 10.244.3.3 kubernetes-minion-s59e/104.197.79.42 db=rethinkdb,role=replicas Running About a minute
rethinkdb antmanler/rethinkdb:1.16.0 Running About a minute
rethinkdb-rc-1.16.0-e3mxv kubernetes-minion-d7ub/ db=rethinkdb,role=replicas Pending 6 seconds
rethinkdb antmanler/rethinkdb:1.16.0
rethinkdb-rc-1.16.0-manu6 kubernetes-minion-cybz/ db=rethinkdb,role=replicas Pending 6 seconds
rethinkdb antmanler/rethinkdb:1.16.0
```
Admin
@ -93,7 +93,7 @@ find the service
$kubectl get se
NAME LABELS SELECTOR IP(S) PORT(S)
rethinkdb-admin db=influxdb db=rethinkdb,role=admin 10.0.131.19 8080/TCP
104.197.19.120
rethinkdb-driver db=influxdb db=rethinkdb 10.0.27.114 28015/TCP
```

View File

@ -12,7 +12,7 @@ The `kubectl` line below spins up two containers running
[Nginx](http://nginx.org/en/), serving on port 80:
```bash
kubectl run-container my-nginx --image=nginx --replicas=2 --port=80
kubectl run my-nginx --image=nginx --replicas=2 --port=80
```
Once the pods are created, you can list them to see what is up and running:
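The listing itself falls outside this hunk; a minimal sketch of the command it refers to:
```bash
# List the pods created by the run command above; two my-nginx pods should appear.
kubectl get pods
```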

View File

@ -47,12 +47,12 @@ $ ./cluster/kubectl.sh create -f examples/update-demo/nautilus-rc.yaml
After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.
### Step Three: Try resizing the controller
### Step Three: Try scaling the controller
Now we will increase the number of replicas from two to four:
```bash
$ ./cluster/kubectl.sh resize rc update-demo-nautilus --replicas=4
$ ./cluster/kubectl.sh scale rc update-demo-nautilus --replicas=4
```
If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod.
@ -66,7 +66,7 @@ $ ./cluster/kubectl.sh rolling-update update-demo-nautilus --update-period=10s -
The rolling-update command in kubectl will do 2 things:
1. Create a new replication controller with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
2. Resize the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them.
2. Scale the old and new replication controllers until the new controller replaces the old. This will kill the current pods one at a time, spinning up new ones to replace them.
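One way to watch this from the command line alongside the demo website (a sketch) is to keep listing the replication controllers while the update runs; the replica counts shift from the old controller to the new one:
```bash
# Repeatedly list replication controllers during the rolling update to watch
# the replica counts move from the nautilus controller to the new one.
$ ./cluster/kubectl.sh get rc
```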
Watch the [demo website](http://localhost:8001/static/index.html); it will update one pod every 10 seconds until all of the pods have the new image.