rename resize to scale
Commit a138cc3bf4 (parent 866b0d173f)

@@ -76,15 +76,15 @@ Here is the service description:

```yaml
apiVersion: v1beta3
kind: Service
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  ports:
    - port: 9042
      targetPort: 9042
  selector:
    name: cassandra
```

@@ -125,7 +125,7 @@ subsets:

You can see that the _Service_ has found the pod we created in step one.

### Adding replicated nodes

Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, scalable Cassandra cluster.

In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query that identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches its desired state.

@@ -134,26 +134,26 @@ Replication Controllers will "adopt" existing pods that match their selector query

```yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  labels:
    name: cassandra
  name: cassandra
spec:
  replicas: 1
  selector:
    name: cassandra
  template:
    metadata:
      labels:
        name: cassandra
    spec:
      containers:
      - command:
        - /run.sh
        resources:
          limits:
            cpu: 1
        env:
        - name: MAX_HEAP_SIZE
          key: MAX_HEAP_SIZE
          value: 512M
```

@@ -162,15 +162,15 @@ spec:

```yaml
          value: 100M
        image: "kubernetes/cassandra:v2"
        name: cassandra
        ports:
        - containerPort: 9042
          name: cql
        - containerPort: 9160
          name: thrift
        volumeMounts:
        - mountPath: /cassandra_data
          name: data
      volumes:
      - name: data
        emptyDir: {}
```

@@ -185,9 +185,9 @@ $ kubectl create -f cassandra-controller.yaml

So far this isn't that interesting, since we haven't actually done anything new. Now it will get interesting.

Let's scale our cluster to 2:

```sh
$ kubectl scale rc cassandra --replicas=2
```

Now if you list the pods in your cluster, you should see two cassandra pods:

@@ -195,10 +195,10 @@ Now if you list the pods in your cluster, you should see two cassandra pods:

```sh
$ kubectl get pods
POD               IP           CONTAINER(S)   IMAGE(S)                  HOST                                    LABELS           STATUS    CREATED          MESSAGE
cassandra         10.244.3.3                                            kubernetes-minion-sft2/104.197.42.181   name=cassandra   Running   7 minutes
                               cassandra      kubernetes/cassandra:v2                                                            Running   7 minutes
cassandra-gnhk8   10.244.0.5                                            kubernetes-minion-dqz3/104.197.2.71     name=cassandra   Running   About a minute
                               cassandra      kubernetes/cassandra:v2                                                            Running   51 seconds
```

@@ -218,9 +218,9 @@ UN 10.244.0.5 74.09 KB 256 100.0% 86feda0f-f070-4a5b-bda1-2ee

```
UN  10.244.3.3  51.28 KB  256  100.0%  dafe3154-1d67-42e1-ac1d-78e7e80dce2b  rack1
```

Now let's scale our cluster to 4 nodes:

```sh
$ kubectl scale rc cassandra --replicas=4
```

Examining the status again:
@ -251,13 +251,13 @@ kubectl create -f cassandra-service.yaml
|
|||
kubectl create -f cassandra-controller.yaml
|
||||
|
||||
# scale up to 2 nodes
|
||||
kubectl resize rc cassandra --replicas=2
|
||||
kubectl scale rc cassandra --replicas=2
|
||||
|
||||
# validate the cluster
|
||||
docker exec <container-id> nodetool status
|
||||
|
||||
# scale up to 4 nodes
|
||||
kubectl resize rc cassandra --replicas=4
|
||||
kubectl scale rc cassandra --replicas=4
|
||||
```
|
||||
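
For the `docker exec` step, one way to find a value for the `<container-id>` placeholder is to list the containers on the node running a cassandra pod (a sketch, not part of the original example):

```sh
# On the node hosting a cassandra pod, find a container id to pass to docker exec.
docker ps | grep cassandra
```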

### Seed Provider Source

@@ -143,19 +143,19 @@ Let's see what we've got:

```
$ kubectl get pods,rc,services,secrets --namespace=mytunes

POD              IP            CONTAINER(S)   IMAGE(S)                       HOST                                     LABELS          STATUS    CREATED     MESSAGE
music-db-0fwsu   10.244.2.48                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   29 seconds
music-db-5pc2e   10.244.0.24                                                 kubernetes-minion-3c8c/146.148.41.184    name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   6 minutes
music-db-bjqmv   10.244.3.31                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   19 seconds
music-db-swtrs   10.244.1.37                                                 kubernetes-minion-f9dw/130.211.159.230   name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   6 minutes
CONTROLLER   CONTAINER(S)   IMAGE(S)                       SELECTOR        REPLICAS
music-db     es             kubernetes/elasticsearch:1.0   name=music-db   4
NAME           LABELS          SELECTOR        IP(S)            PORT(S)
music-server   name=music-db   name=music-db   10.0.138.61      9200/TCP
                                               104.197.12.157
NAME               TYPE     DATA
apiserver-secret   Opaque   2
```

@@ -235,30 +235,30 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true

Let's ramp up the number of Elasticsearch nodes from 4 to 10:

```
$ kubectl scale --replicas=10 replicationcontrollers music-db --namespace=mytunes
scaled
$ kubectl get pods --namespace=mytunes
POD              IP            CONTAINER(S)   IMAGE(S)                       HOST                                     LABELS          STATUS    CREATED      MESSAGE
music-db-0fwsu   10.244.2.48                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   26 minutes
music-db-2erje   10.244.2.50                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-5pc2e   10.244.0.24                                                 kubernetes-minion-3c8c/146.148.41.184    name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   32 minutes
music-db-8rkvp   10.244.3.33                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-bjqmv   10.244.3.31                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   26 minutes
music-db-efc46   10.244.2.49                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-fhqyg   10.244.0.25                                                 kubernetes-minion-3c8c/146.148.41.184    name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   47 seconds
music-db-guxe4   10.244.3.32                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-pbiq1   10.244.1.38                                                 kubernetes-minion-f9dw/130.211.159.230   name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   47 seconds
music-db-swtrs   10.244.1.37                                                 kubernetes-minion-f9dw/130.211.159.230   name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   32 minutes
```

Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:
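
One way to confirm is to ask the cluster itself; Elasticsearch's `_cluster/health` endpoint reports `number_of_nodes`, which should now be 10 (a sketch reusing this example's external IP):

```
$ curl 104.197.12.157:9200/_cluster/health?pretty=true
```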

@@ -52,7 +52,7 @@ $ kubectl create -f hazelcast-service.yaml

### Adding replicated nodes

The real power of Kubernetes and Hazelcast lies in easily building a replicated, scalable Hazelcast cluster.

In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query that identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches its desired state.
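
The controller file itself is elided from this diff. As a rough sketch (not the file shipped with the example), a controller consistent with the image, labels, and port that appear later on this page could look like:

```yaml
apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: hazelcast
spec:
  replicas: 1
  selector:
    name: hazelcast          # adopt pods carrying this label
  template:
    metadata:
      labels:
        name: hazelcast
    spec:
      containers:
      - name: hazelcast
        image: pires/hazelcast-k8s:0.2
        ports:
        - containerPort: 5701   # Hazelcast's cluster port, seen in the logs below
```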

@@ -129,9 +129,9 @@ You can see that the _Service_ has found the pod created by the replication controller

Now it gets even more interesting.

Let's scale our cluster to 2 pods:

```sh
$ kubectl scale rc hazelcast --replicas=2
```

Now if you list the pods in your cluster, you should see two hazelcast pods:

@@ -141,7 +141,7 @@ $ kubectl get pods

```sh
POD               IP            CONTAINER(S)   IMAGE(S)                  HOST                                 LABELS           STATUS    CREATED      MESSAGE
hazelcast-pkyzd   10.244.90.3                                            e2e-test-minion-vj7k/104.197.8.214   name=hazelcast   Running   14 seconds
                                hazelcast      pires/hazelcast-k8s:0.2                                                         Running   2 seconds
hazelcast-ulkws   10.244.66.2                                            e2e-test-minion-2x1f/146.148.62.37   name=hazelcast   Running   7 seconds
                                hazelcast      pires/hazelcast-k8s:0.2                                                         Running   6 seconds
```

@@ -175,9 +175,9 @@ Members [2] {

```
2015-05-09 22:06:31.177 INFO 5 --- [ main] com.hazelcast.core.LifecycleService : [10.244.66.2]:5701 [someGroup] [3.4.2] Address[10.244.66.2]:5701 is STARTED
```

Now let's scale our cluster to 4 nodes:

```sh
$ kubectl scale rc hazelcast --replicas=4
```

Examine the status again by checking a node's log and you should see the 4 members connected.
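
For example, to pull one node's log (a sketch: the pod and container names are taken from the listing above and will differ in your cluster; newer kubectl spells the subcommand `logs`, while some older releases used `log`):

```sh
# Fetch the log of one hazelcast pod to inspect cluster membership.
$ kubectl logs hazelcast-pkyzd hazelcast
```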

@@ -193,10 +193,10 @@ kubectl create -f hazelcast-service.yaml

```sh
kubectl create -f hazelcast-controller.yaml

# scale up to 2 nodes
kubectl scale rc hazelcast --replicas=2

# scale up to 4 nodes
kubectl scale rc hazelcast --replicas=4
```

### Hazelcast Discovery Source

@@ -56,15 +56,15 @@ We create it as follows:

```sh
kubectl create -f examples/redis/redis-sentinel-controller.yaml
```

### Scale our replicated pods

Initially, creating those pods didn't actually do anything: we only asked for one sentinel and one redis server, and they already existed. Now we will add more replicas:

```sh
kubectl scale rc redis --replicas=3
```

```sh
kubectl scale rc redis-sentinel --replicas=3
```

This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.
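
A quick way to verify the new replica counts is a pod listing (a sketch; your pod names will differ):

```sh
# Both the redis and redis-sentinel controllers should now show 3 pods each.
kubectl get pods | grep redis
```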

@@ -86,7 +86,7 @@ Now let's take a close look at what happens after this pod is deleted.

3. The redis sentinels themselves realize that the master has disappeared from the cluster, and begin the election procedure for selecting a new master. They perform this election and selection, and choose one of the existing redis server replicas to be the new master.
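
You can watch this failover from the outside (a sketch; the sentinel pod name below is illustrative, take a real one from the pod listing, and note that older kubectl releases spelled `logs` as `log`):

```sh
# The deleted master's replacement shows up in the pod listing.
kubectl get pods

# A sentinel's log shows the election and the newly chosen master
# (pod name is hypothetical; substitute one from your listing).
kubectl logs redis-sentinel-abc12
```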

### Conclusion

At this point we now have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.

### tl; dr

For those of you who are impatient, here is the summary of commands we ran in this tutorial:

@@ -104,9 +104,9 @@ kubectl create -f examples/redis/redis-controller.yaml

```sh
# Create a replication controller for redis sentinels
kubectl create -f examples/redis/redis-sentinel-controller.yaml

# Scale both replication controllers
kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3

# Delete the original master pod
kubectl delete pods redis-master
```

@@ -49,8 +49,8 @@ check out again:

```shell
$ kubectl get po
POD                         IP   CONTAINER(S)   IMAGE(S)                     HOST                      LABELS                       STATUS    CREATED      MESSAGE
rethinkdb-rc-1.16.0-6odi0                                                    kubernetes-minion-s59e/   db=rethinkdb,role=replicas   Pending   11 seconds
                                 rethinkdb      antmanler/rethinkdb:1.16.0
```

**Done!**

@@ -61,20 +61,20 @@ rethinkdb-rc-1.16.0-6odi0

Scale
-----

You can scale up your cluster using `kubectl scale`; new pods will join the existing cluster automatically. For example:

```shell
$ kubectl scale rc rethinkdb-rc-1.16.0 --replicas=3
scaled
$ kubectl get po
POD                         IP           CONTAINER(S)   IMAGE(S)                     HOST                                   LABELS                       STATUS    CREATED          MESSAGE
rethinkdb-rc-1.16.0-6odi0   10.244.3.3                                               kubernetes-minion-s59e/104.197.79.42   db=rethinkdb,role=replicas   Running   About a minute
                                         rethinkdb      antmanler/rethinkdb:1.16.0                                                                       Running   About a minute
rethinkdb-rc-1.16.0-e3mxv                                                            kubernetes-minion-d7ub/                db=rethinkdb,role=replicas   Pending   6 seconds
                                         rethinkdb      antmanler/rethinkdb:1.16.0
rethinkdb-rc-1.16.0-manu6                                                            kubernetes-minion-cybz/                db=rethinkdb,role=replicas   Pending   6 seconds
                                         rethinkdb      antmanler/rethinkdb:1.16.0
```

Admin

@@ -93,7 +93,7 @@ find the service

```shell
$ kubectl get se
NAME               LABELS        SELECTOR                  IP(S)            PORT(S)
rethinkdb-admin    db=influxdb   db=rethinkdb,role=admin   10.0.131.19      8080/TCP
                                                           104.197.19.120
rethinkdb-driver   db=influxdb   db=rethinkdb              10.0.27.114      28015/TCP
```
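
Given the external IP in that listing, the admin UI from this example should be reachable directly (illustrative):

```shell
# Fetch the admin UI's front page, or open this URL in a browser.
$ curl -s http://104.197.19.120:8080/
```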

@@ -47,12 +47,12 @@ $ ./cluster/kubectl.sh create -f examples/update-demo/nautilus-rc.yaml

After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so), you'll see a couple of squares in the UI detailing the pods that are running, along with the image that they are serving up. A cute little nautilus.

### Step Three: Try scaling the controller

Now we will increase the number of replicas from two to four:

```bash
$ ./cluster/kubectl.sh scale rc update-demo-nautilus --replicas=4
```

If you go back to the [demo website](http://localhost:8001/static/index.html) you should eventually see four boxes, one for each pod.

@@ -66,7 +66,7 @@ $ ./cluster/kubectl.sh rolling-update update-demo-nautilus --update-period=10s

The rolling-update command in kubectl will do 2 things:

1. Create a new replication controller with a pod template that uses the new image (`gcr.io/google_containers/update-demo:kitten`)
2. Scale the old and new replication controllers until the new controller replaces the old, as sketched below. This will kill the current pods one at a time, spinning up new ones to replace them.
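
Step 2 uses the same `scale` mechanism shown above, applied in both directions. Roughly (an illustrative sketch; `update-demo-kitten` is an assumed name for the new controller):

```bash
# What rolling-update automates, one step at a time (illustrative):
$ ./cluster/kubectl.sh scale rc update-demo-kitten --replicas=1
$ ./cluster/kubectl.sh scale rc update-demo-nautilus --replicas=3
# ...repeated until the new controller owns all replicas and the old one is removed.
```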

Watch the [demo website](http://localhost:8001/static/index.html); it will update one pod every 10 seconds until all of the pods have the new image.