Merge pull request #231 from pwittrock/hellonode

Update hello-node example to use deployments
johndmulhausen 2016-03-25 13:27:47 -07:00
commit 4af425edad
2 changed files with 146 additions and 57 deletions


@@ -101,7 +101,7 @@ docker ps
CONTAINER ID   IMAGE                             COMMAND
2c66d0efcbd4   gcr.io/PROJECT_ID/hello-node:v1   "/bin/sh -c 'node
docker stop 2c66d0efcbd4
2c66d0efcbd4
```
@@ -128,77 +128,120 @@ It's now time to deploy your own containerized application to the Kubernetes c
## Create your pod
A Kubernetes **[pod](/docs/user-guide/pods/)** is a group of containers, tied together for the purposes of administration and networking. It can contain a single container or multiple containers.
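Before creating one with a single command, it may help to see what a minimal single-container pod manifest looks like written out by hand (a sketch for illustration only; the fields shown are the standard v1 pod spec, and this tutorial creates the workload with `kubectl run` below instead):

```shell
# Illustrative only -- do not run this; the tutorial creates the workload with `kubectl run` below.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-node
spec:
  containers:
  - name: hello-node
    image: gcr.io/PROJECT_ID/hello-node:v1
    ports:
    - containerPort: 8080
EOF
```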
Create a pod with the `kubectl run` command:
```shell
kubectl run hello-node --image=gcr.io/PROJECT_ID/hello-node:v1 --port=8080
deployment "hello-node" created
```
As shown in the output, `kubectl run` created a **[deployment](/docs/user-guide/deployments/)** object. Deployments are the recommended way to manage the creation and scaling of pods. In this example, a new deployment manages a single pod replica running the *hello-node:v1* image.
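For the curious, the declarative equivalent of that `kubectl run` invocation is roughly the manifest below (a sketch using the `extensions/v1beta1` API group current when this page was written; don't run it now, since the deployment already exists):

```shell
# Illustrative only -- the deployment already exists, so this would fail with "AlreadyExists".
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1                 # one pod replica, as created by `kubectl run`
  template:
    metadata:
      labels:
        run: hello-node       # the label `kubectl run` applies automatically
    spec:
      containers:
      - name: hello-node
        image: gcr.io/PROJECT_ID/hello-node:v1
        ports:
        - containerPort: 8080
EOF
```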
To view the deployment we just created, run:
```shell
kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           3m
```
To view the pod created by the deployment, run:
```shell
kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-node-714049816-ztzrb   1/1       Running   0          6m
```
To view the stdout / stderr from a pod (the hello-node image writes no output, so the logs will be empty in this case), run:
```shell
kubectl logs <POD-NAME>
```
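If you'd rather not copy the pod name by hand, you can look it up by the `run=hello-node` label that `kubectl run` applied (a sketch; assumes your kubectl supports the `jsonpath` output format):

```shell
# Grab the name of the first pod matching the label, then fetch its logs.
POD_NAME=$(kubectl get pods -l run=hello-node -o jsonpath='{.items[0].metadata.name}')
kubectl logs $POD_NAME
```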
To view metadata about the cluster, run:
```shell
kubectl cluster-info
```
To view cluster events, run:
```shell
kubectl get events
```
To view the kubectl configuration, run:
```shell
kubectl config view
```
Full documentation for kubectl commands is available **[here](/docs/user-guide/kubectl-overview/)**.
At this point you should have your container running under the control of Kubernetes, but you still have to make it accessible to the outside world.
## Allow external traffic
By default, the pod is only accessible by its internal IP within the Kubernetes cluster. In order to make the `hello-node` container accessible from outside the Kubernetes virtual network, you have to expose the pod as a Kubernetes **[service](/docs/user-guide/services/)**.
From our development machine we can expose the pod with the `kubectl expose` command and the `--type="LoadBalancer"` flag, which creates an external IP to accept traffic:
```shell
kubectl expose deployment hello-node --type="LoadBalancer"
```
The flag used in this command specifies that we'll be using the load-balancer provided by the underlying infrastructure (in this case the [Compute Engine load balancer](https://cloud.google.com/compute/docs/load-balancing/)). Note that we expose the deployment, and not the pod directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but we will add more replicas later).
The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud Platform.
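Once the service exists, you can also inspect which pod endpoints it is balancing across; `kubectl describe` prints them under the `Endpoints` field:

```shell
kubectl describe services hello-node
# Among other fields, look for a line like:
#   Endpoints:   10.0.1.5:8080
# One pod IP:port pair is listed per replica behind the load balancer.
```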
To find the IP addresses associated with the service, run:
```shell
kubectl get services hello-node
NAME         CLUSTER_IP    EXTERNAL_IP     PORT(S)    SELECTOR         AGE
hello-node   10.3.246.12                   8080/TCP   run=hello-node   23s
```
The `EXTERNAL_IP` may take several minutes to become available and visible. If the `EXTERNAL_IP` is missing, wait a few minutes and try again.
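Rather than re-running the command by hand, you can leave a watch running until the external IP appears (a sketch; the `-w` flag streams updates as the object changes):

```shell
# Blocks and prints a new line each time the service object is updated;
# press Ctrl+C once the EXTERNAL_IP column is populated.
kubectl get services hello-node -w
```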
```shell
kubectl get services hello-node
NAME         CLUSTER_IP    EXTERNAL_IP     PORT(S)    SELECTOR         AGE
hello-node   10.3.246.12   23.251.159.72   8080/TCP   run=hello-node   2m
```
Note that there are two IP addresses listed, both serving port 8080. The `CLUSTER_IP` is only visible inside your cloud virtual network; the `EXTERNAL_IP` is externally accessible. In this example, the external IP address is 23.251.159.72.
You should now be able to reach the service by pointing your browser to `http://<EXTERNAL_IP>:8080` or by running `curl http://<EXTERNAL_IP>:8080`.
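For example, with the external IP from the output above (yours will differ; the response text assumes the unmodified hello-node server built earlier in this tutorial):

```shell
curl http://23.251.159.72:8080
Hello World!
```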
![image](/images/hellonode/image_12.png)
## Scale up your website
One of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity; you can simply tell the deployment to manage a new number of replicas for your pod:
```shell
kubectl scale deployment hello-node --replicas=4
```
You now have four replicas of your application, each running independently on the cluster, with the load balancer you created earlier serving traffic to all of them.
```shell
kubectl get deployment
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   4         4         4            3           40m
```
```shell
kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
hello-node-714049816-g4azy   1/1       Running   0          1m
hello-node-714049816-rk0u6   1/1       Running   0          1m
hello-node-714049816-sh812   1/1       Running   0          1m
hello-node-714049816-ztzrb   1/1       Running   0          41m
```
Note the **declarative approach** here: rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes reconciliation loops simply make sure reality matches what you requested, and take action if needed.
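You can watch the reconciliation loop at work by deleting one of the pods by hand; the deployment notices the shortfall and immediately starts a replacement to restore the declared count of four (substitute one of your own pod names from the listing above):

```shell
# Remove one replica by hand...
kubectl delete pod hello-node-714049816-g4azy
# ...then list the pods again: a freshly-created replacement appears with a new name and a low AGE.
kubectl get pods
```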
@@ -227,30 +270,82 @@ docker push gcr.io/PROJECT_ID/hello-node:v2
Building and pushing this updated image should be much quicker as we take full advantage of the Docker cache.
We're now ready for Kubernetes to smoothly update our deployment to the new version of the application. In order to change the image tag for our running container, we will need to edit the existing *hello-node* deployment and change the image from `gcr.io/PROJECT_ID/hello-node:v1` to `gcr.io/PROJECT_ID/hello-node:v2`. To do this, we will use the `kubectl edit` command. This will open up a text editor displaying the full deployment yaml configuration. It isn't necessary to understand the full yaml config right now; just understand that by updating the `spec.template.spec.containers.image` field in the config we are telling the deployment to update the pods to use the new image.
```shell
kubectl edit deployment hello-node
```
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2016-03-24T17:55:28Z
  generation: 3
  labels:
    run: hello-node
  name: hello-node
  namespace: default
  resourceVersion: "151017"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/hello-node
  uid: 981fe302-f1e9-11e5-9a78-42010af00005
spec:
  replicas: 4
  selector:
    matchLabels:
      run: hello-node
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-node
    spec:
      containers:
      - image: gcr.io/PROJECT_ID/hello-node:v1 # Update this line
        imagePullPolicy: IfNotPresent
        name: hello-node
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
After making the change, save and close the file.
```
deployment "hello-node" edited
```
This updates the deployment with the new image, causing new pods to be created with the new image and old pods to be deleted.
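You can follow the update while it happens (a sketch; assumes your kubectl includes the `rollout` subcommand that shipped alongside deployments):

```shell
# Blocks until all pods have been replaced with the new image.
kubectl rollout status deployment/hello-node
```

While the update is in flight, listing the deployments shows the old and new replica counts overlapping: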
```
kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   4         5         4            3           1h
```
While this is happening, users of the service should not see any interruption. After a little while they will start accessing the new version of your application. You can find more details in the [deployment documentation](/docs/user-guide/deployments/).
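Deployments also record a revision history, so if the new image misbehaves you can back the update out (again assuming the `rollout` subcommand is available in your kubectl):

```shell
# List the recorded revisions of the deployment...
kubectl rollout history deployment/hello-node
# ...and roll back to the previous one (the v1 image in this case).
kubectl rollout undo deployment/hello-node
```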
Hopefully with these deployment, scaling, and update features you'll agree that once you've set up your environment (your GKE/Kubernetes cluster here), Kubernetes is here to help you focus on the application rather than the infrastructure.
@@ -261,7 +356,7 @@ While logged into your development machine, execute the following commands:
```shell
kubectl config view | grep "password"
password: vUYwC5ATJMWa6goh
kubectl cluster-info
...
KubeUI is running at https://<ip-address>/api/v1/proxy/namespaces/kube-system/services/kube-ui
...
@@ -275,16 +370,10 @@ Navigate to the URL that is shown after "KubeUI is running at" and log in wi
That's it for the demo! So you don't leave this all running and incur charges, let's learn how to tear things down.
Delete the Deployment (which also deletes the running pods) and Service (which also deletes your external load balancer):
```shell
kubectl delete service,deployment hello-node
```
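To double-check that nothing is left running before tearing down the cluster, ask for the objects again; both lookups should now come back with "not found" errors:

```shell
kubectl get deployment hello-node
kubectl get service hello-node
```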
Delete your cluster:
@@ -306,7 +395,7 @@ Finally, delete the Docker registry storage bucket hosting your image(s):
```shell
gsutil ls
gs://artifacts.<PROJECT_ID>.appspot.com/
gsutil rm -r gs://artifacts.<PROJECT_ID>.appspot.com/
Removing gs://artifacts.<PROJECT_ID>.appspot.com/...
```
