commit de1558b744
johndmulhausen, 2016-03-16 15:54:46 -07:00
12 changed files with 358 additions and 140 deletions

.gitignore

@@ -3,3 +3,4 @@
.jekyll-metadata
_site/**
.sass-cache/**
CNAME

@@ -25,7 +25,7 @@ First install rvm

Then load it into your environment

source ${HOME}/.rvm/scripts/rvm (or whatever is prompted by the installer)

Then install Ruby 2.2 or higher
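Putting those steps together, the flow looks roughly like this (a sketch; the exact Ruby version string available to rvm may differ on your system):

```shell
# Load rvm into the current shell, then install and select Ruby 2.2+
source ${HOME}/.rvm/scripts/rvm
rvm install 2.2
rvm use 2.2 --default
```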

@@ -155,7 +155,7 @@ toc:
- title: kube-proxy CLI
  path: /docs/admin/kube-proxy/
- title: kube-scheduler CLI
  path: /docs/admin/kube-scheduler/
- title: kubelet CLI

@@ -161,6 +161,14 @@ users:
  user:
    client-certificate: /path/to/cert.pem # cert for the webhook plugin to use
    client-key: /path/to/key.pem          # key matching the cert

# kubeconfig files require a context. Provide one for the API Server.
current-context: webhook
contexts:
- context:
    cluster: name-of-remote-authz-service
    user: name-of-api-server
  name: webhook
```

### Request Payloads

@@ -44,15 +44,11 @@ process.

These controllers include:

* Node Controller: Responsible for noticing & responding when nodes go down.
* Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
* Endpoints Controller: Populates the Endpoints object (i.e., joins Services & Pods).
* Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
* ... and others.
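On a live cluster, one quick way to confirm that these control plane components are healthy is `kubectl get componentstatuses` (a sketch; the output shown is illustrative and varies by cluster):

```console
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
```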
### kube-scheduler

@@ -110,14 +106,15 @@ the Kubernetes runtime environment.

### kubelet

[kubelet](/docs/admin/kubelet) is the primary node agent. It:

* Watches for pods that have been assigned to its node (either by apiserver or via local configuration file) and:
  * Mounts the pod's required volumes
  * Downloads the pod's secrets
  * Runs the pod's containers via docker (or, experimentally, rkt).
* Periodically executes any requested container liveness probes.
* Reports the status of the pod back to the rest of the system, by creating a "mirror pod" if necessary.
* Reports the status of the node back to the rest of the system.

### kube-proxy

@@ -61,8 +61,8 @@ the following conditions mean the node is in a sane state:

"conditions": [
  {
    "kind": "Ready",
    "status": "True"
  }
]
```
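To inspect these conditions on a running cluster, `kubectl describe node` prints them along with the node's capacity and events; a sketch (`<node-name>` is a placeholder):

```console
$ kubectl get nodes
$ kubectl describe node <node-name>
```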

@@ -12,6 +12,8 @@ The list of binary releases is available for download from the [GitHub Kubernete

Download the latest release and unpack this tar file on Linux or OS X, cd to the created `kubernetes/` directory, and then follow the getting started guide for your cloud.

On OS X you can also use the [homebrew](http://brew.sh/) package manager: `brew install kubernetes-cli`

### Building from source

Get the Kubernetes source. If you are simply building a release from source there is no need to set up a full golang environment as all building happens in a Docker container.
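A minimal sketch of that flow, assuming git and Docker are installed (`make release` drives the containerized build):

```shell
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make release
```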

@@ -10,11 +10,11 @@ Sometimes things go wrong. This guide is aimed at making them right. It has tw

You should also check the [known issues](/docs/user-guide/known-issues) for the release you're using.

### Getting help

If your problem isn't answered by any of the guides above, there are a variety of ways for you to get help from the Kubernetes team.

### Questions

If you aren't familiar with it, many of your questions may be answered by the [user guide](/docs/user-guide/).

@@ -29,21 +29,21 @@ You may also find the Stack Overflow topics relevant:

* [Kubernetes](http://stackoverflow.com/questions/tagged/kubernetes)
* [Google Container Engine - GKE](http://stackoverflow.com/questions/tagged/google-container-engine)

## Help! My question isn't covered! I need help now!

### Stack Overflow

Someone else from the community may have already asked a similar question or may be able to help with your problem. The Kubernetes team will also monitor [posts tagged kubernetes](http://stackoverflow.com/questions/tagged/kubernetes). If there aren't any existing questions that help, please [ask a new one](http://stackoverflow.com/questions/ask?tags=kubernetes)!

### <a name="slack"></a>Slack

The Kubernetes team hangs out on Slack in the `#kubernetes-users` channel. You can participate in the discussion [here](https://kubernetes.slack.com). Slack requires registration, but registration is open to anyone; you can sign up [here](http://slack.kubernetes.io). Feel free to come and ask any and all questions.

### Mailing List

The Google Container Engine mailing list is [google-containers@googlegroups.com](https://groups.google.com/forum/#!forum/google-containers).

### Bugs and Feature requests

If you have what looks like a bug, or you would like to make a feature request, please use the [GitHub issue tracking system](https://github.com/kubernetes/kubernetes/issues).

@@ -0,0 +1,16 @@

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.91
        ports:
        - containerPort: 80

@@ -6,7 +6,7 @@

## What is a _Deployment_?

A _Deployment_ provides declarative updates for Pods and ReplicaSets.
Users describe the desired state in a Deployment object, and the deployment
controller changes the actual state to the desired state at a controlled rate.
Users can define Deployments to create new resources, or replace existing ones
@@ -14,66 +14,64 @@ by new ones.

A typical use case is:

* Create a Deployment to bring up a replica set and pods.
* Later, update that Deployment to recreate the pods (for example, to use a new image).
* Roll back to an earlier Deployment revision if the current Deployment isn't stable.
* Pause and resume a Deployment.

## Creating a Deployment

Here is an example Deployment. It creates a replica set to bring up 3 nginx pods.

{% include code.html language="yaml" file="nginx-deployment.yaml" ghlink="/docs/user-guide/nginx-deployment.yaml" %}

Run the example by downloading the example file and then running this command:

```console
$ kubectl create -f docs/user-guide/nginx-deployment.yaml --record
deployment "nginx-deployment" created
```
Setting the kubectl flag `--record` to `true` records the current command in the annotations of the resources being created or updated. This is useful for future introspection; for example, to see the commands executed in each Deployment revision.

Then running `get` immediately will give:

```console
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         0         0            0           1s
```
This indicates that the Deployment's number of desired replicas is 3 (according to the deployment's `.spec.replicas`), the number of current replicas (`.status.replicas`) is 0, the number of up-to-date replicas (`.status.updatedReplicas`) is 0, and the number of available replicas (`.status.availableReplicas`) is also 0.

Running the `get` again a few seconds later should give:

```console
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s
```
This indicates that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest pod template) and available (the pod status is ready for at least the deployment's `.spec.minReadySeconds`). Running `kubectl get rs` and `kubectl get pods` will show the replica set (RS) and pods created.

```console
$ kubectl get rs
NAME                          DESIRED   CURRENT   AGE
nginx-deployment-2035384211   3         3         18s
```

You may notice that the name of the replica set is always `<the name of the Deployment>-<hash value of the pod template>`.

```console
$ kubectl get pods --show-labels
NAME                                READY     STATUS    RESTARTS   AGE       LABELS
nginx-deployment-2035384211-7ci7o   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-kzszj   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
nginx-deployment-2035384211-qqcnn   1/1       Running   0          18s       app=nginx,pod-template-hash=2035384211
```

The created replica set will ensure that there are three nginx pods at all times.

## Updating a Deployment
@@ -83,120 +81,301 @@ For this, we update our deployment file as follows:

{% include code.html language="yaml" file="new-nginx-deployment.yaml" ghlink="/docs/user-guide/new-nginx-deployment.yaml" %}

We can then `apply` the new Deployment:

```console
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
deployment "nginx-deployment" configured
```

Alternatively, we can `edit` the Deployment and change `.spec.template.spec.containers[0].image` from `nginx:1.7.9` to `nginx:1.9.1`:

```console
$ kubectl edit deployment/nginx-deployment
deployment "nginx-deployment" edited
```
Running a `get` immediately will give:

```console
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         0            3           20s
```

The 0 up-to-date replicas indicate that the deployment hasn't yet updated the replicas to the latest configuration. The current replicas count is the total number of replicas this Deployment manages (3 with the old configuration and 0 with the new one), and the available replicas count is the number of current replicas that are available.

The Deployment will update all the pods in a few seconds.

```console
$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           36s
```
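You can also watch the rollout as it happens with `kubectl rollout status`; a sketch (the exact output line may differ between versions):

```console
$ kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
```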
We can run `kubectl get rs` to see that the Deployment updated the pods by creating a new replica set and scaling it up to 3 replicas, as well as scaling down the old replica set to 0 replicas.

```console
$ kubectl get rs
NAME                          DESIRED   CURRENT   AGE
nginx-deployment-1564180365   3         3         6s
nginx-deployment-2035384211   0         0         36s
```

Running `get pods` should now show only the new pods:

```console
$ kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-1564180365-khku8   1/1       Running   0          14s
nginx-deployment-1564180365-nacti   1/1       Running   0          14s
nginx-deployment-1564180365-z9gth   1/1       Running   0          14s
```

Next time we want to update these pods, we only need to update and re-apply the Deployment again.

Deployment can ensure that only a certain number of pods may be down while they are being updated. By default, it ensures that at least 1 less than the desired number of pods are up (1 max unavailable).

Deployment can also ensure that only a certain number of pods may be created above the desired number of pods. By default, it ensures that at most 1 more than the desired number of pods are up (1 max surge).

For example, if you look at the above deployment closely, you will see that it first created a new pod, then deleted some old pods and created new ones. It does not kill old pods until a sufficient number of new pods have come up, and does not create new pods until a sufficient number of old pods have been killed. This makes sure that the number of available pods is at least 2 and the number of total pods is at most 4.
```console
$ kubectl describe deployments
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 15 Mar 2016 12:01:06 -0700
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          nginx-deployment-1564180365 (3/3 replicas created)
Events:
  FirstSeen   LastSeen   Count   From                      SubobjectPath   Type      Reason              Message
  ---------   --------   -----   ----                      -------------   --------  ------              -------
  36s         36s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  23s         23s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  23s         23s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  23s         23s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  21s         21s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  21s         21s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
```

Here we see that when we first created the Deployment, it created a replica set (nginx-deployment-2035384211) and scaled it up to 3 replicas directly. When we updated the Deployment, it created a new replica set (nginx-deployment-1564180365) and scaled it up to 1, then scaled down the old replica set to 2, so that at least 2 pods were available and at most 4 pods were created at all times. It then continued scaling up and down the new and the old replica set, with the same rolling update strategy. Finally, there are 3 available replicas in the new replica set, and the old replica set is scaled down to 0.
### Multiple Updates

Each time a new deployment object is observed by the deployment controller, a replica set is created to bring up the desired pods if there is no existing replica set doing so. Existing replica sets controlling pods whose labels match `.spec.selector` but whose template does not match `.spec.template` are scaled down. Eventually, the new replica set will be scaled to `.spec.replicas` and all old replica sets will be scaled to 0.

If the user updates a Deployment while an existing deployment is in progress, the Deployment will create a new replica set as per the update and start scaling that up, and will roll the replica set that it was scaling up previously; it will add it to its list of old replica sets and will start scaling it down.

For example, suppose the user creates a Deployment to create 5 replicas of `nginx:1.7.9`, but then updates the Deployment to create 5 replicas of `nginx:1.9.1`, when only 3 replicas of `nginx:1.7.9` had been created. In that case, the Deployment will immediately start killing the 3 `nginx:1.7.9` pods that it had created, and will start creating `nginx:1.9.1` pods. It will not wait for 5 replicas of `nginx:1.7.9` to be created before changing course.
## Rolling Back a Deployment
Sometimes we may want to roll back a Deployment; for example, when the previous Deployment is crash looping.
Suppose that we made a typo while updating the Deployment, by putting the image name as `nginx:1.91` instead of `nginx:1.9.1`:
```console
$ kubectl apply -f docs/user-guide/bad-nginx-deployment.yaml
deployment "nginx-deployment" configured
```
You will see that both the number of old replicas (nginx-deployment-1564180365 and nginx-deployment-2035384211) and new replicas (nginx-deployment-3066724191) are 2.
```console
$ kubectl get rs
NAME                          DESIRED   CURRENT   AGE
nginx-deployment-1564180365   2         2         25s
nginx-deployment-2035384211   0         0         36s
nginx-deployment-3066724191   2         2         6s
```
Looking at the pods created, you will see that the 2 pods created by the new replica set are crash looping.
```console
$ kubectl get pods
NAME                                READY     STATUS             RESTARTS   AGE
nginx-deployment-1564180365-70iae   1/1       Running            0          25s
nginx-deployment-1564180365-jbqqo   1/1       Running            0          25s
nginx-deployment-3066724191-08mng   0/1       ImagePullBackOff   0          6s
nginx-deployment-3066724191-eocby   0/1       ImagePullBackOff   0          6s
```
Note that the Deployment controller will stop the bad rollout automatically, and will stop scaling up the new replica set.
```console
$ kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 15 Mar 2016 14:48:04 -0700
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               2 updated | 3 total | 2 available | 2 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         nginx-deployment-1564180365 (2/2 replicas created)
NewReplicaSet:          nginx-deployment-3066724191 (2/2 replicas created)
Events:
  FirstSeen   LastSeen   Count   From                      SubobjectPath   Type      Reason              Message
  ---------   --------   -----   ----                      -------------   --------  ------              -------
  1m          1m         1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  22s         22s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  22s         22s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  22s         22s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  21s         21s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  21s         21s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
  13s         13s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
  13s         13s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-1564180365 to 2
  13s         13s        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 2
```
To fix this, we need to roll back to a previous revision of the Deployment that is stable.
First, check the revisions of this deployment:
```console
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment":
REVISION    CHANGE-CAUSE
1           kubectl create -f docs/user-guide/nginx-deployment.yaml --record
2           kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
3           kubectl apply -f docs/user-guide/bad-nginx-deployment.yaml
```
Because we recorded the command while creating this Deployment using `--record`, we can easily see the changes we made in each revision.
To further see the details of each revision, run:
```console
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels:         app=nginx,pod-template-hash=1564180365
Annotations:    kubernetes.io/change-cause=kubectl apply -f docs/user-guide/new-nginx-deployment.yaml
Image(s):       nginx:1.9.1
No volumes.
```
Now we've decided to undo the current rollout and roll back to the previous revision:
```console
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
```
Alternatively, you can roll back to a specific revision by specifying it with `--to-revision`:
```console
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
```
The Deployment is now rolled back to a previous stable revision. As you can see, a `DeploymentRollback` event for rolling back to revision 2 is generated by the Deployment controller.
```console
$ kubectl get deployment
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           30m
$ kubectl describe deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Tue, 15 Mar 2016 14:48:04 -0700
Labels:                 app=nginx
Selector:               app=nginx
Replicas:               3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
OldReplicaSets:         <none>
NewReplicaSet:          nginx-deployment-1564180365 (3/3 replicas created)
Events:
  FirstSeen   LastSeen   Count   From                      SubobjectPath   Type      Reason              Message
  ---------   --------   -----   ----                      -------------   --------  ------              -------
  30m         30m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-2035384211 to 3
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 1
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 2
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 2
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-2035384211 to 0
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 2
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-3066724191 to 1
  29m         29m        1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-1564180365 to 2
  2m          2m         1       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled down replica set nginx-deployment-3066724191 to 0
  2m          2m         1       {deployment-controller }                  Normal    DeploymentRollback  Rolled back deployment "nginx-deployment" to revision 2
  29m         2m         2       {deployment-controller }                  Normal    ScalingReplicaSet   Scaled up replica set nginx-deployment-1564180365 to 3
```
## Pausing and Resuming a Deployment
You can also pause a Deployment midway and then resume it. One use case is to support a canary deployment.
Update the Deployment again and then pause the Deployment with `kubectl rollout pause`:
```console
$ kubectl apply -f docs/user-guide/new-nginx-deployment.yaml; kubectl rollout pause deployment/nginx-deployment
deployment "nginx-deployment" configured
deployment "nginx-deployment" paused
```
Note that the current state of the Deployment will continue to function, but new updates to the Deployment will not have any effect as long as the Deployment is paused.
The Deployment was still in progress when we paused it, so the actions of scaling up and down replica sets are paused too.
```console
$ kubectl get rs
NAME                          DESIRED   CURRENT   AGE
nginx-deployment-1564180365   2         2         1h
nginx-deployment-2035384211   2         2         1h
nginx-deployment-3066724191   0         0         1h
```
To resume the Deployment, simply do `kubectl rollout resume`:
```console
$ kubectl rollout resume deployment/nginx-deployment
deployment "nginx-deployment" resumed
```
Then the Deployment will continue and finish the rollout:
```console
$ kubectl get rs
NAME                          DESIRED   CURRENT   AGE
nginx-deployment-1564180365   3         3         1h
nginx-deployment-2035384211   0         0         1h
nginx-deployment-3066724191   0         0         1h
```
Note: A paused Deployment cannot be scaled at this moment; we will add this feature in the 1.3 release, see [issue #20853](https://github.com/kubernetes/kubernetes/issues/20853). You cannot roll back a paused Deployment either; resume the Deployment first before doing a rollback.
## Writing a Deployment Spec

As with all other Kubernetes configs, a Deployment needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see the [deploying applications](/docs/user-guide/deploying-applications), [configuring containers](/docs/user-guide/configuring-containers), and [using kubectl to manage resources](/docs/user-guide/working-with-resources) documents.

A Deployment also needs a [`.spec` section](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/docs/devel/api-conventions.md#spec-and-status).
@@ -231,13 +410,12 @@ the default value.

All existing pods are killed before new ones are created when `.spec.strategy.type==Recreate`.
#### Rolling Update Deployment

The Deployment updates pods in a [rolling update](/docs/user-guide/update-demo/) fashion when `.spec.strategy.type==RollingUpdate`.

Users can specify `maxUnavailable` and `maxSurge` to control the rolling update process.
##### Max Unavailable

@@ -250,9 +428,9 @@ The absolute number is calculated from percentage by rounding up.

This can not be 0 if `.spec.strategy.rollingUpdate.maxSurge` is 0. By default, a fixed value of 1 is used.

For example, when this value is set to 30%, the old replica set can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, the old replica set can be scaled down further, followed by scaling up the new replica set, ensuring that the total number of pods available at all times during the update is at least 70% of the desired pods.
@@ -266,13 +444,13 @@ This can not be 0 if `MaxUnavailable` is 0.

The absolute number is calculated from percentage by rounding up. By default, a value of 1 is used.

For example, when this value is set to 30%, the new replica set can be scaled up immediately when the rolling update starts, such that the total number of old and new pods does not exceed 130% of desired pods. Once old pods have been killed, the new replica set can be scaled up further, ensuring that the total number of pods running at any time during the update is at most 130% of desired pods.
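To make the two knobs concrete, here is an illustrative spec fragment (the percentages are example values, not defaults):

```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 30%   # at least 70% of desired pods stay available during the update
      maxSurge: 30%         # at most 130% of desired pods may exist during the update
```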
### Min Ready Seconds

`.spec.minReadySeconds` is an optional field that specifies the minimum number of seconds for which a newly created pod should be ready

@@ -280,9 +458,25 @@ without any of its containers crashing, for it to be considered available.

This defaults to 0 (the pod will be considered available as soon as it is ready). To learn more about when a pod is considered ready, see [Container Probes](/docs/user-guide/pod-states/#container-probes).
### Rollback To
`.spec.rollbackTo` is an optional field with the configuration the Deployment is rolling back to. Setting this field will trigger a rollback, and this field will be cleared every time a rollback is done.
#### Revision
`.spec.rollbackTo.revision` is an optional field specifying the revision to rollback to. This defaults to 0, meaning rollback to the last revision in history.
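For illustration, a minimal fragment that would trigger a rollback to revision 2 (the revision number here is only an example):

```yaml
spec:
  rollbackTo:
    revision: 2   # 0 (the default) means the last revision in history
```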
### Revision History Limit
`.spec.revisionHistoryLimit` is an optional field that specifies the number of old replica sets to retain to allow rollback. By default, if this field is not set, all old replica sets will be kept. The configuration of each Deployment revision is stored in its replica sets; therefore, once an old replica set is deleted, you lose the ability to roll back to that revision of the Deployment.
### Paused
`.spec.paused` is an optional boolean field for pausing and resuming a Deployment. It defaults to false (a Deployment is not paused).
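A sketch combining these two fields, with example values:

```yaml
spec:
  revisionHistoryLimit: 5   # keep 5 old replica sets for rollback
  paused: true              # hold the rollout until resumed
```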
## Alternative to Deployments

### kubectl rolling update

[Kubectl rolling update](/docs/user-guide/kubectl/kubectl_rolling-update) updates pods and replication controllers in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have additional features, such as rolling back to any previous revision even after the rolling update is done. Also, replica sets supersede replication controllers.

@@ -43,7 +43,7 @@ Before you start using the Ingress resource, there are a few things you should u

* The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1.
* You need an Ingress controller to satisfy an Ingress. Simply creating the resource will have no effect.
* On GCE/GKE there should be a [L7 cluster addon](https://releases.k8s.io/{{page.githubbranch}}/cluster/addons/cluster-loadbalancing/glbc/README.md#prerequisites); on other platforms you either need to write your own or [deploy an existing controller](https://github.com/kubernetes/contrib/tree/master/ingress) as a pod.
* The resource currently does not support HTTPS, but will do so before it leaves beta.

## The Ingress Resource

@@ -239,7 +239,7 @@ You can achieve the same by invoking `kubectl replace -f` on a modified Ingress

* Combining L4 and L7 Ingress
* More Ingress controllers

Please track the [L7 and Ingress proposal](https://github.com/kubernetes/kubernetes/pull/12827) for more details on the evolution of the resource, and the [Ingress sub-repository](https://github.com/kubernetes/contrib/tree/master/ingress) for more details on the evolution of various Ingress controllers.

## Alternatives

@@ -248,4 +248,4 @@ You can expose a Service in multiple ways that don't directly involve the Ingres

* Use [Service.Type=LoadBalancer](/docs/user-guide/services/#type-loadbalancer)
* Use [Service.Type=NodePort](/docs/user-guide/services/#type-nodeport)
* Use a [Port Proxy](https://github.com/kubernetes/contrib/tree/master/for-demos/proxy-to-service)
* Deploy the [Service loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer). This allows you to share a single IP among multiple Services and achieve more advanced loadbalancing through Service Annotations.

@@ -165,4 +165,4 @@ selector:

#### Selecting sets of nodes

One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. See the documentation on [node selection](/docs/user-guide/node-selection) for more information.