Adding example for DaemonSet Rolling Update task

Adding fluentd daemonset example

Creating fluentd daemonset for update

Adding proper description for YAML file

This commit is contained in:
Rajesh Deshpande 2020-03-20 17:49:13 +05:30 committed by rajeshdeshpande02
parent 5321c65824
commit dfb8d40026
3 changed files with 135 additions and 35 deletions


@@ -43,21 +43,43 @@ To enable the rolling update feature of a DaemonSet, you must set its
You may want to set [`.spec.updateStrategy.rollingUpdate.maxUnavailable`](/docs/concepts/workloads/controllers/deployment/#max-unavailable) (default
to 1) and [`.spec.minReadySeconds`](/docs/concepts/workloads/controllers/deployment/#min-ready-seconds) (default to 0) as well.
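As a minimal sketch (the values shown are the defaults mentioned above, not recommendations), these fields sit directly under the DaemonSet `spec`:

```yaml
spec:
  minReadySeconds: 0          # a new pod must stay Ready this long before it counts as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most this many DaemonSet pods may be unavailable during the update
```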
### Creating a DaemonSet with `RollingUpdate` update strategy

This YAML file specifies a DaemonSet with the `RollingUpdate` update strategy:
{{< codenew file="controllers/fluentd-daemonset.yaml" >}}
After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
```shell
kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
```
Alternatively, use `kubectl apply` to create the same DaemonSet if you plan to
update the DaemonSet with `kubectl apply`.
```shell
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
```
### Checking DaemonSet `RollingUpdate` update strategy
Check the update strategy of your DaemonSet, and make sure it's set to
`RollingUpdate`:
```shell
kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
```
If you haven't created the DaemonSet in the system, check your DaemonSet
manifest with the following command instead:
```shell
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
```
The output from both commands should be:
@@ -69,28 +91,13 @@ RollingUpdate
If the output isn't `RollingUpdate`, go back and modify the DaemonSet object or
manifest accordingly.
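For example, if the strategy came back as `OnDelete`, one minimal fix (a sketch; `kubectl edit` on the live object works too) is to set the strategy in the manifest and re-apply it:

```yaml
# in the DaemonSet manifest, under .spec
updateStrategy:
  type: RollingUpdate
```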
### Updating a DaemonSet template

Any updates to a `RollingUpdate` DaemonSet `.spec.template` will trigger a rolling
update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different `kubectl` commands.
{{< codenew file="controllers/fluentd-daemonset-update.yaml" >}}
#### Declarative commands
@@ -99,21 +106,17 @@ If you update DaemonSets using
use `kubectl apply`:
```shell
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
```
#### Imperative commands
If you update DaemonSets using
[imperative commands](/docs/tasks/manage-kubernetes-objects/imperative-command/),
use `kubectl edit`:
```shell
kubectl edit ds/fluentd-elasticsearch -n kube-system
```
##### Updating only the container image
@@ -122,21 +125,21 @@ If you just need to update the container image in the DaemonSet template, i.e.
`.spec.template.spec.containers[*].image`, use `kubectl set image`:
```shell
kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
```
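Under the hood this edits only the image field of the pod template; the resulting fragment of the DaemonSet manifest would look roughly like this (the `v2.6.0` tag is just the example tag used in the command above):

```yaml
spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0  # updated tag; was v2.5.2
```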
### Watching the rolling update status
Finally, watch the rollout status of the latest DaemonSet rolling update:
```shell
kubectl rollout status ds/fluentd-elasticsearch -n kube-system
```
When the rollout is complete, the output is similar to this:
```shell
daemonset "<daemonset-name>" successfully rolled out
daemonset "fluentd-elasticsearch" successfully rolled out
```
## Troubleshooting
@@ -156,7 +159,7 @@ When this happens, find the nodes that don't have the DaemonSet pods scheduled on
by comparing the output of `kubectl get nodes` and the output of:
```shell
kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
```
Once you've found those nodes, delete some non-DaemonSet pods from the node to
@@ -183,6 +186,13 @@ If `.spec.minReadySeconds` is specified in the DaemonSet, clock skew between
master and nodes will make DaemonSet unable to detect the right rollout
progress.
## Clean up
Delete the DaemonSet from a namespace:
```shell
kubectl delete ds fluentd-elasticsearch -n kube-system
```
{{% /capture %}}


@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers


@@ -0,0 +1,42 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers