mirror of https://github.com/fluxcd/flagger.git

commit 225a9015bb
parent c0b60b1497

Update podinfo to v2.0

@@ -25,7 +25,7 @@ spec:
     spec:
       containers:
       - name: podinfod
-        image: quay.io/stefanprodan/podinfo:1.7.0
+        image: stefanprodan/podinfo:2.0.0
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9898

@@ -1,6 +1,6 @@
 apiVersion: v1
-version: 2.3.0
-appVersion: 1.7.0
+version: 3.0.0
+appVersion: 2.0.0
 name: podinfo
 engine: gotpl
 description: Flagger canary deployment demo chart

@@ -24,7 +24,7 @@ spec:
     spec:
       terminationGracePeriodSeconds: 30
       containers:
-      - name: {{ .Chart.Name }}
+      - name: podinfo
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:

@@ -1,7 +1,7 @@
 # Default values for podinfo.
 image:
-  repository: quay.io/stefanprodan/podinfo
-  tag: 1.7.0
+  repository: stefanprodan/podinfo
+  tag: 2.0.0
   pullPolicy: IfNotPresent
 
 service:

@@ -138,7 +138,7 @@ helm upgrade -i frontend flagger/podinfo/ \
   --reuse-values \
   --set canary.loadtest.enabled=true \
   --set canary.helmtest.enabled=true \
-  --set image.tag=1.7.1
+  --set image.tag=2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts the canary analysis:

@@ -283,17 +283,17 @@ metadata:
   namespace: test
   annotations:
     flux.weave.works/automated: "true"
-    flux.weave.works/tag.chart-image: semver:~1.7
+    flux.weave.works/tag.chart-image: semver:~2.0
 spec:
   releaseName: frontend
   chart:
-    repository: https://stefanprodan.github.io/flagger/
-    name: podinfo
-    version: 2.3.0
+    git: https://github.com/weaveworks/flagger
+    ref: master
+    path: charts/podinfo
   values:
     image:
-      repository: quay.io/stefanprodan/podinfo
-      tag: 1.7.0
+      repository: stefanprodan/podinfo
+      tag: 2.0.0
     backend: http://backend-podinfo:9898/echo
     canary:
       enabled: true

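The `semver:~2.0` filter above accepts only 2.0.x patch releases (>=2.0.0 and <2.1.0). A minimal sketch of that matching rule, using a hypothetical tag list rather than a live registry scan:

```shell
# Tags a registry scan might return (illustrative, not from a real registry).
tags="1.7.5 2.0.0 2.0.7 2.1.0"

# semver:~2.0 means >=2.0.0 and <2.1.0, i.e. only 2.0.x patch releases.
matches=""
for t in $tags; do
  case "$t" in
    2.0.*) matches="$matches $t"; echo "match: $t" ;;
    *)     echo "skip: $t" ;;
  esac
done
```

With the list above, only `2.0.0` and `2.0.7` would trigger a Helm release upgrade.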
@@ -311,26 +311,26 @@ In the `chart` section I've defined the release source by specifying the Helm re
 In the `values` section I've overwritten the defaults set in values.yaml.
 
 With the `flux.weave.works` annotations I instruct Flux to automate this release.
-When an image tag in the semver range of `1.7.0 - 1.7.99` is pushed to Quay,
+When an image tag in the semver range of `2.0.0 - 2.0.99` is pushed to Quay,
 Flux will upgrade the Helm release and from there Flagger will pick up the change and start a canary deployment.
 
 Install [Weave Flux](https://github.com/weaveworks/flux) and its Helm Operator by specifying your Git repo URL:
 
 ```bash
-helm repo add weaveworks https://weaveworks.github.io/flux
+helm repo add fluxcd https://charts.fluxcd.io
 
 helm install --name flux \
 --set helmOperator.create=true \
 --set helmOperator.createCRD=true \
 --set git.url=git@github.com:<USERNAME>/<REPOSITORY> \
---namespace flux \
-weaveworks/flux
+--namespace fluxcd \
+fluxcd/flux
 ```
 
 At startup Flux generates an SSH key and logs the public key. Find the SSH public key with:
 
 ```bash
-kubectl -n flux logs deployment/flux | grep identity.pub | cut -d '"' -f2
+kubectl -n fluxcd logs deployment/flux | grep identity.pub | cut -d '"' -f2
 ```
 
 In order to sync your cluster state with Git you need to copy the public key and create a

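The `grep | cut` pipeline above takes the second double-quoted field of the matching log line. A sketch against a canned log line (the line shape and key material are placeholders, not real Flux output):

```shell
# A canned log line shaped like Flux's startup output; the key is a placeholder.
log='ts=2019-01-01T00:00:00Z identity.pub="ssh-rsa AAAAB3Nza fluxkey"'

# Same extraction as in the docs: match the line, then split on double
# quotes and take the second field, which is the public key itself.
key=$(echo "$log" | grep identity.pub | cut -d '"' -f2)
echo "$key"
```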
@@ -344,9 +344,9 @@ launch the `frontend` and `backend` apps.
 
 A CI/CD pipeline for the `frontend` release could look like this:
 
-* cut a release from the master branch of the podinfo code repo with the git tag `1.7.1`
-* CI builds the image and pushes the `podinfo:1.7.1` image to the container registry
-* Flux scans the registry and updates the Helm release `image.tag` to `1.7.1`
+* cut a release from the master branch of the podinfo code repo with the git tag `2.0.1`
+* CI builds the image and pushes the `podinfo:2.0.1` image to the container registry
+* Flux scans the registry and updates the Helm release `image.tag` to `2.0.1`
 * Flux commits and pushes the change to the cluster repo
 * Flux applies the updated Helm release on the cluster
 * Flux Helm Operator picks up the change and calls Tiller to upgrade the release

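The release-cutting steps in that pipeline can be sketched as a shell script. The commands are only echoed here, since actually building and pushing requires a real checkout and registry login:

```shell
# Illustrative release step for the 2.0.1 patch release.
# Echo-only sketch: substitute your own repo and registry before running for real.
VERSION=2.0.1
IMAGE=stefanprodan/podinfo

echo "git tag ${VERSION} && git push origin ${VERSION}"
echo "docker build -t ${IMAGE}:${VERSION} ."
echo "docker push ${IMAGE}:${VERSION}"
```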
@@ -356,7 +356,7 @@ A CI/CD pipeline for the `frontend` release could look like this:
 * Based on the analysis result the canary deployment is promoted to production or rolled back
 * Flagger sends a Slack notification with the canary result
 
-If the canary fails, fix the bug, do another patch release e.g. `1.7.2` and the whole process will run again.
+If the canary fails, fix the bug, do another patch release e.g. `2.0.2` and the whole process will run again.
 
 A canary deployment can fail due to any of the following reasons:
 

@@ -131,7 +131,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/abtest \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -178,7 +178,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -241,7 +241,7 @@ Trigger a canary deployment:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.4.2
+podinfod=stefanprodan/podinfo:2.0.2
 ```
 
 Exec into the load tester pod with:

@@ -172,7 +172,7 @@ Trigger a deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -297,7 +297,7 @@ Trigger a deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.3
+podinfod=stefanprodan/podinfo:2.0.3
 ```
 
 Generate 404s:

@@ -197,7 +197,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -251,7 +251,7 @@ Trigger another canary deployment:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.4.2
+podinfod=stefanprodan/podinfo:2.0.2
 ```
 
 Generate HTTP 500 errors:

@@ -334,7 +334,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.4.3
+podinfod=stefanprodan/podinfo:2.0.3
 ```
 
 Generate 404s:

@@ -362,5 +362,3 @@ Canary failed! Scaling down podinfo.test
 ```
 
 If you have Slack configured, Flagger will send a notification with the reason why the canary failed.
-
-

@@ -150,7 +150,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -208,7 +208,7 @@ Trigger another canary deployment:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.2
+podinfod=stefanprodan/podinfo:2.0.2
 ```
 
 Exec into the load tester pod with:

@@ -297,7 +297,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.3
+podinfod=stefanprodan/podinfo:2.0.3
 ```
 
 Generate 404s:

@@ -444,7 +444,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.4
+podinfod=stefanprodan/podinfo:2.0.4
 ```
 
 Flagger detects that the deployment revision changed and starts the A/B testing:

@@ -189,7 +189,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -243,7 +243,7 @@ Trigger another canary deployment:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.4.2
+podinfod=stefanprodan/podinfo:2.0.2
 ```
 
 Generate HTTP 500 errors:

@@ -313,7 +313,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.4.3
+podinfod=stefanprodan/podinfo:2.0.3
 ```
 
 Generate high response latency:

@@ -387,7 +387,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.5.0
+podinfod=stefanprodan/podinfo:2.0.4
 ```
 
 Flagger detects that the deployment revision changed and starts the A/B testing:

@@ -79,8 +79,15 @@ spec:
     # milliseconds
     threshold: 500
     interval: 30s
-    # generate traffic during analysis
+    # testing (optional)
     webhooks:
+    - name: acceptance-test
+      type: pre-rollout
+      url: http://flagger-loadtester.test/
+      timeout: 30s
+      metadata:
+        type: bash
+        cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token"
     - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s

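The `acceptance-test` webhook above gates the rollout on the command's exit code: it passes only when the response body contains `token`. A local sketch of that check, with a canned response standing in for the live `podinfo-canary` service:

```shell
# Canned response standing in for:
#   curl -sd 'test' http://podinfo-canary:9898/token
# (the token value is made up for illustration).
response='{"token":"head.body.sig","expires_at":"2019-01-01T00:00:00Z"}'

# Same check the webhook runs: grep exits non-zero when "token" is
# absent, which would fail the pre-rollout gate and halt the canary.
if echo "$response" | grep -q token; then
  result="acceptance test passed"
else
  result="acceptance test failed"
fi
echo "$result"
```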
@@ -94,6 +101,9 @@ Save the above resource as podinfo-canary.yaml and then apply it:
 kubectl apply -f ./podinfo-canary.yaml
 ```
 
+When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary.
+The canary analysis will run for five minutes while validating the HTTP metrics and rollout hooks every minute.
+
 After a couple of seconds Flagger will create the canary objects:
 
 ```bash

@@ -119,7 +129,7 @@ Trigger a canary deployment by updating the container image:
 
 ```bash
 kubectl -n test set image deployment/podinfo \
-podinfod=quay.io/stefanprodan/podinfo:1.7.1
+podinfod=stefanprodan/podinfo:2.0.1
 ```
 
 Flagger detects that the deployment revision changed and starts a new rollout:

@@ -169,14 +179,17 @@ prod backend Failed 0 2019-01-14T17:05:07Z
 
 During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses the rollout.
 
-Create a tester pod and exec into it:
+Trigger another canary deployment:
 
 ```bash
-kubectl -n test run tester \
---image=quay.io/stefanprodan/podinfo:1.2.1 \
--- ./podinfo --port=9898
+kubectl -n test set image deployment/podinfo \
+podinfod=stefanprodan/podinfo:2.0.2
 ```
 
-kubectl -n test exec -it tester-xx-xx sh
+Exec into the load tester pod with:
+
+```bash
+kubectl -n test exec -it flagger-loadtester-xx-xx sh
+```
 
 Generate HTTP 500 errors:

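Flagger pauses and eventually rolls back the canary when the measured success rate drops below the configured threshold. A toy calculation of that rate over a handful of simulated responses (the status codes here are made up, not real canary traffic):

```shell
# Simulated HTTP status codes observed from the canary during analysis.
codes="200 200 500 200 500"

total=0; errors=0
for c in $codes; do
  total=$((total + 1))
  # Count 5xx responses as failures.
  case "$c" in 5*) errors=$((errors + 1)) ;; esac
done

# Success rate as an integer percentage; Flagger compares this
# against the canary's success-rate threshold.
success=$(( (total - errors) * 100 / total ))
echo "success rate: ${success}%"
```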
@@ -216,4 +229,3 @@ Events:
   Warning Synced 1m flagger Rolling back podinfo.test failed checks threshold reached 10
   Warning Synced 1m flagger Canary failed! Scaling down podinfo.test
 ```
-

@@ -24,7 +24,7 @@ spec:
     spec:
       containers:
       - name: podinfod
-        image: quay.io/stefanprodan/podinfo:1.7.0
+        image: stefanprodan/podinfo:2.0.0
         imagePullPolicy: IfNotPresent
         ports:
         - containerPort: 9898